Test Report: KVM_Linux_crio 18485

bdd124d1e5a6e86e5bd4f9e512befe1eefe531bd:2024-03-28:33775

Failed tests (30/319)

Order  Failed test  Duration (s)
39 TestAddons/parallel/Ingress 156.82
53 TestAddons/StoppedEnableDisable 154.34
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 6.64
172 TestMultiControlPlane/serial/StopSecondaryNode 142.36
174 TestMultiControlPlane/serial/RestartSecondaryNode 56.4
176 TestMultiControlPlane/serial/RestartClusterKeepsNodes 397.58
179 TestMultiControlPlane/serial/StopCluster 142.25
239 TestMultiNode/serial/RestartKeepsNodes 313.01
241 TestMultiNode/serial/StopMultiNode 141.58
248 TestPreload 220.73
256 TestKubernetesUpgrade 393.51
290 TestPause/serial/SecondStartNoReconfiguration 100.49
327 TestStartStop/group/old-k8s-version/serial/FirstStart 284.12
347 TestStartStop/group/no-preload/serial/Stop 139.17
350 TestStartStop/group/embed-certs/serial/Stop 139.03
363 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.42
364 TestStartStop/group/old-k8s-version/serial/DeployApp 0.55
365 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 96.91
367 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.2
368 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.39
373 TestStartStop/group/old-k8s-version/serial/SecondStart 720.5
374 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.42
376 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.34
377 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.21
378 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.24
379 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.49
380 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 421.57
381 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 520
382 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 278.49
383 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 156.37
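For context, the entries above are Go integration tests; a minimal sketch of re-running one of them locally is shown below. It assumes a minikube source checkout, and the -minikube-start-args flag name (mirroring this job's kvm2 driver and crio runtime) is an assumption about the test harness rather than something taken from this report; only go test's own -run, -timeout, and -args flags are standard.

	# Assumption: run from the root of a minikube checkout with libvirt/kvm2 available.
	# -run filters to a single failing test; the argument after -args is forwarded to the test binary.
	go test ./test/integration -v -timeout 60m \
	    -run "TestAddons/parallel/Ingress" \
	    -args --minikube-start-args="--driver=kvm2 --container-runtime=crio"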
TestAddons/parallel/Ingress (156.82s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-910864 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
2024/03/27 23:36:31 [DEBUG] GET http://192.168.39.45:5000
addons_test.go:232: (dbg) Run:  kubectl --context addons-910864 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-910864 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9f410ff0-6c14-4608-abbb-a000665e8f49] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9f410ff0-6c14-4608-abbb-a000665e8f49] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.005456717s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-910864 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910864 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.721066285s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-910864 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-910864 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.45
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-910864 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-910864 addons disable ingress-dns --alsologtostderr -v=1: (1.585669655s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-910864 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-910864 addons disable ingress --alsologtostderr -v=1: (8.120102691s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-910864 -n addons-910864
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-910864 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-910864 logs -n 25: (1.51277698s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p download-only-811387                                                                     | download-only-811387 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:33 UTC | 27 Mar 24 23:33 UTC |
	| delete  | -p download-only-441167                                                                     | download-only-441167 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:33 UTC | 27 Mar 24 23:33 UTC |
	| delete  | -p download-only-412310                                                                     | download-only-412310 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:33 UTC | 27 Mar 24 23:33 UTC |
	| delete  | -p download-only-811387                                                                     | download-only-811387 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:33 UTC | 27 Mar 24 23:33 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-751324 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:33 UTC |                     |
	|         | binary-mirror-751324                                                                        |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |                |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |                |                     |                     |
	|         | http://127.0.0.1:45193                                                                      |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |                |                     |                     |
	| delete  | -p binary-mirror-751324                                                                     | binary-mirror-751324 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:33 UTC | 27 Mar 24 23:33 UTC |
	| addons  | enable dashboard -p                                                                         | addons-910864        | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:33 UTC |                     |
	|         | addons-910864                                                                               |                      |         |                |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-910864        | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:33 UTC |                     |
	|         | addons-910864                                                                               |                      |         |                |                     |                     |
	| start   | -p addons-910864 --wait=true                                                                | addons-910864        | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:33 UTC | 27 Mar 24 23:36 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |                |                     |                     |
	|         | --addons=registry                                                                           |                      |         |                |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |                |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |                |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |                |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |                |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |                |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |                |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |                |                     |                     |
	| addons  | addons-910864 addons                                                                        | addons-910864        | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:36 UTC | 27 Mar 24 23:36 UTC |
	|         | disable metrics-server                                                                      |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-910864        | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:36 UTC | 27 Mar 24 23:36 UTC |
	|         | addons-910864                                                                               |                      |         |                |                     |                     |
	| ip      | addons-910864 ip                                                                            | addons-910864        | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:36 UTC | 27 Mar 24 23:36 UTC |
	| addons  | addons-910864 addons disable                                                                | addons-910864        | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:36 UTC | 27 Mar 24 23:36 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | addons-910864 addons disable                                                                | addons-910864        | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:36 UTC | 27 Mar 24 23:36 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-910864        | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:36 UTC | 27 Mar 24 23:36 UTC |
	|         | -p addons-910864                                                                            |                      |         |                |                     |                     |
	| ssh     | addons-910864 ssh curl -s                                                                   | addons-910864        | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:36 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |                |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |                |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-910864        | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:36 UTC | 27 Mar 24 23:36 UTC |
	|         | addons-910864                                                                               |                      |         |                |                     |                     |
	| ssh     | addons-910864 ssh cat                                                                       | addons-910864        | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:36 UTC | 27 Mar 24 23:36 UTC |
	|         | /opt/local-path-provisioner/pvc-5d867087-2511-4b36-8e94-b5e7118d57da_default_test-pvc/file1 |                      |         |                |                     |                     |
	| addons  | addons-910864 addons disable                                                                | addons-910864        | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:36 UTC | 27 Mar 24 23:36 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | enable headlamp                                                                             | addons-910864        | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:36 UTC | 27 Mar 24 23:36 UTC |
	|         | -p addons-910864                                                                            |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | addons-910864 addons                                                                        | addons-910864        | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:37 UTC | 27 Mar 24 23:37 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | addons-910864 addons                                                                        | addons-910864        | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:37 UTC | 27 Mar 24 23:37 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| ip      | addons-910864 ip                                                                            | addons-910864        | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:38 UTC | 27 Mar 24 23:38 UTC |
	| addons  | addons-910864 addons disable                                                                | addons-910864        | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:38 UTC | 27 Mar 24 23:38 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | addons-910864 addons disable                                                                | addons-910864        | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:38 UTC | 27 Mar 24 23:39 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 23:33:42
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 23:33:42.890990 1077345 out.go:291] Setting OutFile to fd 1 ...
	I0327 23:33:42.891109 1077345 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:33:42.891114 1077345 out.go:304] Setting ErrFile to fd 2...
	I0327 23:33:42.891118 1077345 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:33:42.891336 1077345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0327 23:33:42.891976 1077345 out.go:298] Setting JSON to false
	I0327 23:33:42.892976 1077345 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":26120,"bootTime":1711556303,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0327 23:33:42.893044 1077345 start.go:139] virtualization: kvm guest
	I0327 23:33:42.895307 1077345 out.go:177] * [addons-910864] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0327 23:33:42.897223 1077345 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 23:33:42.897303 1077345 notify.go:220] Checking for updates...
	I0327 23:33:42.899414 1077345 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 23:33:42.900733 1077345 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0327 23:33:42.902189 1077345 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	I0327 23:33:42.903522 1077345 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0327 23:33:42.904910 1077345 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 23:33:42.906307 1077345 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 23:33:42.937364 1077345 out.go:177] * Using the kvm2 driver based on user configuration
	I0327 23:33:42.938876 1077345 start.go:297] selected driver: kvm2
	I0327 23:33:42.938942 1077345 start.go:901] validating driver "kvm2" against <nil>
	I0327 23:33:42.939180 1077345 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 23:33:42.940325 1077345 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 23:33:42.940455 1077345 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18485-1069254/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0327 23:33:42.955964 1077345 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0327 23:33:42.956036 1077345 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 23:33:42.956231 1077345 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 23:33:42.956292 1077345 cni.go:84] Creating CNI manager for ""
	I0327 23:33:42.956306 1077345 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0327 23:33:42.956315 1077345 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 23:33:42.956372 1077345 start.go:340] cluster config:
	{Name:addons-910864 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-910864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 23:33:42.956481 1077345 iso.go:125] acquiring lock: {Name:mk3da1fa7d63a581c817f327d86a827697458cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 23:33:42.958357 1077345 out.go:177] * Starting "addons-910864" primary control-plane node in "addons-910864" cluster
	I0327 23:33:42.959737 1077345 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0327 23:33:42.959799 1077345 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0327 23:33:42.959813 1077345 cache.go:56] Caching tarball of preloaded images
	I0327 23:33:42.959891 1077345 preload.go:173] Found /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0327 23:33:42.959903 1077345 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0327 23:33:42.960217 1077345 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/config.json ...
	I0327 23:33:42.960253 1077345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/config.json: {Name:mk2fdb4b1842954877b696ff695f394ccd8d8605 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:33:42.960406 1077345 start.go:360] acquireMachinesLock for addons-910864: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 23:33:42.960468 1077345 start.go:364] duration metric: took 45.935µs to acquireMachinesLock for "addons-910864"
	I0327 23:33:42.960496 1077345 start.go:93] Provisioning new machine with config: &{Name:addons-910864 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.3 ClusterName:addons-910864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0327 23:33:42.960574 1077345 start.go:125] createHost starting for "" (driver="kvm2")
	I0327 23:33:42.962336 1077345 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0327 23:33:42.962491 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:33:42.962539 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:33:42.977051 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39185
	I0327 23:33:42.977595 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:33:42.978264 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:33:42.978292 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:33:42.978634 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:33:42.978798 1077345 main.go:141] libmachine: (addons-910864) Calling .GetMachineName
	I0327 23:33:42.978933 1077345 main.go:141] libmachine: (addons-910864) Calling .DriverName
	I0327 23:33:42.979087 1077345 start.go:159] libmachine.API.Create for "addons-910864" (driver="kvm2")
	I0327 23:33:42.979121 1077345 client.go:168] LocalClient.Create starting
	I0327 23:33:42.979167 1077345 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem
	I0327 23:33:43.165180 1077345 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem
	I0327 23:33:43.353878 1077345 main.go:141] libmachine: Running pre-create checks...
	I0327 23:33:43.353908 1077345 main.go:141] libmachine: (addons-910864) Calling .PreCreateCheck
	I0327 23:33:43.354450 1077345 main.go:141] libmachine: (addons-910864) Calling .GetConfigRaw
	I0327 23:33:43.354956 1077345 main.go:141] libmachine: Creating machine...
	I0327 23:33:43.354973 1077345 main.go:141] libmachine: (addons-910864) Calling .Create
	I0327 23:33:43.355126 1077345 main.go:141] libmachine: (addons-910864) Creating KVM machine...
	I0327 23:33:43.356488 1077345 main.go:141] libmachine: (addons-910864) DBG | found existing default KVM network
	I0327 23:33:43.357406 1077345 main.go:141] libmachine: (addons-910864) DBG | I0327 23:33:43.357252 1077367 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f350}
	I0327 23:33:43.357474 1077345 main.go:141] libmachine: (addons-910864) DBG | created network xml: 
	I0327 23:33:43.357495 1077345 main.go:141] libmachine: (addons-910864) DBG | <network>
	I0327 23:33:43.357502 1077345 main.go:141] libmachine: (addons-910864) DBG |   <name>mk-addons-910864</name>
	I0327 23:33:43.357510 1077345 main.go:141] libmachine: (addons-910864) DBG |   <dns enable='no'/>
	I0327 23:33:43.357515 1077345 main.go:141] libmachine: (addons-910864) DBG |   
	I0327 23:33:43.357529 1077345 main.go:141] libmachine: (addons-910864) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0327 23:33:43.357544 1077345 main.go:141] libmachine: (addons-910864) DBG |     <dhcp>
	I0327 23:33:43.357558 1077345 main.go:141] libmachine: (addons-910864) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0327 23:33:43.357569 1077345 main.go:141] libmachine: (addons-910864) DBG |     </dhcp>
	I0327 23:33:43.357574 1077345 main.go:141] libmachine: (addons-910864) DBG |   </ip>
	I0327 23:33:43.357580 1077345 main.go:141] libmachine: (addons-910864) DBG |   
	I0327 23:33:43.357587 1077345 main.go:141] libmachine: (addons-910864) DBG | </network>
	I0327 23:33:43.357594 1077345 main.go:141] libmachine: (addons-910864) DBG | 
	I0327 23:33:43.362905 1077345 main.go:141] libmachine: (addons-910864) DBG | trying to create private KVM network mk-addons-910864 192.168.39.0/24...
	I0327 23:33:43.431942 1077345 main.go:141] libmachine: (addons-910864) DBG | private KVM network mk-addons-910864 192.168.39.0/24 created
	I0327 23:33:43.431990 1077345 main.go:141] libmachine: (addons-910864) DBG | I0327 23:33:43.431856 1077367 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18485-1069254/.minikube
	I0327 23:33:43.432014 1077345 main.go:141] libmachine: (addons-910864) Setting up store path in /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/addons-910864 ...
	I0327 23:33:43.432036 1077345 main.go:141] libmachine: (addons-910864) Building disk image from file:///home/jenkins/minikube-integration/18485-1069254/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso
	I0327 23:33:43.432050 1077345 main.go:141] libmachine: (addons-910864) Downloading /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18485-1069254/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0327 23:33:43.696365 1077345 main.go:141] libmachine: (addons-910864) DBG | I0327 23:33:43.696182 1077367 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/addons-910864/id_rsa...
	I0327 23:33:44.078742 1077345 main.go:141] libmachine: (addons-910864) DBG | I0327 23:33:44.078569 1077367 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/addons-910864/addons-910864.rawdisk...
	I0327 23:33:44.078772 1077345 main.go:141] libmachine: (addons-910864) DBG | Writing magic tar header
	I0327 23:33:44.078783 1077345 main.go:141] libmachine: (addons-910864) DBG | Writing SSH key tar header
	I0327 23:33:44.078792 1077345 main.go:141] libmachine: (addons-910864) DBG | I0327 23:33:44.078698 1077367 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/addons-910864 ...
	I0327 23:33:44.078808 1077345 main.go:141] libmachine: (addons-910864) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/addons-910864
	I0327 23:33:44.078902 1077345 main.go:141] libmachine: (addons-910864) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines
	I0327 23:33:44.078930 1077345 main.go:141] libmachine: (addons-910864) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254/.minikube
	I0327 23:33:44.078944 1077345 main.go:141] libmachine: (addons-910864) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/addons-910864 (perms=drwx------)
	I0327 23:33:44.078968 1077345 main.go:141] libmachine: (addons-910864) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254/.minikube/machines (perms=drwxr-xr-x)
	I0327 23:33:44.078980 1077345 main.go:141] libmachine: (addons-910864) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254/.minikube (perms=drwxr-xr-x)
	I0327 23:33:44.078993 1077345 main.go:141] libmachine: (addons-910864) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254 (perms=drwxrwxr-x)
	I0327 23:33:44.079008 1077345 main.go:141] libmachine: (addons-910864) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0327 23:33:44.079025 1077345 main.go:141] libmachine: (addons-910864) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0327 23:33:44.079040 1077345 main.go:141] libmachine: (addons-910864) Creating domain...
	I0327 23:33:44.079052 1077345 main.go:141] libmachine: (addons-910864) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254
	I0327 23:33:44.079067 1077345 main.go:141] libmachine: (addons-910864) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0327 23:33:44.079077 1077345 main.go:141] libmachine: (addons-910864) DBG | Checking permissions on dir: /home/jenkins
	I0327 23:33:44.079092 1077345 main.go:141] libmachine: (addons-910864) DBG | Checking permissions on dir: /home
	I0327 23:33:44.079102 1077345 main.go:141] libmachine: (addons-910864) DBG | Skipping /home - not owner
	I0327 23:33:44.080359 1077345 main.go:141] libmachine: (addons-910864) define libvirt domain using xml: 
	I0327 23:33:44.080400 1077345 main.go:141] libmachine: (addons-910864) <domain type='kvm'>
	I0327 23:33:44.080412 1077345 main.go:141] libmachine: (addons-910864)   <name>addons-910864</name>
	I0327 23:33:44.080419 1077345 main.go:141] libmachine: (addons-910864)   <memory unit='MiB'>4000</memory>
	I0327 23:33:44.080430 1077345 main.go:141] libmachine: (addons-910864)   <vcpu>2</vcpu>
	I0327 23:33:44.080438 1077345 main.go:141] libmachine: (addons-910864)   <features>
	I0327 23:33:44.080447 1077345 main.go:141] libmachine: (addons-910864)     <acpi/>
	I0327 23:33:44.080460 1077345 main.go:141] libmachine: (addons-910864)     <apic/>
	I0327 23:33:44.080487 1077345 main.go:141] libmachine: (addons-910864)     <pae/>
	I0327 23:33:44.080508 1077345 main.go:141] libmachine: (addons-910864)     
	I0327 23:33:44.080527 1077345 main.go:141] libmachine: (addons-910864)   </features>
	I0327 23:33:44.080533 1077345 main.go:141] libmachine: (addons-910864)   <cpu mode='host-passthrough'>
	I0327 23:33:44.080538 1077345 main.go:141] libmachine: (addons-910864)   
	I0327 23:33:44.080543 1077345 main.go:141] libmachine: (addons-910864)   </cpu>
	I0327 23:33:44.080552 1077345 main.go:141] libmachine: (addons-910864)   <os>
	I0327 23:33:44.080557 1077345 main.go:141] libmachine: (addons-910864)     <type>hvm</type>
	I0327 23:33:44.080565 1077345 main.go:141] libmachine: (addons-910864)     <boot dev='cdrom'/>
	I0327 23:33:44.080569 1077345 main.go:141] libmachine: (addons-910864)     <boot dev='hd'/>
	I0327 23:33:44.080605 1077345 main.go:141] libmachine: (addons-910864)     <bootmenu enable='no'/>
	I0327 23:33:44.080629 1077345 main.go:141] libmachine: (addons-910864)   </os>
	I0327 23:33:44.080640 1077345 main.go:141] libmachine: (addons-910864)   <devices>
	I0327 23:33:44.080658 1077345 main.go:141] libmachine: (addons-910864)     <disk type='file' device='cdrom'>
	I0327 23:33:44.080676 1077345 main.go:141] libmachine: (addons-910864)       <source file='/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/addons-910864/boot2docker.iso'/>
	I0327 23:33:44.080690 1077345 main.go:141] libmachine: (addons-910864)       <target dev='hdc' bus='scsi'/>
	I0327 23:33:44.080705 1077345 main.go:141] libmachine: (addons-910864)       <readonly/>
	I0327 23:33:44.080722 1077345 main.go:141] libmachine: (addons-910864)     </disk>
	I0327 23:33:44.080743 1077345 main.go:141] libmachine: (addons-910864)     <disk type='file' device='disk'>
	I0327 23:33:44.080761 1077345 main.go:141] libmachine: (addons-910864)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0327 23:33:44.080781 1077345 main.go:141] libmachine: (addons-910864)       <source file='/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/addons-910864/addons-910864.rawdisk'/>
	I0327 23:33:44.080793 1077345 main.go:141] libmachine: (addons-910864)       <target dev='hda' bus='virtio'/>
	I0327 23:33:44.080805 1077345 main.go:141] libmachine: (addons-910864)     </disk>
	I0327 23:33:44.080816 1077345 main.go:141] libmachine: (addons-910864)     <interface type='network'>
	I0327 23:33:44.080829 1077345 main.go:141] libmachine: (addons-910864)       <source network='mk-addons-910864'/>
	I0327 23:33:44.080842 1077345 main.go:141] libmachine: (addons-910864)       <model type='virtio'/>
	I0327 23:33:44.080852 1077345 main.go:141] libmachine: (addons-910864)     </interface>
	I0327 23:33:44.080862 1077345 main.go:141] libmachine: (addons-910864)     <interface type='network'>
	I0327 23:33:44.080868 1077345 main.go:141] libmachine: (addons-910864)       <source network='default'/>
	I0327 23:33:44.080875 1077345 main.go:141] libmachine: (addons-910864)       <model type='virtio'/>
	I0327 23:33:44.080885 1077345 main.go:141] libmachine: (addons-910864)     </interface>
	I0327 23:33:44.080892 1077345 main.go:141] libmachine: (addons-910864)     <serial type='pty'>
	I0327 23:33:44.080897 1077345 main.go:141] libmachine: (addons-910864)       <target port='0'/>
	I0327 23:33:44.080904 1077345 main.go:141] libmachine: (addons-910864)     </serial>
	I0327 23:33:44.080916 1077345 main.go:141] libmachine: (addons-910864)     <console type='pty'>
	I0327 23:33:44.080924 1077345 main.go:141] libmachine: (addons-910864)       <target type='serial' port='0'/>
	I0327 23:33:44.080929 1077345 main.go:141] libmachine: (addons-910864)     </console>
	I0327 23:33:44.080940 1077345 main.go:141] libmachine: (addons-910864)     <rng model='virtio'>
	I0327 23:33:44.080947 1077345 main.go:141] libmachine: (addons-910864)       <backend model='random'>/dev/random</backend>
	I0327 23:33:44.080958 1077345 main.go:141] libmachine: (addons-910864)     </rng>
	I0327 23:33:44.080973 1077345 main.go:141] libmachine: (addons-910864)     
	I0327 23:33:44.080990 1077345 main.go:141] libmachine: (addons-910864)     
	I0327 23:33:44.081002 1077345 main.go:141] libmachine: (addons-910864)   </devices>
	I0327 23:33:44.081013 1077345 main.go:141] libmachine: (addons-910864) </domain>
	I0327 23:33:44.081025 1077345 main.go:141] libmachine: (addons-910864) 
	I0327 23:33:44.085416 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:70:09:92 in network default
	I0327 23:33:44.086043 1077345 main.go:141] libmachine: (addons-910864) Ensuring networks are active...
	I0327 23:33:44.086081 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:33:44.086754 1077345 main.go:141] libmachine: (addons-910864) Ensuring network default is active
	I0327 23:33:44.087008 1077345 main.go:141] libmachine: (addons-910864) Ensuring network mk-addons-910864 is active
	I0327 23:33:44.087609 1077345 main.go:141] libmachine: (addons-910864) Getting domain xml...
	I0327 23:33:44.088479 1077345 main.go:141] libmachine: (addons-910864) Creating domain...
	I0327 23:33:45.303161 1077345 main.go:141] libmachine: (addons-910864) Waiting to get IP...
	I0327 23:33:45.303956 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:33:45.304395 1077345 main.go:141] libmachine: (addons-910864) DBG | unable to find current IP address of domain addons-910864 in network mk-addons-910864
	I0327 23:33:45.304439 1077345 main.go:141] libmachine: (addons-910864) DBG | I0327 23:33:45.304375 1077367 retry.go:31] will retry after 305.182683ms: waiting for machine to come up
	I0327 23:33:45.611032 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:33:45.611504 1077345 main.go:141] libmachine: (addons-910864) DBG | unable to find current IP address of domain addons-910864 in network mk-addons-910864
	I0327 23:33:45.611538 1077345 main.go:141] libmachine: (addons-910864) DBG | I0327 23:33:45.611462 1077367 retry.go:31] will retry after 295.780196ms: waiting for machine to come up
	I0327 23:33:45.908987 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:33:45.909479 1077345 main.go:141] libmachine: (addons-910864) DBG | unable to find current IP address of domain addons-910864 in network mk-addons-910864
	I0327 23:33:45.909512 1077345 main.go:141] libmachine: (addons-910864) DBG | I0327 23:33:45.909431 1077367 retry.go:31] will retry after 413.71948ms: waiting for machine to come up
	I0327 23:33:46.324932 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:33:46.325288 1077345 main.go:141] libmachine: (addons-910864) DBG | unable to find current IP address of domain addons-910864 in network mk-addons-910864
	I0327 23:33:46.325325 1077345 main.go:141] libmachine: (addons-910864) DBG | I0327 23:33:46.325247 1077367 retry.go:31] will retry after 442.92756ms: waiting for machine to come up
	I0327 23:33:46.769969 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:33:46.770469 1077345 main.go:141] libmachine: (addons-910864) DBG | unable to find current IP address of domain addons-910864 in network mk-addons-910864
	I0327 23:33:46.770496 1077345 main.go:141] libmachine: (addons-910864) DBG | I0327 23:33:46.770430 1077367 retry.go:31] will retry after 666.211615ms: waiting for machine to come up
	I0327 23:33:47.437966 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:33:47.438328 1077345 main.go:141] libmachine: (addons-910864) DBG | unable to find current IP address of domain addons-910864 in network mk-addons-910864
	I0327 23:33:47.438357 1077345 main.go:141] libmachine: (addons-910864) DBG | I0327 23:33:47.438287 1077367 retry.go:31] will retry after 779.049228ms: waiting for machine to come up
	I0327 23:33:48.219135 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:33:48.219588 1077345 main.go:141] libmachine: (addons-910864) DBG | unable to find current IP address of domain addons-910864 in network mk-addons-910864
	I0327 23:33:48.219616 1077345 main.go:141] libmachine: (addons-910864) DBG | I0327 23:33:48.219539 1077367 retry.go:31] will retry after 767.904216ms: waiting for machine to come up
	I0327 23:33:48.989651 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:33:48.990100 1077345 main.go:141] libmachine: (addons-910864) DBG | unable to find current IP address of domain addons-910864 in network mk-addons-910864
	I0327 23:33:48.990130 1077345 main.go:141] libmachine: (addons-910864) DBG | I0327 23:33:48.990058 1077367 retry.go:31] will retry after 1.479413167s: waiting for machine to come up
	I0327 23:33:50.471856 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:33:50.472301 1077345 main.go:141] libmachine: (addons-910864) DBG | unable to find current IP address of domain addons-910864 in network mk-addons-910864
	I0327 23:33:50.472329 1077345 main.go:141] libmachine: (addons-910864) DBG | I0327 23:33:50.472249 1077367 retry.go:31] will retry after 1.361586319s: waiting for machine to come up
	I0327 23:33:51.835886 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:33:51.836445 1077345 main.go:141] libmachine: (addons-910864) DBG | unable to find current IP address of domain addons-910864 in network mk-addons-910864
	I0327 23:33:51.836475 1077345 main.go:141] libmachine: (addons-910864) DBG | I0327 23:33:51.836393 1077367 retry.go:31] will retry after 1.969565686s: waiting for machine to come up
	I0327 23:33:53.807597 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:33:53.808029 1077345 main.go:141] libmachine: (addons-910864) DBG | unable to find current IP address of domain addons-910864 in network mk-addons-910864
	I0327 23:33:53.808103 1077345 main.go:141] libmachine: (addons-910864) DBG | I0327 23:33:53.808019 1077367 retry.go:31] will retry after 2.90090724s: waiting for machine to come up
	I0327 23:33:56.710365 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:33:56.710839 1077345 main.go:141] libmachine: (addons-910864) DBG | unable to find current IP address of domain addons-910864 in network mk-addons-910864
	I0327 23:33:56.710868 1077345 main.go:141] libmachine: (addons-910864) DBG | I0327 23:33:56.710792 1077367 retry.go:31] will retry after 2.311306104s: waiting for machine to come up
	I0327 23:33:59.024026 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:33:59.024569 1077345 main.go:141] libmachine: (addons-910864) DBG | unable to find current IP address of domain addons-910864 in network mk-addons-910864
	I0327 23:33:59.024599 1077345 main.go:141] libmachine: (addons-910864) DBG | I0327 23:33:59.024510 1077367 retry.go:31] will retry after 3.767451286s: waiting for machine to come up
	I0327 23:34:02.796509 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:02.796909 1077345 main.go:141] libmachine: (addons-910864) DBG | unable to find current IP address of domain addons-910864 in network mk-addons-910864
	I0327 23:34:02.796988 1077345 main.go:141] libmachine: (addons-910864) DBG | I0327 23:34:02.796843 1077367 retry.go:31] will retry after 3.417050441s: waiting for machine to come up
	I0327 23:34:06.215425 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:06.216105 1077345 main.go:141] libmachine: (addons-910864) Found IP for machine: 192.168.39.45
	I0327 23:34:06.216133 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has current primary IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:06.216140 1077345 main.go:141] libmachine: (addons-910864) Reserving static IP address...
	I0327 23:34:06.216490 1077345 main.go:141] libmachine: (addons-910864) DBG | unable to find host DHCP lease matching {name: "addons-910864", mac: "52:54:00:44:91:2e", ip: "192.168.39.45"} in network mk-addons-910864
	I0327 23:34:06.295133 1077345 main.go:141] libmachine: (addons-910864) DBG | Getting to WaitForSSH function...
	I0327 23:34:06.295170 1077345 main.go:141] libmachine: (addons-910864) Reserved static IP address: 192.168.39.45
	I0327 23:34:06.295184 1077345 main.go:141] libmachine: (addons-910864) Waiting for SSH to be available...
	I0327 23:34:06.297940 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:06.298351 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:minikube Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:06.298392 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:06.298484 1077345 main.go:141] libmachine: (addons-910864) DBG | Using SSH client type: external
	I0327 23:34:06.298514 1077345 main.go:141] libmachine: (addons-910864) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/addons-910864/id_rsa (-rw-------)
	I0327 23:34:06.298545 1077345 main.go:141] libmachine: (addons-910864) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.45 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/addons-910864/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0327 23:34:06.298564 1077345 main.go:141] libmachine: (addons-910864) DBG | About to run SSH command:
	I0327 23:34:06.298581 1077345 main.go:141] libmachine: (addons-910864) DBG | exit 0
	I0327 23:34:06.422341 1077345 main.go:141] libmachine: (addons-910864) DBG | SSH cmd err, output: <nil>: 
	I0327 23:34:06.422618 1077345 main.go:141] libmachine: (addons-910864) KVM machine creation complete!
	I0327 23:34:06.422913 1077345 main.go:141] libmachine: (addons-910864) Calling .GetConfigRaw
	I0327 23:34:06.423457 1077345 main.go:141] libmachine: (addons-910864) Calling .DriverName
	I0327 23:34:06.423663 1077345 main.go:141] libmachine: (addons-910864) Calling .DriverName
	I0327 23:34:06.423851 1077345 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0327 23:34:06.423869 1077345 main.go:141] libmachine: (addons-910864) Calling .GetState
	I0327 23:34:06.425141 1077345 main.go:141] libmachine: Detecting operating system of created instance...
	I0327 23:34:06.425158 1077345 main.go:141] libmachine: Waiting for SSH to be available...
	I0327 23:34:06.425165 1077345 main.go:141] libmachine: Getting to WaitForSSH function...
	I0327 23:34:06.425174 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHHostname
	I0327 23:34:06.427496 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:06.427789 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:06.427822 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:06.427937 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHPort
	I0327 23:34:06.428162 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:06.428315 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:06.428435 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHUsername
	I0327 23:34:06.428599 1077345 main.go:141] libmachine: Using SSH client type: native
	I0327 23:34:06.428788 1077345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I0327 23:34:06.428799 1077345 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0327 23:34:06.529776 1077345 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 23:34:06.529809 1077345 main.go:141] libmachine: Detecting the provisioner...
	I0327 23:34:06.529821 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHHostname
	I0327 23:34:06.533053 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:06.533417 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:06.533448 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:06.533631 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHPort
	I0327 23:34:06.533853 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:06.534021 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:06.534209 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHUsername
	I0327 23:34:06.534384 1077345 main.go:141] libmachine: Using SSH client type: native
	I0327 23:34:06.534604 1077345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I0327 23:34:06.534617 1077345 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0327 23:34:06.635164 1077345 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0327 23:34:06.635241 1077345 main.go:141] libmachine: found compatible host: buildroot
	I0327 23:34:06.635248 1077345 main.go:141] libmachine: Provisioning with buildroot...
	I0327 23:34:06.635257 1077345 main.go:141] libmachine: (addons-910864) Calling .GetMachineName
	I0327 23:34:06.635582 1077345 buildroot.go:166] provisioning hostname "addons-910864"
	I0327 23:34:06.635612 1077345 main.go:141] libmachine: (addons-910864) Calling .GetMachineName
	I0327 23:34:06.635806 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHHostname
	I0327 23:34:06.638456 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:06.638807 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:06.638846 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:06.638947 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHPort
	I0327 23:34:06.639149 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:06.639338 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:06.639568 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHUsername
	I0327 23:34:06.639790 1077345 main.go:141] libmachine: Using SSH client type: native
	I0327 23:34:06.639965 1077345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I0327 23:34:06.639983 1077345 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-910864 && echo "addons-910864" | sudo tee /etc/hostname
	I0327 23:34:06.753631 1077345 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-910864
	
	I0327 23:34:06.753668 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHHostname
	I0327 23:34:06.756454 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:06.756822 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:06.756849 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:06.757026 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHPort
	I0327 23:34:06.757307 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:06.757569 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:06.757746 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHUsername
	I0327 23:34:06.757989 1077345 main.go:141] libmachine: Using SSH client type: native
	I0327 23:34:06.758191 1077345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I0327 23:34:06.758210 1077345 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-910864' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-910864/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-910864' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0327 23:34:06.873890 1077345 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 23:34:06.873929 1077345 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0327 23:34:06.873964 1077345 buildroot.go:174] setting up certificates
	I0327 23:34:06.873983 1077345 provision.go:84] configureAuth start
	I0327 23:34:06.874003 1077345 main.go:141] libmachine: (addons-910864) Calling .GetMachineName
	I0327 23:34:06.874386 1077345 main.go:141] libmachine: (addons-910864) Calling .GetIP
	I0327 23:34:06.877457 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:06.878038 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:06.878075 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:06.878242 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHHostname
	I0327 23:34:06.880266 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:06.880590 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:06.880613 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:06.880747 1077345 provision.go:143] copyHostCerts
	I0327 23:34:06.880836 1077345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0327 23:34:06.880947 1077345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0327 23:34:06.881034 1077345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0327 23:34:06.881167 1077345 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.addons-910864 san=[127.0.0.1 192.168.39.45 addons-910864 localhost minikube]
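The server cert above is generated with the SAN list shown (127.0.0.1, 192.168.39.45, addons-910864, localhost, minikube). A hedged way to confirm those SANs on the Jenkins host, assuming openssl is installed, is:

    # hypothetical check of the SANs in the server cert generated above (run on the Jenkins host)
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'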
	I0327 23:34:06.942907 1077345 provision.go:177] copyRemoteCerts
	I0327 23:34:06.942969 1077345 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0327 23:34:06.942995 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHHostname
	I0327 23:34:06.945922 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:06.946273 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:06.946301 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:06.946516 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHPort
	I0327 23:34:06.946735 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:06.946919 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHUsername
	I0327 23:34:06.947045 1077345 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/addons-910864/id_rsa Username:docker}
	I0327 23:34:07.028988 1077345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0327 23:34:07.060202 1077345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0327 23:34:07.087137 1077345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0327 23:34:07.112694 1077345 provision.go:87] duration metric: took 238.687405ms to configureAuth
	I0327 23:34:07.112735 1077345 buildroot.go:189] setting minikube options for container-runtime
	I0327 23:34:07.112920 1077345 config.go:182] Loaded profile config "addons-910864": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 23:34:07.113003 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHHostname
	I0327 23:34:07.116034 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:07.116402 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:07.116450 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:07.116643 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHPort
	I0327 23:34:07.116889 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:07.117061 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:07.117190 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHUsername
	I0327 23:34:07.117450 1077345 main.go:141] libmachine: Using SSH client type: native
	I0327 23:34:07.117662 1077345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I0327 23:34:07.117684 1077345 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0327 23:34:07.392034 1077345 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
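The provisioning step above writes a one-line sysconfig drop-in for CRI-O and restarts the service. A minimal guest-side check (not captured in this run) would be:

    # inspect the CRI-O drop-in written by the step above (run inside the guest VM)
    cat /etc/sysconfig/crio.minikube
    # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    sudo systemctl is-active crio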
	I0327 23:34:07.392066 1077345 main.go:141] libmachine: Checking connection to Docker...
	I0327 23:34:07.392080 1077345 main.go:141] libmachine: (addons-910864) Calling .GetURL
	I0327 23:34:07.393751 1077345 main.go:141] libmachine: (addons-910864) DBG | Using libvirt version 6000000
	I0327 23:34:07.395958 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:07.396302 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:07.396337 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:07.396458 1077345 main.go:141] libmachine: Docker is up and running!
	I0327 23:34:07.396474 1077345 main.go:141] libmachine: Reticulating splines...
	I0327 23:34:07.396483 1077345 client.go:171] duration metric: took 24.417349175s to LocalClient.Create
	I0327 23:34:07.396514 1077345 start.go:167] duration metric: took 24.417425673s to libmachine.API.Create "addons-910864"
	I0327 23:34:07.396537 1077345 start.go:293] postStartSetup for "addons-910864" (driver="kvm2")
	I0327 23:34:07.396554 1077345 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0327 23:34:07.396577 1077345 main.go:141] libmachine: (addons-910864) Calling .DriverName
	I0327 23:34:07.396876 1077345 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0327 23:34:07.396901 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHHostname
	I0327 23:34:07.399041 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:07.399377 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:07.399407 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:07.399483 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHPort
	I0327 23:34:07.399682 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:07.399832 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHUsername
	I0327 23:34:07.399958 1077345 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/addons-910864/id_rsa Username:docker}
	I0327 23:34:07.481179 1077345 ssh_runner.go:195] Run: cat /etc/os-release
	I0327 23:34:07.486583 1077345 info.go:137] Remote host: Buildroot 2023.02.9
	I0327 23:34:07.486609 1077345 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0327 23:34:07.486675 1077345 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0327 23:34:07.486698 1077345 start.go:296] duration metric: took 90.153232ms for postStartSetup
	I0327 23:34:07.486737 1077345 main.go:141] libmachine: (addons-910864) Calling .GetConfigRaw
	I0327 23:34:07.487305 1077345 main.go:141] libmachine: (addons-910864) Calling .GetIP
	I0327 23:34:07.489974 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:07.490315 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:07.490344 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:07.490589 1077345 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/config.json ...
	I0327 23:34:07.490753 1077345 start.go:128] duration metric: took 24.530166819s to createHost
	I0327 23:34:07.490776 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHHostname
	I0327 23:34:07.493034 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:07.493305 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:07.493342 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:07.493473 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHPort
	I0327 23:34:07.493657 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:07.493834 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:07.494000 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHUsername
	I0327 23:34:07.494202 1077345 main.go:141] libmachine: Using SSH client type: native
	I0327 23:34:07.494424 1077345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I0327 23:34:07.494438 1077345 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0327 23:34:07.595394 1077345 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711582447.565758875
	
	I0327 23:34:07.595421 1077345 fix.go:216] guest clock: 1711582447.565758875
	I0327 23:34:07.595429 1077345 fix.go:229] Guest: 2024-03-27 23:34:07.565758875 +0000 UTC Remote: 2024-03-27 23:34:07.490763971 +0000 UTC m=+24.647534309 (delta=74.994904ms)
	I0327 23:34:07.595453 1077345 fix.go:200] guest clock delta is within tolerance: 74.994904ms
	I0327 23:34:07.595460 1077345 start.go:83] releasing machines lock for "addons-910864", held for 24.634976465s
	I0327 23:34:07.595488 1077345 main.go:141] libmachine: (addons-910864) Calling .DriverName
	I0327 23:34:07.595874 1077345 main.go:141] libmachine: (addons-910864) Calling .GetIP
	I0327 23:34:07.598619 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:07.598998 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:07.599030 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:07.599211 1077345 main.go:141] libmachine: (addons-910864) Calling .DriverName
	I0327 23:34:07.599750 1077345 main.go:141] libmachine: (addons-910864) Calling .DriverName
	I0327 23:34:07.599924 1077345 main.go:141] libmachine: (addons-910864) Calling .DriverName
	I0327 23:34:07.599992 1077345 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0327 23:34:07.600055 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHHostname
	I0327 23:34:07.600175 1077345 ssh_runner.go:195] Run: cat /version.json
	I0327 23:34:07.600197 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHHostname
	I0327 23:34:07.602729 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:07.603211 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:07.603241 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:07.603283 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:07.603463 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHPort
	I0327 23:34:07.603672 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:07.603702 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:07.603760 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:07.603870 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHUsername
	I0327 23:34:07.603909 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHPort
	I0327 23:34:07.604013 1077345 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/addons-910864/id_rsa Username:docker}
	I0327 23:34:07.604096 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:07.604220 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHUsername
	I0327 23:34:07.604358 1077345 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/addons-910864/id_rsa Username:docker}
	I0327 23:34:07.710840 1077345 ssh_runner.go:195] Run: systemctl --version
	I0327 23:34:07.717572 1077345 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0327 23:34:07.878116 1077345 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0327 23:34:07.885351 1077345 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0327 23:34:07.885427 1077345 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0327 23:34:07.905381 1077345 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0327 23:34:07.905414 1077345 start.go:494] detecting cgroup driver to use...
	I0327 23:34:07.905490 1077345 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0327 23:34:07.923564 1077345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 23:34:07.939290 1077345 docker.go:217] disabling cri-docker service (if available) ...
	I0327 23:34:07.939362 1077345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0327 23:34:07.954954 1077345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0327 23:34:07.970440 1077345 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0327 23:34:08.089790 1077345 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0327 23:34:08.237794 1077345 docker.go:233] disabling docker service ...
	I0327 23:34:08.237861 1077345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0327 23:34:08.253111 1077345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0327 23:34:08.265948 1077345 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0327 23:34:08.399338 1077345 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0327 23:34:08.525018 1077345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0327 23:34:08.547696 1077345 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 23:34:08.567764 1077345 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0327 23:34:08.567834 1077345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:34:08.579461 1077345 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0327 23:34:08.579526 1077345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:34:08.591016 1077345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:34:08.602554 1077345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:34:08.613848 1077345 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0327 23:34:08.625442 1077345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:34:08.636820 1077345 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:34:08.655474 1077345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:34:08.667349 1077345 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0327 23:34:08.677930 1077345 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0327 23:34:08.678018 1077345 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0327 23:34:08.693610 1077345 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0327 23:34:08.704575 1077345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:34:08.831567 1077345 ssh_runner.go:195] Run: sudo systemctl restart crio
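The sed edits above set the pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl in the CRI-O drop-in before the restart. A sketch of a guest-side verification, with the expected values taken from those commands, is:

    # expected keys in /etc/crio/crio.conf.d/02-crio.conf after the sed edits above (guest-side check)
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.9"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",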
	I0327 23:34:08.973987 1077345 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0327 23:34:08.974091 1077345 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0327 23:34:08.979443 1077345 start.go:562] Will wait 60s for crictl version
	I0327 23:34:08.979527 1077345 ssh_runner.go:195] Run: which crictl
	I0327 23:34:08.983623 1077345 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0327 23:34:09.024043 1077345 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0327 23:34:09.024142 1077345 ssh_runner.go:195] Run: crio --version
	I0327 23:34:09.055122 1077345 ssh_runner.go:195] Run: crio --version
	I0327 23:34:09.086962 1077345 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0327 23:34:09.088248 1077345 main.go:141] libmachine: (addons-910864) Calling .GetIP
	I0327 23:34:09.090835 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:09.091208 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:09.091245 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:09.091436 1077345 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0327 23:34:09.095894 1077345 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
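The one-liner above strips any stale host.minikube.internal entry and appends the gateway IP. A quick guest-side check (hypothetical, not part of this log) is:

    # after the rewrite above, /etc/hosts should carry exactly one entry for host.minikube.internal
    grep 'host.minikube.internal' /etc/hosts
    # 192.168.39.1	host.minikube.internal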
	I0327 23:34:09.108691 1077345 kubeadm.go:877] updating cluster {Name:addons-910864 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-910864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0327 23:34:09.108802 1077345 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0327 23:34:09.108845 1077345 ssh_runner.go:195] Run: sudo crictl images --output json
	I0327 23:34:09.141337 1077345 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0327 23:34:09.141415 1077345 ssh_runner.go:195] Run: which lz4
	I0327 23:34:09.145903 1077345 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0327 23:34:09.150457 1077345 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0327 23:34:09.150488 1077345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0327 23:34:10.611124 1077345 crio.go:462] duration metric: took 1.46526143s to copy over tarball
	I0327 23:34:10.611202 1077345 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0327 23:34:12.989162 1077345 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.37792671s)
	I0327 23:34:12.989197 1077345 crio.go:469] duration metric: took 2.378039063s to extract the tarball
	I0327 23:34:12.989208 1077345 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0327 23:34:13.028029 1077345 ssh_runner.go:195] Run: sudo crictl images --output json
	I0327 23:34:13.071434 1077345 crio.go:514] all images are preloaded for cri-o runtime.
	I0327 23:34:13.071460 1077345 cache_images.go:84] Images are preloaded, skipping loading
	I0327 23:34:13.071470 1077345 kubeadm.go:928] updating node { 192.168.39.45 8443 v1.29.3 crio true true} ...
	I0327 23:34:13.071577 1077345 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-910864 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.45
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:addons-910864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0327 23:34:13.071646 1077345 ssh_runner.go:195] Run: crio config
	I0327 23:34:13.119225 1077345 cni.go:84] Creating CNI manager for ""
	I0327 23:34:13.119248 1077345 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0327 23:34:13.119258 1077345 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0327 23:34:13.119280 1077345 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.45 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-910864 NodeName:addons-910864 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.45"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.45 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0327 23:34:13.119432 1077345 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.45
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-910864"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.45
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.45"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
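The kubeadm/kubelet/kube-proxy config printed above is later written to /var/tmp/minikube/kubeadm.yaml. One way to sanity-check such a config, sketched here with the bundled binary path this run uses, is a dry run:

    # hypothetical dry run of the generated config with the bundled kubeadm (paths as used later in this log)
    sudo /var/lib/minikube/binaries/v1.29.3/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run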
	I0327 23:34:13.119495 1077345 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0327 23:34:13.130767 1077345 binaries.go:44] Found k8s binaries, skipping transfer
	I0327 23:34:13.130848 1077345 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0327 23:34:13.141647 1077345 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0327 23:34:13.160773 1077345 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0327 23:34:13.179840 1077345 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0327 23:34:13.198569 1077345 ssh_runner.go:195] Run: grep 192.168.39.45	control-plane.minikube.internal$ /etc/hosts
	I0327 23:34:13.202716 1077345 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.45	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 23:34:13.215867 1077345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:34:13.353734 1077345 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 23:34:13.372923 1077345 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864 for IP: 192.168.39.45
	I0327 23:34:13.372953 1077345 certs.go:194] generating shared ca certs ...
	I0327 23:34:13.372978 1077345 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:34:13.373171 1077345 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0327 23:34:13.618590 1077345 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt ...
	I0327 23:34:13.618624 1077345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt: {Name:mkfd491caf0a4b563ee5cb8b98ea79a67e901883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:34:13.618800 1077345 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key ...
	I0327 23:34:13.618814 1077345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key: {Name:mk92c0d95665cf01dfae9e1c1e5955fa550470db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:34:13.618886 1077345 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0327 23:34:13.793266 1077345 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt ...
	I0327 23:34:13.793299 1077345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt: {Name:mk7c62917f059122f8eb727c0e5be0e4ffee8269 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:34:13.793461 1077345 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key ...
	I0327 23:34:13.793475 1077345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key: {Name:mkbbf66a35b5629b37ff716ca43df1b8c85bb4ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:34:13.793549 1077345 certs.go:256] generating profile certs ...
	I0327 23:34:13.793612 1077345 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.key
	I0327 23:34:13.793626 1077345 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt with IP's: []
	I0327 23:34:14.103112 1077345 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt ...
	I0327 23:34:14.103152 1077345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: {Name:mke5a52d3638fdff489d37b240fbbaac5aa74554 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:34:14.103318 1077345 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.key ...
	I0327 23:34:14.103329 1077345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.key: {Name:mk08b10030048b331c2a747146feb9b1e03238cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:34:14.103394 1077345 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/apiserver.key.7e9dcc90
	I0327 23:34:14.103412 1077345 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/apiserver.crt.7e9dcc90 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.45]
	I0327 23:34:14.503219 1077345 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/apiserver.crt.7e9dcc90 ...
	I0327 23:34:14.503256 1077345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/apiserver.crt.7e9dcc90: {Name:mk25e670a9270509090e8ee0fb5dd86d2bc39966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:34:14.503415 1077345 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/apiserver.key.7e9dcc90 ...
	I0327 23:34:14.503429 1077345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/apiserver.key.7e9dcc90: {Name:mkc0f8180d1bd9382890b7d98c75224375cd5d45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:34:14.503507 1077345 certs.go:381] copying /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/apiserver.crt.7e9dcc90 -> /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/apiserver.crt
	I0327 23:34:14.503599 1077345 certs.go:385] copying /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/apiserver.key.7e9dcc90 -> /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/apiserver.key
	I0327 23:34:14.503653 1077345 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/proxy-client.key
	I0327 23:34:14.503668 1077345 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/proxy-client.crt with IP's: []
	I0327 23:34:14.546045 1077345 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/proxy-client.crt ...
	I0327 23:34:14.546082 1077345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/proxy-client.crt: {Name:mkeef8e2a01998daba434bc3ddd475224bc86cc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:34:14.546313 1077345 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/proxy-client.key ...
	I0327 23:34:14.546339 1077345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/proxy-client.key: {Name:mkd3ca102037b4e6a934de634eff0bb06d013c5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:34:14.546529 1077345 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0327 23:34:14.546577 1077345 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0327 23:34:14.546607 1077345 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0327 23:34:14.546629 1077345 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0327 23:34:14.547314 1077345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0327 23:34:14.578828 1077345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0327 23:34:14.605255 1077345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0327 23:34:14.630472 1077345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0327 23:34:14.656561 1077345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0327 23:34:14.683436 1077345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0327 23:34:14.709388 1077345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0327 23:34:14.734116 1077345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0327 23:34:14.761651 1077345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0327 23:34:14.787805 1077345 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0327 23:34:14.805394 1077345 ssh_runner.go:195] Run: openssl version
	I0327 23:34:14.811253 1077345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0327 23:34:14.823208 1077345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:34:14.828065 1077345 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:34:14.828121 1077345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:34:14.833997 1077345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
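The symlink created above gets its name from the CA's OpenSSL subject hash (the -hash invocation two lines earlier) plus a ".0" suffix. A sketch of how the pieces fit, using the values seen in this log, is:

    # the symlink name is the OpenSSL subject hash of the CA plus a ".0" suffix
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941
    ls -l /etc/ssl/certs/b5213941.0
    # ... /etc/ssl/certs/b5213941.0 -> /etc/ssl/certs/minikubeCA.pem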
	I0327 23:34:14.846532 1077345 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0327 23:34:14.851023 1077345 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0327 23:34:14.851080 1077345 kubeadm.go:391] StartCluster: {Name:addons-910864 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-910864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 23:34:14.851172 1077345 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0327 23:34:14.851234 1077345 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0327 23:34:14.900550 1077345 cri.go:89] found id: ""
	I0327 23:34:14.900644 1077345 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0327 23:34:14.919094 1077345 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 23:34:14.941003 1077345 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 23:34:14.957258 1077345 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0327 23:34:14.957288 1077345 kubeadm.go:156] found existing configuration files:
	
	I0327 23:34:14.957351 1077345 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0327 23:34:14.975493 1077345 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0327 23:34:14.975568 1077345 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 23:34:14.986692 1077345 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0327 23:34:14.997138 1077345 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0327 23:34:14.997215 1077345 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 23:34:15.008022 1077345 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0327 23:34:15.018647 1077345 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0327 23:34:15.018714 1077345 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 23:34:15.029347 1077345 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0327 23:34:15.039334 1077345 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0327 23:34:15.039410 1077345 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
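
The four grep-then-rm cycles above are minikube's stale kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443 and is removed otherwise (on this fresh node all four files are simply absent, so every grep exits with status 2 and the rm is a no-op). A condensed shell sketch of the same pattern, not the literal code minikube executes:

    # Equivalent of the per-file checks logged above.
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
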
	I0327 23:34:15.049621 1077345 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0327 23:34:15.106111 1077345 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0327 23:34:15.106295 1077345 kubeadm.go:309] [preflight] Running pre-flight checks
	I0327 23:34:15.240849 1077345 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0327 23:34:15.241011 1077345 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0327 23:34:15.241165 1077345 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0327 23:34:15.456301 1077345 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0327 23:34:15.595395 1077345 out.go:204]   - Generating certificates and keys ...
	I0327 23:34:15.595525 1077345 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0327 23:34:15.595624 1077345 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0327 23:34:15.620211 1077345 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0327 23:34:15.825836 1077345 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0327 23:34:16.022804 1077345 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0327 23:34:16.175994 1077345 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0327 23:34:16.391450 1077345 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0327 23:34:16.391630 1077345 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-910864 localhost] and IPs [192.168.39.45 127.0.0.1 ::1]
	I0327 23:34:16.511552 1077345 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0327 23:34:16.511770 1077345 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-910864 localhost] and IPs [192.168.39.45 127.0.0.1 ::1]
	I0327 23:34:16.804329 1077345 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0327 23:34:16.891542 1077345 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0327 23:34:17.051601 1077345 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0327 23:34:17.051680 1077345 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0327 23:34:17.098584 1077345 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0327 23:34:17.226614 1077345 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0327 23:34:17.560001 1077345 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0327 23:34:17.686696 1077345 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0327 23:34:17.949188 1077345 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0327 23:34:17.949685 1077345 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0327 23:34:17.951975 1077345 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0327 23:34:17.953957 1077345 out.go:204]   - Booting up control plane ...
	I0327 23:34:17.954137 1077345 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0327 23:34:17.954286 1077345 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0327 23:34:17.954394 1077345 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0327 23:34:17.977597 1077345 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0327 23:34:17.979941 1077345 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0327 23:34:17.980019 1077345 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0327 23:34:18.128388 1077345 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0327 23:34:24.125830 1077345 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.001486 seconds
	I0327 23:34:24.144635 1077345 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0327 23:34:24.160845 1077345 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0327 23:34:24.692064 1077345 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0327 23:34:24.692326 1077345 kubeadm.go:309] [mark-control-plane] Marking the node addons-910864 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0327 23:34:25.212989 1077345 kubeadm.go:309] [bootstrap-token] Using token: auvd0a.hc5mtgchg5mo6t7h
	I0327 23:34:25.214591 1077345 out.go:204]   - Configuring RBAC rules ...
	I0327 23:34:25.214792 1077345 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0327 23:34:25.221232 1077345 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0327 23:34:25.231149 1077345 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0327 23:34:25.234631 1077345 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0327 23:34:25.238262 1077345 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0327 23:34:25.242249 1077345 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0327 23:34:25.259319 1077345 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0327 23:34:25.515276 1077345 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0327 23:34:25.644544 1077345 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0327 23:34:25.644609 1077345 kubeadm.go:309] 
	I0327 23:34:25.644691 1077345 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0327 23:34:25.644703 1077345 kubeadm.go:309] 
	I0327 23:34:25.644800 1077345 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0327 23:34:25.644810 1077345 kubeadm.go:309] 
	I0327 23:34:25.644845 1077345 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0327 23:34:25.644944 1077345 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0327 23:34:25.645041 1077345 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0327 23:34:25.645063 1077345 kubeadm.go:309] 
	I0327 23:34:25.645140 1077345 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0327 23:34:25.645150 1077345 kubeadm.go:309] 
	I0327 23:34:25.645208 1077345 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0327 23:34:25.645226 1077345 kubeadm.go:309] 
	I0327 23:34:25.645303 1077345 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0327 23:34:25.645418 1077345 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0327 23:34:25.645522 1077345 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0327 23:34:25.645542 1077345 kubeadm.go:309] 
	I0327 23:34:25.645648 1077345 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0327 23:34:25.645753 1077345 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0327 23:34:25.645767 1077345 kubeadm.go:309] 
	I0327 23:34:25.645881 1077345 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token auvd0a.hc5mtgchg5mo6t7h \
	I0327 23:34:25.646027 1077345 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 \
	I0327 23:34:25.646065 1077345 kubeadm.go:309] 	--control-plane 
	I0327 23:34:25.646079 1077345 kubeadm.go:309] 
	I0327 23:34:25.646197 1077345 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0327 23:34:25.646209 1077345 kubeadm.go:309] 
	I0327 23:34:25.646338 1077345 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token auvd0a.hc5mtgchg5mo6t7h \
	I0327 23:34:25.646492 1077345 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 
	I0327 23:34:25.647344 1077345 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
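
kubeadm's init summary above includes ready-made join commands carrying a bootstrap token and the CA public-key hash. If that hash ever has to be recomputed, or a fresh token minted later, the standard kubeadm recipe applies; the paths below follow the certificateDir (/var/lib/minikube/certs) and binary directory reported earlier in this log:

    # Recompute the --discovery-token-ca-cert-hash value on the control-plane node.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # Mint a fresh token together with a full join command.
    sudo /var/lib/minikube/binaries/v1.29.3/kubeadm token create --print-join-command
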
	I0327 23:34:25.647370 1077345 cni.go:84] Creating CNI manager for ""
	I0327 23:34:25.647378 1077345 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0327 23:34:25.649109 1077345 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0327 23:34:25.650422 1077345 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0327 23:34:25.676096 1077345 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
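
Here minikube writes its generated CNI bridge config (457 bytes) to /etc/cni/net.d/1-k8s.conflist, matching the "Configuring bridge CNI" step above. The file's exact contents are not reproduced in the log, but they can be checked on the node, for example:

    # Inspect the generated conflist and confirm the CRI runtime reports its network config.
    sudo cat /etc/cni/net.d/1-k8s.conflist
    sudo crictl info | grep -iA2 network
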
	I0327 23:34:25.720434 1077345 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0327 23:34:25.720545 1077345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:34:25.720547 1077345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-910864 minikube.k8s.io/updated_at=2024_03_27T23_34_25_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=addons-910864 minikube.k8s.io/primary=true
	I0327 23:34:25.803362 1077345 ops.go:34] apiserver oom_adj: -16
	I0327 23:34:25.920579 1077345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:34:26.421055 1077345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:34:26.921387 1077345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:34:27.421415 1077345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:34:27.921173 1077345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:34:28.421414 1077345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:34:28.920618 1077345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:34:29.420752 1077345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:34:29.921372 1077345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:34:30.421265 1077345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:34:30.921046 1077345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:34:31.420773 1077345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:34:31.921253 1077345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:34:32.420597 1077345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:34:32.921258 1077345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:34:33.421459 1077345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:34:33.921600 1077345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:34:34.421480 1077345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:34:34.920978 1077345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:34:35.420607 1077345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:34:35.921429 1077345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:34:36.421502 1077345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:34:36.921032 1077345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:34:37.421449 1077345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:34:37.920658 1077345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:34:38.357174 1077345 kubeadm.go:1107] duration metric: took 12.636703457s to wait for elevateKubeSystemPrivileges
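
The long run of identical `kubectl get sa default` invocations above is minikube polling, at roughly half-second intervals, until the default service account exists in the new cluster; that wait accounts for most of the 12.6s elevateKubeSystemPrivileges metric. An equivalent standalone wait loop, as a sketch using the same binary and kubeconfig paths from the log:

    # Poll until the default ServiceAccount exists, mirroring the loop logged above.
    until sudo /var/lib/minikube/binaries/v1.29.3/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
      sleep 0.5
    done
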
	W0327 23:34:38.357236 1077345 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0327 23:34:38.357249 1077345 kubeadm.go:393] duration metric: took 23.506173446s to StartCluster
	I0327 23:34:38.357320 1077345 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:34:38.357533 1077345 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0327 23:34:38.358119 1077345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:34:38.358401 1077345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0327 23:34:38.358429 1077345 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0327 23:34:38.360437 1077345 out.go:177] * Verifying Kubernetes components...
	I0327 23:34:38.358470 1077345 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0327 23:34:38.358686 1077345 config.go:182] Loaded profile config "addons-910864": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
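
The toEnable map above is the set of addons this test run switches on for the addons-910864 profile (ingress, ingress-dns, registry, metrics-server, csi-hostpath-driver, yakd, and others). Outside the test harness the same toggles are driven through the minikube CLI, for example:

    # Inspect and toggle addons for this profile.
    minikube -p addons-910864 addons list
    minikube -p addons-910864 addons enable ingress
    minikube -p addons-910864 addons disable inspektor-gadget
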
	I0327 23:34:38.361954 1077345 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-910864"
	I0327 23:34:38.361969 1077345 addons.go:69] Setting yakd=true in profile "addons-910864"
	I0327 23:34:38.361978 1077345 addons.go:69] Setting default-storageclass=true in profile "addons-910864"
	I0327 23:34:38.362000 1077345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:34:38.361967 1077345 addons.go:69] Setting helm-tiller=true in profile "addons-910864"
	I0327 23:34:38.362012 1077345 addons.go:69] Setting gcp-auth=true in profile "addons-910864"
	I0327 23:34:38.362028 1077345 addons.go:234] Setting addon helm-tiller=true in "addons-910864"
	I0327 23:34:38.362036 1077345 mustload.go:65] Loading cluster: addons-910864
	I0327 23:34:38.362035 1077345 addons.go:69] Setting cloud-spanner=true in profile "addons-910864"
	I0327 23:34:38.362054 1077345 addons.go:234] Setting addon cloud-spanner=true in "addons-910864"
	I0327 23:34:38.362062 1077345 addons.go:69] Setting ingress-dns=true in profile "addons-910864"
	I0327 23:34:38.362007 1077345 addons.go:234] Setting addon yakd=true in "addons-910864"
	I0327 23:34:38.362028 1077345 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-910864"
	I0327 23:34:38.362109 1077345 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-910864"
	I0327 23:34:38.362113 1077345 addons.go:234] Setting addon ingress-dns=true in "addons-910864"
	I0327 23:34:38.362125 1077345 host.go:66] Checking if "addons-910864" exists ...
	I0327 23:34:38.362143 1077345 addons.go:69] Setting registry=true in profile "addons-910864"
	I0327 23:34:38.362154 1077345 addons.go:69] Setting ingress=true in profile "addons-910864"
	I0327 23:34:38.362167 1077345 host.go:66] Checking if "addons-910864" exists ...
	I0327 23:34:38.362007 1077345 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-910864"
	I0327 23:34:38.362184 1077345 addons.go:234] Setting addon ingress=true in "addons-910864"
	I0327 23:34:38.362214 1077345 host.go:66] Checking if "addons-910864" exists ...
	I0327 23:34:38.362261 1077345 config.go:182] Loaded profile config "addons-910864": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 23:34:38.362168 1077345 addons.go:69] Setting storage-provisioner=true in profile "addons-910864"
	I0327 23:34:38.362297 1077345 addons.go:234] Setting addon storage-provisioner=true in "addons-910864"
	I0327 23:34:38.362325 1077345 host.go:66] Checking if "addons-910864" exists ...
	I0327 23:34:38.362530 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.362565 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.362135 1077345 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-910864"
	I0327 23:34:38.362616 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.362619 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.362649 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.362663 1077345 addons.go:69] Setting volumesnapshots=true in profile "addons-910864"
	I0327 23:34:38.362688 1077345 addons.go:234] Setting addon volumesnapshots=true in "addons-910864"
	I0327 23:34:38.362701 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.362747 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.362097 1077345 host.go:66] Checking if "addons-910864" exists ...
	I0327 23:34:38.362138 1077345 host.go:66] Checking if "addons-910864" exists ...
	I0327 23:34:38.362899 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.362924 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.362100 1077345 host.go:66] Checking if "addons-910864" exists ...
	I0327 23:34:38.362977 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.362999 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.362160 1077345 addons.go:234] Setting addon registry=true in "addons-910864"
	I0327 23:34:38.362650 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.363087 1077345 addons.go:69] Setting inspektor-gadget=true in profile "addons-910864"
	I0327 23:34:38.363101 1077345 addons.go:69] Setting metrics-server=true in profile "addons-910864"
	I0327 23:34:38.363118 1077345 addons.go:234] Setting addon inspektor-gadget=true in "addons-910864"
	I0327 23:34:38.363121 1077345 addons.go:234] Setting addon metrics-server=true in "addons-910864"
	I0327 23:34:38.363155 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.363177 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.363230 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.363260 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.363267 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.363269 1077345 host.go:66] Checking if "addons-910864" exists ...
	I0327 23:34:38.363296 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.363233 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.363316 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.363628 1077345 host.go:66] Checking if "addons-910864" exists ...
	I0327 23:34:38.363671 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.363720 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.363799 1077345 host.go:66] Checking if "addons-910864" exists ...
	I0327 23:34:38.364193 1077345 host.go:66] Checking if "addons-910864" exists ...
	I0327 23:34:38.364279 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.364310 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.364226 1077345 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-910864"
	I0327 23:34:38.364491 1077345 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-910864"
	I0327 23:34:38.364550 1077345 host.go:66] Checking if "addons-910864" exists ...
	I0327 23:34:38.364937 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.365002 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.382631 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.382673 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.383024 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.383075 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.384710 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45935
	I0327 23:34:38.384913 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42667
	I0327 23:34:38.384979 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40551
	I0327 23:34:38.385070 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41977
	I0327 23:34:38.385071 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34713
	I0327 23:34:38.385716 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.385854 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.385933 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.386008 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.386473 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.386495 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.386616 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.386635 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.386635 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.386650 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.386992 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.387020 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.387087 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.387148 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.387172 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.387192 1077345 main.go:141] libmachine: (addons-910864) Calling .GetState
	I0327 23:34:38.387343 1077345 main.go:141] libmachine: (addons-910864) Calling .GetState
	I0327 23:34:38.387797 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.387850 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.388448 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.388730 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.389315 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.389356 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.392202 1077345 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-910864"
	I0327 23:34:38.392250 1077345 host.go:66] Checking if "addons-910864" exists ...
	I0327 23:34:38.392606 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.392640 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.394537 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.394558 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.394984 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.395841 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.395883 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.403509 1077345 host.go:66] Checking if "addons-910864" exists ...
	I0327 23:34:38.403908 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.403937 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.424608 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45767
	I0327 23:34:38.424647 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40263
	I0327 23:34:38.424617 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37349
	I0327 23:34:38.425147 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38719
	I0327 23:34:38.426696 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34113
	I0327 23:34:38.426742 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.426872 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.427171 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.427260 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.427328 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.428122 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.428143 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.428207 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.428218 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34325
	I0327 23:34:38.428226 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.428285 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.428295 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.428406 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.428418 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.428758 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.428783 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.428814 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.429448 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.429475 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.430008 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.430068 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.430284 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I0327 23:34:38.430400 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.430492 1077345 main.go:141] libmachine: (addons-910864) Calling .GetState
	I0327 23:34:38.431246 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.431351 1077345 main.go:141] libmachine: (addons-910864) Calling .GetState
	I0327 23:34:38.431411 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.431561 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.431585 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.432433 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.432478 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.432492 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.432553 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.432567 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.432811 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.433028 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.433107 1077345 main.go:141] libmachine: (addons-910864) Calling .DriverName
	I0327 23:34:38.433300 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.433338 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.435168 1077345 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 23:34:38.433776 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37515
	I0327 23:34:38.433796 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43479
	I0327 23:34:38.433860 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.434225 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.436301 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43365
	I0327 23:34:38.436816 1077345 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 23:34:38.436839 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0327 23:34:38.436858 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHHostname
	I0327 23:34:38.436936 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.436945 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.437368 1077345 addons.go:234] Setting addon default-storageclass=true in "addons-910864"
	I0327 23:34:38.437416 1077345 host.go:66] Checking if "addons-910864" exists ...
	I0327 23:34:38.437532 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.437861 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.437908 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.438095 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.438119 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.438201 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.438594 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.438707 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.438729 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.438819 1077345 main.go:141] libmachine: (addons-910864) Calling .GetState
	I0327 23:34:38.439453 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.439527 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.440041 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.440059 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.440518 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.442695 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHPort
	I0327 23:34:38.442696 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:38.442759 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:38.442781 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:38.443083 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.443115 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.443153 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.443169 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.443359 1077345 main.go:141] libmachine: (addons-910864) Calling .DriverName
	I0327 23:34:38.445305 1077345 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0327 23:34:38.444077 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:38.447968 1077345 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0327 23:34:38.447016 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHUsername
	I0327 23:34:38.450561 1077345 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0327 23:34:38.449559 1077345 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/addons-910864/id_rsa Username:docker}
	I0327 23:34:38.453966 1077345 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0327 23:34:38.453361 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41949
	I0327 23:34:38.455357 1077345 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0327 23:34:38.456719 1077345 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0327 23:34:38.456468 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.457945 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40499
	I0327 23:34:38.458095 1077345 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0327 23:34:38.459474 1077345 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0327 23:34:38.460854 1077345 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0327 23:34:38.460920 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0327 23:34:38.460949 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHHostname
	I0327 23:34:38.459037 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.461035 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.461566 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.462308 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.462450 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.462495 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.464615 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:38.465195 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:38.465217 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:38.465423 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHPort
	I0327 23:34:38.465606 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:38.465777 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHUsername
	I0327 23:34:38.465920 1077345 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/addons-910864/id_rsa Username:docker}
	I0327 23:34:38.470781 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.470804 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.471284 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.471544 1077345 main.go:141] libmachine: (addons-910864) Calling .GetState
	I0327 23:34:38.472536 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36735
	I0327 23:34:38.472999 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46781
	I0327 23:34:38.473331 1077345 main.go:141] libmachine: (addons-910864) Calling .DriverName
	I0327 23:34:38.473428 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.475382 1077345 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0327 23:34:38.476842 1077345 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0327 23:34:38.476865 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0327 23:34:38.476887 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHHostname
	I0327 23:34:38.475412 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41663
	I0327 23:34:38.474307 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.474522 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33359
	I0327 23:34:38.474051 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.477185 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.477530 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.477545 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.477634 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.477722 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.478393 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.478435 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.480651 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHPort
	I0327 23:34:38.480651 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:38.480702 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.480718 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:38.480743 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:38.480773 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34241
	I0327 23:34:38.480775 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.480890 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.480915 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.481142 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.481166 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.481172 1077345 main.go:141] libmachine: (addons-910864) Calling .GetState
	I0327 23:34:38.481226 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:38.481240 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.481278 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.481445 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHUsername
	I0327 23:34:38.481752 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.481766 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.482022 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.482094 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42905
	I0327 23:34:38.482257 1077345 main.go:141] libmachine: (addons-910864) Calling .GetState
	I0327 23:34:38.482348 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.482360 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.482564 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.482590 1077345 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/addons-910864/id_rsa Username:docker}
	I0327 23:34:38.483066 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.483088 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.483187 1077345 main.go:141] libmachine: (addons-910864) Calling .DriverName
	I0327 23:34:38.485162 1077345 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0327 23:34:38.483557 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.483740 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40687
	I0327 23:34:38.484022 1077345 main.go:141] libmachine: (addons-910864) Calling .DriverName
	I0327 23:34:38.484378 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.486611 1077345 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0327 23:34:38.486630 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0327 23:34:38.486651 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHHostname
	I0327 23:34:38.488117 1077345 out.go:177]   - Using image docker.io/registry:2.8.3
	I0327 23:34:38.487451 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:38.487484 1077345 main.go:141] libmachine: (addons-910864) Calling .GetState
	I0327 23:34:38.488656 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.488921 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37511
	I0327 23:34:38.489968 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:38.489982 1077345 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0327 23:34:38.491700 1077345 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0327 23:34:38.491730 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0327 23:34:38.491749 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHHostname
	I0327 23:34:38.491756 1077345 main.go:141] libmachine: (addons-910864) Calling .DriverName
	I0327 23:34:38.490049 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:38.490620 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.490719 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:38.490799 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHPort
	I0327 23:34:38.490881 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.493374 1077345 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0327 23:34:38.493449 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:38.493466 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.493597 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41613
	I0327 23:34:38.493668 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:38.493998 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.494801 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.494867 1077345 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0327 23:34:38.494971 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:38.496327 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:38.496354 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:38.495471 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHUsername
	I0327 23:34:38.495502 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.495637 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHPort
	I0327 23:34:38.495675 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.495705 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41033
	I0327 23:34:38.495758 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38205
	I0327 23:34:38.496056 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.497842 1077345 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0327 23:34:38.497880 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.497294 1077345 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/addons-910864/id_rsa Username:docker}
	I0327 23:34:38.499492 1077345 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0327 23:34:38.499518 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0327 23:34:38.499541 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHHostname
	I0327 23:34:38.497348 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.499710 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.497371 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:38.497392 1077345 main.go:141] libmachine: (addons-910864) Calling .GetState
	I0327 23:34:38.497678 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.497267 1077345 main.go:141] libmachine: (addons-910864) Calling .GetState
	I0327 23:34:38.501236 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.501255 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.501318 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.501360 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHUsername
	I0327 23:34:38.501542 1077345 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/addons-910864/id_rsa Username:docker}
	I0327 23:34:38.501622 1077345 main.go:141] libmachine: (addons-910864) Calling .DriverName
	I0327 23:34:38.502523 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.502644 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.502663 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.502714 1077345 main.go:141] libmachine: (addons-910864) Calling .GetState
	I0327 23:34:38.503123 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.503179 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:38.503361 1077345 main.go:141] libmachine: (addons-910864) Calling .GetState
	I0327 23:34:38.503417 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:38.503441 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:38.503926 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHPort
	I0327 23:34:38.504137 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:38.504289 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHUsername
	I0327 23:34:38.504492 1077345 main.go:141] libmachine: (addons-910864) Calling .DriverName
	I0327 23:34:38.504546 1077345 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/addons-910864/id_rsa Username:docker}
	I0327 23:34:38.504806 1077345 main.go:141] libmachine: (addons-910864) Calling .DriverName
	I0327 23:34:38.506401 1077345 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0327 23:34:38.505369 1077345 main.go:141] libmachine: (addons-910864) Calling .DriverName
	I0327 23:34:38.505407 1077345 main.go:141] libmachine: (addons-910864) Calling .DriverName
	I0327 23:34:38.507650 1077345 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0327 23:34:38.509773 1077345 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0327 23:34:38.507954 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46559
	I0327 23:34:38.508762 1077345 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0327 23:34:38.508782 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0327 23:34:38.511805 1077345 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0327 23:34:38.510888 1077345 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0327 23:34:38.510912 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHHostname
	I0327 23:34:38.511309 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.511487 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40169
	I0327 23:34:38.513039 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0327 23:34:38.513071 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHHostname
	I0327 23:34:38.513136 1077345 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0327 23:34:38.513147 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0327 23:34:38.513162 1077345 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0327 23:34:38.513163 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHHostname
	I0327 23:34:38.513172 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0327 23:34:38.513187 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHHostname
	I0327 23:34:38.514336 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.514980 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.514999 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.515154 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.515177 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.515557 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42001
	I0327 23:34:38.516022 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.516114 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.516155 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.516600 1077345 main.go:141] libmachine: (addons-910864) Calling .GetState
	I0327 23:34:38.516656 1077345 main.go:141] libmachine: (addons-910864) Calling .GetState
	I0327 23:34:38.516668 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.516690 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.517075 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.517316 1077345 main.go:141] libmachine: (addons-910864) Calling .GetState
	I0327 23:34:38.517648 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:38.518310 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:38.518810 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:38.518842 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:38.518918 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:38.518951 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:38.519121 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHPort
	I0327 23:34:38.519342 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:38.519511 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHUsername
	I0327 23:34:38.519698 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHPort
	I0327 23:34:38.519697 1077345 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/addons-910864/id_rsa Username:docker}
	I0327 23:34:38.519926 1077345 main.go:141] libmachine: (addons-910864) Calling .DriverName
	I0327 23:34:38.520007 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:38.520152 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:38.521717 1077345 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0327 23:34:38.520326 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHUsername
	I0327 23:34:38.520501 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:38.520683 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHPort
	I0327 23:34:38.520888 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:38.520919 1077345 main.go:141] libmachine: (addons-910864) Calling .DriverName
	I0327 23:34:38.521490 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHPort
	I0327 23:34:38.521790 1077345 main.go:141] libmachine: (addons-910864) Calling .DriverName
	I0327 23:34:38.522957 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:38.522975 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:38.524292 1077345 out.go:177]   - Using image docker.io/busybox:stable
	I0327 23:34:38.523043 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:38.523213 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:38.523221 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:38.523258 1077345 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/addons-910864/id_rsa Username:docker}
	I0327 23:34:38.525601 1077345 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0327 23:34:38.526677 1077345 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0327 23:34:38.527818 1077345 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0327 23:34:38.527836 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0327 23:34:38.527853 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHHostname
	I0327 23:34:38.526812 1077345 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0327 23:34:38.528982 1077345 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0327 23:34:38.528994 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0327 23:34:38.529007 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHHostname
	I0327 23:34:38.526961 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHUsername
	I0327 23:34:38.526981 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0327 23:34:38.529067 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHHostname
	I0327 23:34:38.527010 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHUsername
	I0327 23:34:38.527687 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45015
	I0327 23:34:38.529702 1077345 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/addons-910864/id_rsa Username:docker}
	I0327 23:34:38.529736 1077345 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/addons-910864/id_rsa Username:docker}
	I0327 23:34:38.533097 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:38.533100 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:38.533139 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:38.533153 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:38.533199 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:38.533650 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:38.533662 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:38.533799 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:38.533811 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:38.533838 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:38.533920 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHPort
	I0327 23:34:38.534180 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHPort
	I0327 23:34:38.534182 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:38.534280 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:38.534661 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:38.534681 1077345 main.go:141] libmachine: (addons-910864) Calling .GetState
	I0327 23:34:38.534689 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:38.534691 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:38.534672 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHUsername
	I0327 23:34:38.534831 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHUsername
	I0327 23:34:38.534872 1077345 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/addons-910864/id_rsa Username:docker}
	I0327 23:34:38.534938 1077345 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/addons-910864/id_rsa Username:docker}
	I0327 23:34:38.535081 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHPort
	I0327 23:34:38.535238 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:38.535378 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHUsername
	I0327 23:34:38.535583 1077345 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/addons-910864/id_rsa Username:docker}
	I0327 23:34:38.536424 1077345 main.go:141] libmachine: (addons-910864) Calling .DriverName
	I0327 23:34:38.536813 1077345 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0327 23:34:38.536826 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0327 23:34:38.536837 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHHostname
	W0327 23:34:38.536974 1077345 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:38226->192.168.39.45:22: read: connection reset by peer
	I0327 23:34:38.537000 1077345 retry.go:31] will retry after 154.174564ms: ssh: handshake failed: read tcp 192.168.39.1:38226->192.168.39.45:22: read: connection reset by peer
	I0327 23:34:38.539589 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:38.540005 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:38.540031 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:38.540185 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHPort
	I0327 23:34:38.540364 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:38.540511 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHUsername
	I0327 23:34:38.540648 1077345 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/addons-910864/id_rsa Username:docker}
	W0327 23:34:38.692830 1077345 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:38244->192.168.39.45:22: read: connection reset by peer
	I0327 23:34:38.692868 1077345 retry.go:31] will retry after 280.330623ms: ssh: handshake failed: read tcp 192.168.39.1:38244->192.168.39.45:22: read: connection reset by peer
	I0327 23:34:38.850559 1077345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 23:34:38.881076 1077345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0327 23:34:38.883735 1077345 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0327 23:34:38.883761 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0327 23:34:38.973354 1077345 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0327 23:34:38.973385 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0327 23:34:39.035695 1077345 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0327 23:34:39.035726 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0327 23:34:39.038562 1077345 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0327 23:34:39.038587 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0327 23:34:39.042815 1077345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0327 23:34:39.044601 1077345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0327 23:34:39.046461 1077345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0327 23:34:39.047640 1077345 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0327 23:34:39.047658 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0327 23:34:39.049993 1077345 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0327 23:34:39.050009 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0327 23:34:39.165090 1077345 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 23:34:39.165103 1077345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0327 23:34:39.165937 1077345 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0327 23:34:39.165954 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0327 23:34:39.175718 1077345 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0327 23:34:39.175734 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0327 23:34:39.178238 1077345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0327 23:34:39.203975 1077345 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0327 23:34:39.204002 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0327 23:34:39.249526 1077345 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0327 23:34:39.249553 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0327 23:34:39.310850 1077345 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0327 23:34:39.310877 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0327 23:34:39.318415 1077345 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0327 23:34:39.318439 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0327 23:34:39.339620 1077345 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0327 23:34:39.339650 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0327 23:34:39.340371 1077345 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0327 23:34:39.340392 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0327 23:34:39.420347 1077345 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0327 23:34:39.420378 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0327 23:34:39.508398 1077345 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0327 23:34:39.508426 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0327 23:34:39.511713 1077345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0327 23:34:39.515614 1077345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0327 23:34:39.563892 1077345 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0327 23:34:39.563929 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0327 23:34:39.572875 1077345 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0327 23:34:39.572898 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0327 23:34:39.617229 1077345 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0327 23:34:39.617258 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0327 23:34:39.640149 1077345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0327 23:34:39.681611 1077345 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0327 23:34:39.681646 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0327 23:34:39.711243 1077345 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0327 23:34:39.711268 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0327 23:34:39.797226 1077345 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0327 23:34:39.797258 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0327 23:34:39.895224 1077345 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0327 23:34:39.895257 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0327 23:34:39.896670 1077345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0327 23:34:40.025182 1077345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0327 23:34:40.085076 1077345 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0327 23:34:40.085115 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0327 23:34:40.168705 1077345 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0327 23:34:40.168743 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0327 23:34:40.199234 1077345 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0327 23:34:40.199271 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0327 23:34:40.265245 1077345 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0327 23:34:40.265279 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0327 23:34:40.599897 1077345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0327 23:34:40.669587 1077345 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0327 23:34:40.669626 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0327 23:34:40.724865 1077345 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0327 23:34:40.724891 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0327 23:34:40.986848 1077345 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0327 23:34:40.986875 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0327 23:34:41.022966 1077345 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0327 23:34:41.022997 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0327 23:34:41.142386 1077345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0327 23:34:41.243745 1077345 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0327 23:34:41.243783 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0327 23:34:41.575072 1077345 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0327 23:34:41.575103 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0327 23:34:41.937057 1077345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0327 23:34:43.681450 1077345 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.830838625s)
	I0327 23:34:43.681519 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:43.681533 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:43.682058 1077345 main.go:141] libmachine: (addons-910864) DBG | Closing plugin on server side
	I0327 23:34:43.682126 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:43.682145 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:43.682166 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:43.682176 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:43.682572 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:43.682590 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:45.337546 1077345 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0327 23:34:45.337596 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHHostname
	I0327 23:34:45.340900 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:45.341397 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:45.341429 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:45.341616 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHPort
	I0327 23:34:45.341865 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:45.342049 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHUsername
	I0327 23:34:45.342207 1077345 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/addons-910864/id_rsa Username:docker}
	I0327 23:34:45.524061 1077345 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0327 23:34:45.927480 1077345 addons.go:234] Setting addon gcp-auth=true in "addons-910864"
	I0327 23:34:45.927558 1077345 host.go:66] Checking if "addons-910864" exists ...
	I0327 23:34:45.927980 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:45.928023 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:45.963255 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34971
	I0327 23:34:45.963744 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:45.964353 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:45.964383 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:45.964791 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:45.965304 1077345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:34:45.965333 1077345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:34:45.982295 1077345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41267
	I0327 23:34:45.982820 1077345 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:34:45.983349 1077345 main.go:141] libmachine: Using API Version  1
	I0327 23:34:45.983380 1077345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:34:45.983791 1077345 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:34:45.984072 1077345 main.go:141] libmachine: (addons-910864) Calling .GetState
	I0327 23:34:45.986004 1077345 main.go:141] libmachine: (addons-910864) Calling .DriverName
	I0327 23:34:45.986399 1077345 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0327 23:34:45.986425 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHHostname
	I0327 23:34:45.989502 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:45.989930 1077345 main.go:141] libmachine: (addons-910864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:91:2e", ip: ""} in network mk-addons-910864: {Iface:virbr1 ExpiryTime:2024-03-28 00:33:58 +0000 UTC Type:0 Mac:52:54:00:44:91:2e Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:addons-910864 Clientid:01:52:54:00:44:91:2e}
	I0327 23:34:45.989959 1077345 main.go:141] libmachine: (addons-910864) DBG | domain addons-910864 has defined IP address 192.168.39.45 and MAC address 52:54:00:44:91:2e in network mk-addons-910864
	I0327 23:34:45.990203 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHPort
	I0327 23:34:45.990436 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHKeyPath
	I0327 23:34:45.990679 1077345 main.go:141] libmachine: (addons-910864) Calling .GetSSHUsername
	I0327 23:34:45.990853 1077345 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/addons-910864/id_rsa Username:docker}
	I0327 23:34:47.665410 1077345 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.784287126s)
	I0327 23:34:47.665456 1077345 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.622608363s)
	I0327 23:34:47.665477 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:47.665489 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:47.665505 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:47.665512 1077345 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.620888069s)
	I0327 23:34:47.665538 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:47.665517 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:47.665555 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:47.665566 1077345 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.619082066s)
	I0327 23:34:47.665584 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:47.665593 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:47.665608 1077345 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.500419491s)
	I0327 23:34:47.665634 1077345 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0327 23:34:47.665634 1077345 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.50050008s)
	I0327 23:34:47.665686 1077345 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.487430091s)
	I0327 23:34:47.665703 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:47.665709 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:47.665767 1077345 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.154032519s)
	I0327 23:34:47.665780 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:47.665791 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:47.665851 1077345 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.150203231s)
	I0327 23:34:47.665866 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:47.665882 1077345 main.go:141] libmachine: (addons-910864) DBG | Closing plugin on server side
	I0327 23:34:47.665896 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:47.665906 1077345 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.025725224s)
	I0327 23:34:47.665919 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:47.665927 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:47.665971 1077345 main.go:141] libmachine: (addons-910864) DBG | Closing plugin on server side
	I0327 23:34:47.665992 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:47.665999 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:47.666007 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:47.666013 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:47.666014 1077345 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.769315364s)
	I0327 23:34:47.666033 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:47.666041 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:47.666124 1077345 main.go:141] libmachine: (addons-910864) DBG | Closing plugin on server side
	I0327 23:34:47.666152 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:47.666158 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:47.666158 1077345 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.64092813s)
	I0327 23:34:47.666165 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:47.666173 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	W0327 23:34:47.666187 1077345 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0327 23:34:47.666247 1077345 main.go:141] libmachine: (addons-910864) DBG | Closing plugin on server side
	I0327 23:34:47.666268 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:47.666274 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:47.666282 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:47.666288 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:47.666390 1077345 main.go:141] libmachine: (addons-910864) DBG | Closing plugin on server side
	I0327 23:34:47.666414 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:47.666421 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:47.666428 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:47.666437 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:47.666491 1077345 main.go:141] libmachine: (addons-910864) DBG | Closing plugin on server side
	I0327 23:34:47.666493 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:47.666511 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:47.666511 1077345 main.go:141] libmachine: (addons-910864) DBG | Closing plugin on server side
	I0327 23:34:47.666520 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:47.666529 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:47.666533 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:47.666539 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:47.666927 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:47.666941 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:47.666949 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:47.666987 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:47.668140 1077345 main.go:141] libmachine: (addons-910864) DBG | Closing plugin on server side
	I0327 23:34:47.668180 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:47.668191 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:47.668202 1077345 addons.go:470] Verifying addon ingress=true in "addons-910864"
	I0327 23:34:47.670937 1077345 out.go:177] * Verifying ingress addon...
	I0327 23:34:47.668571 1077345 main.go:141] libmachine: (addons-910864) DBG | Closing plugin on server side
	I0327 23:34:47.668601 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:47.669076 1077345 main.go:141] libmachine: (addons-910864) DBG | Closing plugin on server side
	I0327 23:34:47.669105 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:47.669125 1077345 main.go:141] libmachine: (addons-910864) DBG | Closing plugin on server side
	I0327 23:34:47.669139 1077345 main.go:141] libmachine: (addons-910864) DBG | Closing plugin on server side
	I0327 23:34:47.669157 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:47.669174 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:47.669189 1077345 main.go:141] libmachine: (addons-910864) DBG | Closing plugin on server side
	I0327 23:34:47.669207 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:47.670112 1077345 main.go:141] libmachine: (addons-910864) DBG | Closing plugin on server side
	I0327 23:34:47.670143 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:47.670168 1077345 main.go:141] libmachine: (addons-910864) DBG | Closing plugin on server side
	I0327 23:34:47.670185 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:47.670270 1077345 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.070338276s)
	I0327 23:34:47.670336 1077345 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.527913595s)
	I0327 23:34:47.670618 1077345 node_ready.go:35] waiting up to 6m0s for node "addons-910864" to be "Ready" ...
	I0327 23:34:47.671182 1077345 retry.go:31] will retry after 264.082631ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
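	[Editor's note - not part of the captured log] The failure above occurs because the VolumeSnapshotClass object is submitted in the same kubectl apply batch as the CRDs that define it, so the API server has no mapping for the kind yet ("ensure CRDs are installed first"). The log shows minikube handling this by retrying the whole batch with `kubectl apply --force` about 260ms later (see the 23:34:47.936 entry below). A minimal sketch, assuming the same addon manifests on a generic cluster, of avoiding the race by waiting for the CRDs to be established before creating instances of them; the `kubectl wait` invocation is the editor's illustration, not a command taken from this run:

	  # apply only the CRD manifests first
	  kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	                -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	                -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	  # block until the API server reports the new kind before creating a VolumeSnapshotClass
	  kubectl wait --for=condition=established --timeout=60s \
	                crd/volumesnapshotclasses.snapshot.storage.k8s.io
	  kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml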
	I0327 23:34:47.672402 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:47.672419 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:47.672424 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:47.672441 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:47.672455 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:47.672510 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:47.672522 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:47.672529 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:47.672563 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:47.672442 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:47.672575 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:47.672581 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:47.672512 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:47.672627 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:47.672635 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:47.672640 1077345 addons.go:470] Verifying addon registry=true in "addons-910864"
	I0327 23:34:47.672645 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:47.672430 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:47.675633 1077345 out.go:177] * Verifying registry addon...
	I0327 23:34:47.672811 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:47.672833 1077345 main.go:141] libmachine: (addons-910864) DBG | Closing plugin on server side
	I0327 23:34:47.673229 1077345 main.go:141] libmachine: (addons-910864) DBG | Closing plugin on server side
	I0327 23:34:47.673234 1077345 main.go:141] libmachine: (addons-910864) DBG | Closing plugin on server side
	I0327 23:34:47.673256 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:47.673263 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:47.673271 1077345 main.go:141] libmachine: (addons-910864) DBG | Closing plugin on server side
	I0327 23:34:47.673280 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:47.673289 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:47.673292 1077345 main.go:141] libmachine: (addons-910864) DBG | Closing plugin on server side
	I0327 23:34:47.673385 1077345 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0327 23:34:47.677351 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:47.677367 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:47.677376 1077345 addons.go:470] Verifying addon metrics-server=true in "addons-910864"
	I0327 23:34:47.677382 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:47.677391 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:47.677395 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:47.677430 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:47.677446 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:47.677454 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:47.677515 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:47.677698 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:47.677715 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:47.677817 1077345 main.go:141] libmachine: (addons-910864) DBG | Closing plugin on server side
	I0327 23:34:47.677827 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:47.677837 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:47.679063 1077345 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-910864 service yakd-dashboard -n yakd-dashboard
	
	I0327 23:34:47.678280 1077345 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0327 23:34:47.684187 1077345 node_ready.go:49] node "addons-910864" has status "Ready":"True"
	I0327 23:34:47.684210 1077345 node_ready.go:38] duration metric: took 11.62172ms for node "addons-910864" to be "Ready" ...
	I0327 23:34:47.684221 1077345 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0327 23:34:47.708864 1077345 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0327 23:34:47.708897 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:34:47.721447 1077345 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0327 23:34:47.721484 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:34:47.736558 1077345 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-6kxnz" in "kube-system" namespace to be "Ready" ...
	I0327 23:34:47.737208 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:47.737225 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:47.737517 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:47.737535 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	W0327 23:34:47.737634 1077345 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0327 23:34:47.761657 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:47.761697 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:47.761989 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:47.762009 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:47.774735 1077345 pod_ready.go:92] pod "coredns-76f75df574-6kxnz" in "kube-system" namespace has status "Ready":"True"
	I0327 23:34:47.774779 1077345 pod_ready.go:81] duration metric: took 38.188974ms for pod "coredns-76f75df574-6kxnz" in "kube-system" namespace to be "Ready" ...
	I0327 23:34:47.774800 1077345 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-nt8t8" in "kube-system" namespace to be "Ready" ...
	I0327 23:34:47.829855 1077345 pod_ready.go:92] pod "coredns-76f75df574-nt8t8" in "kube-system" namespace has status "Ready":"True"
	I0327 23:34:47.829886 1077345 pod_ready.go:81] duration metric: took 55.076952ms for pod "coredns-76f75df574-nt8t8" in "kube-system" namespace to be "Ready" ...
	I0327 23:34:47.829903 1077345 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-910864" in "kube-system" namespace to be "Ready" ...
	I0327 23:34:47.839890 1077345 pod_ready.go:92] pod "etcd-addons-910864" in "kube-system" namespace has status "Ready":"True"
	I0327 23:34:47.839915 1077345 pod_ready.go:81] duration metric: took 10.004782ms for pod "etcd-addons-910864" in "kube-system" namespace to be "Ready" ...
	I0327 23:34:47.839925 1077345 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-910864" in "kube-system" namespace to be "Ready" ...
	I0327 23:34:47.874439 1077345 pod_ready.go:92] pod "kube-apiserver-addons-910864" in "kube-system" namespace has status "Ready":"True"
	I0327 23:34:47.874468 1077345 pod_ready.go:81] duration metric: took 34.536704ms for pod "kube-apiserver-addons-910864" in "kube-system" namespace to be "Ready" ...
	I0327 23:34:47.874479 1077345 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-910864" in "kube-system" namespace to be "Ready" ...
	I0327 23:34:47.936742 1077345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0327 23:34:48.081094 1077345 pod_ready.go:92] pod "kube-controller-manager-addons-910864" in "kube-system" namespace has status "Ready":"True"
	I0327 23:34:48.081135 1077345 pod_ready.go:81] duration metric: took 206.64791ms for pod "kube-controller-manager-addons-910864" in "kube-system" namespace to be "Ready" ...
	I0327 23:34:48.081152 1077345 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kmd42" in "kube-system" namespace to be "Ready" ...
	I0327 23:34:48.169914 1077345 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-910864" context rescaled to 1 replicas
	I0327 23:34:48.188447 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:34:48.189920 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:34:48.484520 1077345 pod_ready.go:92] pod "kube-proxy-kmd42" in "kube-system" namespace has status "Ready":"True"
	I0327 23:34:48.484560 1077345 pod_ready.go:81] duration metric: took 403.397881ms for pod "kube-proxy-kmd42" in "kube-system" namespace to be "Ready" ...
	I0327 23:34:48.484575 1077345 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-910864" in "kube-system" namespace to be "Ready" ...
	I0327 23:34:48.721782 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:34:48.721822 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:34:48.759496 1077345 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.822369973s)
	I0327 23:34:48.759573 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:48.759588 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:48.759509 1077345 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.773084721s)
	I0327 23:34:48.761702 1077345 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0327 23:34:48.759979 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:48.760012 1077345 main.go:141] libmachine: (addons-910864) DBG | Closing plugin on server side
	I0327 23:34:48.763339 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:48.763360 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:48.763373 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:48.764973 1077345 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0327 23:34:48.763758 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:48.765021 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:48.765037 1077345 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-910864"
	I0327 23:34:48.763815 1077345 main.go:141] libmachine: (addons-910864) DBG | Closing plugin on server side
	I0327 23:34:48.766563 1077345 out.go:177] * Verifying csi-hostpath-driver addon...
	I0327 23:34:48.767724 1077345 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0327 23:34:48.767741 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0327 23:34:48.769940 1077345 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0327 23:34:48.781687 1077345 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0327 23:34:48.781716 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:34:48.886850 1077345 pod_ready.go:92] pod "kube-scheduler-addons-910864" in "kube-system" namespace has status "Ready":"True"
	I0327 23:34:48.886887 1077345 pod_ready.go:81] duration metric: took 402.302252ms for pod "kube-scheduler-addons-910864" in "kube-system" namespace to be "Ready" ...
	I0327 23:34:48.886903 1077345 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-69cf46c98-c4zrg" in "kube-system" namespace to be "Ready" ...
	I0327 23:34:48.933860 1077345 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0327 23:34:48.933897 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0327 23:34:49.013877 1077345 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0327 23:34:49.013900 1077345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0327 23:34:49.080369 1077345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0327 23:34:49.181745 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:34:49.186088 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:34:49.277423 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:34:49.681330 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:34:49.684000 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:34:49.783017 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:34:50.182765 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:34:50.185385 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:34:50.275451 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:34:50.424079 1077345 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.487245933s)
	I0327 23:34:50.424144 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:50.424156 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:50.424485 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:50.424508 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:50.424519 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:50.424530 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:50.424530 1077345 main.go:141] libmachine: (addons-910864) DBG | Closing plugin on server side
	I0327 23:34:50.424819 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:50.424833 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:50.746084 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:34:50.746461 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:34:50.779811 1077345 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.699398424s)
	I0327 23:34:50.779873 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:50.779890 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:50.780290 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:50.780363 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:50.780381 1077345 main.go:141] libmachine: Making call to close driver server
	I0327 23:34:50.780392 1077345 main.go:141] libmachine: (addons-910864) Calling .Close
	I0327 23:34:50.780400 1077345 main.go:141] libmachine: (addons-910864) DBG | Closing plugin on server side
	I0327 23:34:50.780690 1077345 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:34:50.780733 1077345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:34:50.780748 1077345 main.go:141] libmachine: (addons-910864) DBG | Closing plugin on server side
	I0327 23:34:50.782703 1077345 addons.go:470] Verifying addon gcp-auth=true in "addons-910864"
	I0327 23:34:50.785188 1077345 out.go:177] * Verifying gcp-auth addon...
	I0327 23:34:50.787304 1077345 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0327 23:34:50.797475 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:34:50.798142 1077345 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0327 23:34:50.798161 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:34:50.896622 1077345 pod_ready.go:102] pod "metrics-server-69cf46c98-c4zrg" in "kube-system" namespace has status "Ready":"False"
	I0327 23:34:51.182661 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:34:51.190638 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:34:51.276674 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:34:51.292091 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:34:51.682966 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:34:51.685524 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:34:51.776819 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:34:51.791087 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:34:52.183327 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:34:52.191833 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:34:52.279439 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:34:52.292658 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:34:52.682452 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:34:52.685234 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:34:52.775818 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:34:52.791944 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:34:53.182542 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:34:53.187481 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:34:53.279636 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:34:53.291229 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:34:53.392861 1077345 pod_ready.go:102] pod "metrics-server-69cf46c98-c4zrg" in "kube-system" namespace has status "Ready":"False"
	I0327 23:34:53.683861 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:34:53.685610 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:34:53.776228 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:34:53.792948 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:34:54.183595 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:34:54.187674 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:34:54.275905 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:34:54.290909 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:34:54.683539 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:34:54.685896 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:34:54.780412 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:34:54.791865 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:34:55.182626 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:34:55.185743 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:34:55.280032 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:34:55.293990 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:34:55.420240 1077345 pod_ready.go:102] pod "metrics-server-69cf46c98-c4zrg" in "kube-system" namespace has status "Ready":"False"
	I0327 23:34:55.684288 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:34:55.690402 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:34:55.779941 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:34:55.792845 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:34:56.182637 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:34:56.186521 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:34:56.275878 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:34:56.291558 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:34:56.682252 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:34:56.684952 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:34:56.775882 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:34:56.791396 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:34:57.182524 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:34:57.185388 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:34:57.276798 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:34:57.292496 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:34:57.684428 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:34:57.685246 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:34:57.776976 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:34:57.791490 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:34:57.894201 1077345 pod_ready.go:102] pod "metrics-server-69cf46c98-c4zrg" in "kube-system" namespace has status "Ready":"False"
	I0327 23:34:58.184031 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:34:58.187097 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:34:58.276928 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:34:58.291344 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:34:58.682374 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:34:58.685597 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:34:58.776316 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:34:58.790803 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:34:59.182669 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:34:59.187283 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:34:59.276893 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:34:59.292693 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:34:59.684587 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:34:59.691304 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:34:59.776346 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:34:59.790613 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:34:59.894692 1077345 pod_ready.go:102] pod "metrics-server-69cf46c98-c4zrg" in "kube-system" namespace has status "Ready":"False"
	I0327 23:35:00.182609 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:00.185340 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:00.276170 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:00.292320 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:00.682467 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:00.687833 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:00.776595 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:00.790985 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:01.182893 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:01.185110 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:01.276043 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:01.293446 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:01.682040 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:01.685460 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:01.779650 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:01.792325 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:02.181721 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:02.185243 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:02.276982 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:02.291926 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:02.394475 1077345 pod_ready.go:102] pod "metrics-server-69cf46c98-c4zrg" in "kube-system" namespace has status "Ready":"False"
	I0327 23:35:02.681774 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:02.685155 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:02.776136 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:02.791987 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:03.253206 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:03.253850 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:03.280415 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:03.291786 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:03.682556 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:03.685514 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:03.777533 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:03.791540 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:04.182389 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:04.185583 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:04.275502 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:04.291186 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:04.683504 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:04.686522 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:04.775796 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:04.791655 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:04.896108 1077345 pod_ready.go:102] pod "metrics-server-69cf46c98-c4zrg" in "kube-system" namespace has status "Ready":"False"
	I0327 23:35:05.183476 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:05.189971 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:05.275998 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:05.291447 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:05.682927 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:05.686666 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:05.777620 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:05.791329 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:06.183020 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:06.185911 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:06.276585 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:06.297741 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:06.681671 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:06.684452 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:06.776055 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:06.791897 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:07.182066 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:07.184954 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:07.280482 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:07.292209 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:07.393518 1077345 pod_ready.go:102] pod "metrics-server-69cf46c98-c4zrg" in "kube-system" namespace has status "Ready":"False"
	I0327 23:35:07.686723 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:07.692417 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:07.776465 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:07.791437 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:08.184060 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:08.189465 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:08.283852 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:08.293243 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:08.683082 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:08.686787 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:08.776667 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:08.791085 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:09.386451 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:09.386954 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:09.386968 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:09.390024 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:09.398804 1077345 pod_ready.go:102] pod "metrics-server-69cf46c98-c4zrg" in "kube-system" namespace has status "Ready":"False"
	I0327 23:35:09.682601 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:09.685516 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:09.783729 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:09.795436 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:10.181931 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:10.185025 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:10.280362 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:10.293540 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:10.683470 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:10.685789 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:10.781955 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:10.791252 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:11.182770 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:11.185953 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:11.277197 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:11.291709 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:11.683324 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:11.687283 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:11.775710 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:11.791062 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:11.895611 1077345 pod_ready.go:102] pod "metrics-server-69cf46c98-c4zrg" in "kube-system" namespace has status "Ready":"False"
	I0327 23:35:12.182722 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:12.186515 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:12.279052 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:12.291952 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:12.682192 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:12.684931 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:12.776861 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:12.793565 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:13.182424 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:13.186189 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:13.276083 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:13.291634 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:13.683185 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:13.685661 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:13.776082 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:13.792164 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:14.182392 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:14.185576 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:14.275583 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:14.290934 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:14.395169 1077345 pod_ready.go:102] pod "metrics-server-69cf46c98-c4zrg" in "kube-system" namespace has status "Ready":"False"
	I0327 23:35:14.688372 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:14.700965 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:14.779099 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:14.799365 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:15.182321 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:15.185758 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:15.276747 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:15.291302 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:15.681872 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:15.684948 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:15.776525 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:15.793057 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:16.182894 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:16.185958 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:16.275626 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:16.291167 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:16.682793 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:16.685542 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:16.775901 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:16.792036 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:16.896893 1077345 pod_ready.go:102] pod "metrics-server-69cf46c98-c4zrg" in "kube-system" namespace has status "Ready":"False"
	I0327 23:35:17.182469 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:17.185250 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:17.276774 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:17.291804 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:17.681937 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:17.684887 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:17.777352 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:17.792205 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:18.184068 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:18.186160 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:18.281212 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:18.290622 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:18.685050 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:18.686320 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:18.780793 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:18.791902 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:19.190213 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:19.190492 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:19.275906 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:19.291724 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:19.403943 1077345 pod_ready.go:102] pod "metrics-server-69cf46c98-c4zrg" in "kube-system" namespace has status "Ready":"False"
	I0327 23:35:19.683251 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:19.686553 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:19.776596 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:19.791375 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:20.183890 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:20.185966 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:20.276360 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:20.293174 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:20.683823 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:20.687922 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:20.776947 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:20.792174 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:21.181964 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:21.187255 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:21.277012 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:21.290764 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:21.682088 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:21.685555 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:21.775828 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:21.791647 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:21.893130 1077345 pod_ready.go:102] pod "metrics-server-69cf46c98-c4zrg" in "kube-system" namespace has status "Ready":"False"
	I0327 23:35:22.182760 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:22.186487 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:22.283034 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:22.293174 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:22.682472 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:22.686286 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:22.780012 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:22.793824 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:23.182469 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:23.185159 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:23.280827 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:23.291170 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:24.093192 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:24.096011 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:24.101006 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:24.103900 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:24.104182 1077345 pod_ready.go:102] pod "metrics-server-69cf46c98-c4zrg" in "kube-system" namespace has status "Ready":"False"
	I0327 23:35:24.182429 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:24.185324 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:24.277897 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:24.292918 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:24.682695 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:24.685702 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:24.780040 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:24.793612 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:25.184511 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:25.186666 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 23:35:25.275959 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:25.294136 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:25.682754 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:25.686932 1077345 kapi.go:107] duration metric: took 38.008650594s to wait for kubernetes.io/minikube-addons=registry ...
	I0327 23:35:25.779160 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:25.793541 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:26.182841 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:26.276081 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:26.291309 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:26.393541 1077345 pod_ready.go:102] pod "metrics-server-69cf46c98-c4zrg" in "kube-system" namespace has status "Ready":"False"
	I0327 23:35:26.682154 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:26.776105 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:26.793003 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:27.184535 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:27.276085 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:27.291384 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:27.682499 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:27.777361 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:27.791760 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:28.184241 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:28.276109 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:28.292534 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:28.394050 1077345 pod_ready.go:102] pod "metrics-server-69cf46c98-c4zrg" in "kube-system" namespace has status "Ready":"False"
	I0327 23:35:28.682691 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:28.775574 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:28.791193 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:28.897138 1077345 pod_ready.go:92] pod "metrics-server-69cf46c98-c4zrg" in "kube-system" namespace has status "Ready":"True"
	I0327 23:35:28.897170 1077345 pod_ready.go:81] duration metric: took 40.01025957s for pod "metrics-server-69cf46c98-c4zrg" in "kube-system" namespace to be "Ready" ...
	I0327 23:35:28.897181 1077345 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-trctv" in "kube-system" namespace to be "Ready" ...
	I0327 23:35:28.908209 1077345 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-trctv" in "kube-system" namespace has status "Ready":"True"
	I0327 23:35:28.908241 1077345 pod_ready.go:81] duration metric: took 11.052502ms for pod "nvidia-device-plugin-daemonset-trctv" in "kube-system" namespace to be "Ready" ...
	I0327 23:35:28.908291 1077345 pod_ready.go:38] duration metric: took 41.224054669s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0327 23:35:28.908317 1077345 api_server.go:52] waiting for apiserver process to appear ...
	I0327 23:35:28.908402 1077345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 23:35:28.975308 1077345 api_server.go:72] duration metric: took 50.616829899s to wait for apiserver process to appear ...
	I0327 23:35:28.975338 1077345 api_server.go:88] waiting for apiserver healthz status ...
	I0327 23:35:28.975363 1077345 api_server.go:253] Checking apiserver healthz at https://192.168.39.45:8443/healthz ...
	I0327 23:35:28.980820 1077345 api_server.go:279] https://192.168.39.45:8443/healthz returned 200:
	ok
	I0327 23:35:28.982208 1077345 api_server.go:141] control plane version: v1.29.3
	I0327 23:35:28.982250 1077345 api_server.go:131] duration metric: took 6.903831ms to wait for apiserver health ...
	I0327 23:35:28.982263 1077345 system_pods.go:43] waiting for kube-system pods to appear ...
	I0327 23:35:28.991369 1077345 system_pods.go:59] 18 kube-system pods found
	I0327 23:35:28.991403 1077345 system_pods.go:61] "coredns-76f75df574-nt8t8" [3632f64b-2085-497f-b53b-8379a52fbec0] Running
	I0327 23:35:28.991414 1077345 system_pods.go:61] "csi-hostpath-attacher-0" [d34b6d9e-df6e-4301-9086-9183bb090428] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0327 23:35:28.991422 1077345 system_pods.go:61] "csi-hostpath-resizer-0" [5dcb0750-fa8a-4b75-ae54-16067f40b7fd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0327 23:35:28.991435 1077345 system_pods.go:61] "csi-hostpathplugin-xvtt9" [5d557f60-5346-47fa-a57d-b37afcbce7c3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0327 23:35:28.991441 1077345 system_pods.go:61] "etcd-addons-910864" [3c3eccd0-73f2-47ee-aec5-75f90b3c118e] Running
	I0327 23:35:28.991446 1077345 system_pods.go:61] "kube-apiserver-addons-910864" [95132900-7365-4d5a-a54a-90e2c28fc766] Running
	I0327 23:35:28.991452 1077345 system_pods.go:61] "kube-controller-manager-addons-910864" [5c38a38d-f2b9-466e-85e9-6e88760a3b24] Running
	I0327 23:35:28.991458 1077345 system_pods.go:61] "kube-ingress-dns-minikube" [b851c646-d18a-4817-81d3-b8664af063c8] Running
	I0327 23:35:28.991466 1077345 system_pods.go:61] "kube-proxy-kmd42" [eb736629-19bc-44d4-a8a2-985b4d347e59] Running
	I0327 23:35:28.991472 1077345 system_pods.go:61] "kube-scheduler-addons-910864" [a5511caf-f241-4fed-8265-69a23732162f] Running
	I0327 23:35:28.991478 1077345 system_pods.go:61] "metrics-server-69cf46c98-c4zrg" [be84ea98-7e43-48f2-8b80-8187e0478a9c] Running
	I0327 23:35:28.991483 1077345 system_pods.go:61] "nvidia-device-plugin-daemonset-trctv" [189a46ce-1f31-42d5-bcf7-caefe2c656f6] Running
	I0327 23:35:28.991490 1077345 system_pods.go:61] "registry-ft9qx" [1d2583d0-7b3d-414c-be8d-513217d275d5] Running
	I0327 23:35:28.991496 1077345 system_pods.go:61] "registry-proxy-fn7cc" [c0e21e74-fdab-4924-8b67-c75809a350f1] Running
	I0327 23:35:28.991508 1077345 system_pods.go:61] "snapshot-controller-58dbcc7b99-stxxp" [f9af3825-fd9c-4741-9d6e-319c47c84cf3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0327 23:35:28.991523 1077345 system_pods.go:61] "snapshot-controller-58dbcc7b99-xgcp9" [8ff400c0-a342-4b37-b05b-b3a8f7485bc9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0327 23:35:28.991529 1077345 system_pods.go:61] "storage-provisioner" [673aa9e5-9e38-4e75-91cd-07fcada37b61] Running
	I0327 23:35:28.991535 1077345 system_pods.go:61] "tiller-deploy-7b677967b9-kt5p7" [617f7919-14ff-44b4-8722-d900d19371e3] Running
	I0327 23:35:28.991545 1077345 system_pods.go:74] duration metric: took 9.270191ms to wait for pod list to return data ...
	I0327 23:35:28.991562 1077345 default_sa.go:34] waiting for default service account to be created ...
	I0327 23:35:28.993565 1077345 default_sa.go:45] found service account: "default"
	I0327 23:35:28.993589 1077345 default_sa.go:55] duration metric: took 2.019404ms for default service account to be created ...
	I0327 23:35:28.993599 1077345 system_pods.go:116] waiting for k8s-apps to be running ...
	I0327 23:35:29.003992 1077345 system_pods.go:86] 18 kube-system pods found
	I0327 23:35:29.004028 1077345 system_pods.go:89] "coredns-76f75df574-nt8t8" [3632f64b-2085-497f-b53b-8379a52fbec0] Running
	I0327 23:35:29.004044 1077345 system_pods.go:89] "csi-hostpath-attacher-0" [d34b6d9e-df6e-4301-9086-9183bb090428] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0327 23:35:29.004055 1077345 system_pods.go:89] "csi-hostpath-resizer-0" [5dcb0750-fa8a-4b75-ae54-16067f40b7fd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0327 23:35:29.004071 1077345 system_pods.go:89] "csi-hostpathplugin-xvtt9" [5d557f60-5346-47fa-a57d-b37afcbce7c3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0327 23:35:29.004081 1077345 system_pods.go:89] "etcd-addons-910864" [3c3eccd0-73f2-47ee-aec5-75f90b3c118e] Running
	I0327 23:35:29.004102 1077345 system_pods.go:89] "kube-apiserver-addons-910864" [95132900-7365-4d5a-a54a-90e2c28fc766] Running
	I0327 23:35:29.004110 1077345 system_pods.go:89] "kube-controller-manager-addons-910864" [5c38a38d-f2b9-466e-85e9-6e88760a3b24] Running
	I0327 23:35:29.004122 1077345 system_pods.go:89] "kube-ingress-dns-minikube" [b851c646-d18a-4817-81d3-b8664af063c8] Running
	I0327 23:35:29.004128 1077345 system_pods.go:89] "kube-proxy-kmd42" [eb736629-19bc-44d4-a8a2-985b4d347e59] Running
	I0327 23:35:29.004140 1077345 system_pods.go:89] "kube-scheduler-addons-910864" [a5511caf-f241-4fed-8265-69a23732162f] Running
	I0327 23:35:29.004145 1077345 system_pods.go:89] "metrics-server-69cf46c98-c4zrg" [be84ea98-7e43-48f2-8b80-8187e0478a9c] Running
	I0327 23:35:29.004155 1077345 system_pods.go:89] "nvidia-device-plugin-daemonset-trctv" [189a46ce-1f31-42d5-bcf7-caefe2c656f6] Running
	I0327 23:35:29.004161 1077345 system_pods.go:89] "registry-ft9qx" [1d2583d0-7b3d-414c-be8d-513217d275d5] Running
	I0327 23:35:29.004169 1077345 system_pods.go:89] "registry-proxy-fn7cc" [c0e21e74-fdab-4924-8b67-c75809a350f1] Running
	I0327 23:35:29.004179 1077345 system_pods.go:89] "snapshot-controller-58dbcc7b99-stxxp" [f9af3825-fd9c-4741-9d6e-319c47c84cf3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0327 23:35:29.004192 1077345 system_pods.go:89] "snapshot-controller-58dbcc7b99-xgcp9" [8ff400c0-a342-4b37-b05b-b3a8f7485bc9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0327 23:35:29.004202 1077345 system_pods.go:89] "storage-provisioner" [673aa9e5-9e38-4e75-91cd-07fcada37b61] Running
	I0327 23:35:29.004211 1077345 system_pods.go:89] "tiller-deploy-7b677967b9-kt5p7" [617f7919-14ff-44b4-8722-d900d19371e3] Running
	I0327 23:35:29.004224 1077345 system_pods.go:126] duration metric: took 10.618069ms to wait for k8s-apps to be running ...
	I0327 23:35:29.004237 1077345 system_svc.go:44] waiting for kubelet service to be running ....
	I0327 23:35:29.004297 1077345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 23:35:29.028038 1077345 system_svc.go:56] duration metric: took 23.791142ms WaitForService to wait for kubelet
	I0327 23:35:29.028071 1077345 kubeadm.go:576] duration metric: took 50.669606988s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 23:35:29.028100 1077345 node_conditions.go:102] verifying NodePressure condition ...
	I0327 23:35:29.031610 1077345 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0327 23:35:29.031644 1077345 node_conditions.go:123] node cpu capacity is 2
	I0327 23:35:29.031660 1077345 node_conditions.go:105] duration metric: took 3.554687ms to run NodePressure ...
	I0327 23:35:29.031675 1077345 start.go:240] waiting for startup goroutines ...
	I0327 23:35:29.181976 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:29.279069 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:29.291587 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:29.979923 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:29.980419 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:29.980452 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:30.182344 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:30.276344 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:30.294096 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:30.682753 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:30.775906 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:30.793725 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:31.182314 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:31.286704 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:31.293101 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:31.687112 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:31.779344 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:31.793470 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:32.182694 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:32.276162 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:32.291657 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:32.682401 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:32.778747 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:32.792620 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:33.182179 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:33.278188 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:33.292155 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:33.685878 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:33.776649 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:33.791657 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:34.182358 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:34.276397 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:34.291048 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:34.682841 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:34.775500 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:34.791223 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:35.182923 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:35.275219 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:35.293129 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:35.683007 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:35.776985 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:35.791217 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:36.183159 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:36.278768 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:36.291913 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:36.682291 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:36.775918 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:36.908832 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:37.182719 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:37.276799 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:37.291594 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:37.682450 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:37.784185 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:37.798862 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:38.543351 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:38.544254 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:38.545966 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:38.682085 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:38.776586 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:38.791734 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:39.182081 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:39.282015 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:39.291139 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:39.690223 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:39.776703 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:39.791825 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:40.182357 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:40.277204 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:40.291225 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:40.685900 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:40.779289 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:40.796950 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:41.182736 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:41.276981 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:41.291685 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:41.683132 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:41.776143 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:41.791739 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:42.182560 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:42.276074 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:42.291650 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:42.682337 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:42.776012 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:42.792019 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:43.183629 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:43.276026 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:43.293033 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:43.682741 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:43.786746 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:44.048160 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:44.182344 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:44.278011 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:44.292184 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:44.683455 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:44.791240 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:44.793765 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:45.182332 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:45.275956 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:45.293859 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:45.682982 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:45.776203 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:45.791031 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:46.182604 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:46.275476 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:46.291235 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:46.684995 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:46.775713 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:46.793762 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:47.181866 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:47.275490 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:47.291006 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:47.682719 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:47.775611 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:47.791040 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:48.183855 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:48.276429 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:48.291431 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:48.707222 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:48.781286 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:48.790964 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:49.182955 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:49.275726 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:49.291456 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:49.682821 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:49.780293 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:49.791277 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:50.182475 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:50.276544 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:50.291135 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:50.685103 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:50.775810 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:50.796220 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:51.182859 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:51.283604 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:51.291232 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:51.682931 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:51.775372 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:51.791188 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:52.182508 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:52.276595 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:52.295953 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:52.683863 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:52.775532 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:52.791953 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:53.182852 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:53.278149 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:53.301002 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:53.682585 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:53.777166 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:53.794610 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:54.182064 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:54.280529 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:54.293592 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:54.684415 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:54.783769 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:54.791720 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:55.182984 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:55.276454 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:55.291365 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:55.684233 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:55.780692 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:55.790873 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:56.182641 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:56.285912 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:56.292607 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:56.689734 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:56.792190 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:56.792662 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:57.184777 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:57.275679 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:57.291816 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:58.005021 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:58.005723 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:58.006820 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:58.181730 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:58.275117 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:58.290681 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:58.682932 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:58.776400 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:58.791351 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:59.182541 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:59.276284 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:59.291874 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:35:59.682894 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:35:59.776334 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:35:59.790869 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:36:00.184744 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:36:00.276830 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:36:00.291856 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:36:00.682556 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:36:00.776428 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:36:00.791079 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:36:01.185823 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:36:01.275239 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:36:01.291824 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:36:01.685139 1077345 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 23:36:01.777494 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:36:01.796272 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:36:02.182613 1077345 kapi.go:107] duration metric: took 1m14.509227035s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0327 23:36:02.276661 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:36:02.290905 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:36:02.776070 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:36:02.792543 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:36:03.276438 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:36:03.295903 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:36:03.776051 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:36:03.792605 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:36:04.276401 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:36:04.291563 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:36:04.777687 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:36:04.791399 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:36:05.276275 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:36:05.291157 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:36:05.776604 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:36:05.790894 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:36:06.276534 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:36:06.291003 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:36:06.778731 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:36:06.797038 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 23:36:07.276109 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:36:07.295807 1077345 kapi.go:107] duration metric: took 1m16.508496654s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0327 23:36:07.297840 1077345 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-910864 cluster.
	I0327 23:36:07.299326 1077345 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0327 23:36:07.300930 1077345 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0327 23:36:07.777147 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:36:08.280362 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:36:08.776364 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:36:09.282202 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:36:09.776451 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:36:10.297658 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:36:10.939225 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:36:11.274821 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:36:11.781940 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:36:12.276316 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:36:12.776494 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:36:13.275849 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:36:13.784657 1077345 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 23:36:14.279381 1077345 kapi.go:107] duration metric: took 1m25.509441158s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0327 23:36:14.281399 1077345 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, ingress-dns, helm-tiller, metrics-server, nvidia-device-plugin, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0327 23:36:14.283208 1077345 addons.go:505] duration metric: took 1m35.924736687s for enable addons: enabled=[storage-provisioner cloud-spanner ingress-dns helm-tiller metrics-server nvidia-device-plugin inspektor-gadget yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0327 23:36:14.283270 1077345 start.go:245] waiting for cluster config update ...
	I0327 23:36:14.283298 1077345 start.go:254] writing updated cluster config ...
	I0327 23:36:14.283616 1077345 ssh_runner.go:195] Run: rm -f paused
	I0327 23:36:14.340750 1077345 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0327 23:36:14.342812 1077345 out.go:177] * Done! kubectl is now configured to use "addons-910864" cluster and "default" namespace by default
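	Note on the gcp-auth messages at 23:36:07 above: the addon's own output names a `gcp-auth-skip-secret` label for opting a pod out of credential injection. As a hedged illustration only (the label key comes from the log text; the kubectl invocation, the pod name, and the value "true" are assumptions for demonstration and are not part of this test run):

	  # hypothetical example: create a pod carrying the opt-out label so the gcp-auth webhook skips it
	  kubectl --context addons-910864 run skip-demo --image=nginx --labels="gcp-auth-skip-secret=true"

	Per the log's own hint ("either recreate them or rerun addons enable with --refresh"), the label needs to be on the pod at creation time, since injection is done by the gcp-auth admission webhook when the pod is admitted.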
	
	
	==> CRI-O <==
	Mar 27 23:39:07 addons-910864 crio[682]: time="2024-03-27 23:39:07.153709541Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711582747153678538,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:570813,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=258dc19f-c414-4033-a785-411737033be8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 27 23:39:07 addons-910864 crio[682]: time="2024-03-27 23:39:07.154220426Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d4bebb6-b3ce-4d90-b249-399dd510c771 name=/runtime.v1.RuntimeService/ListContainers
	Mar 27 23:39:07 addons-910864 crio[682]: time="2024-03-27 23:39:07.154303010Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d4bebb6-b3ce-4d90-b249-399dd510c771 name=/runtime.v1.RuntimeService/ListContainers
	Mar 27 23:39:07 addons-910864 crio[682]: time="2024-03-27 23:39:07.154910114Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:021498fd37869affda947c35ec55186ac0e1a19b58e69151adfa271c59235f7e,PodSandboxId:d817bec815667f9a89080710ae992c5f65249ad9b021f45fad03e20856efba1d,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1711582739884246318,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-cbrng,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29bf4177-543d-4dfa-a835-d2dc80f2e79f,},Annotations:map[string]string{io.kubernetes.container.hash: 874f0231,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6a0f1d9c503c15e507349ba2631dce490cb4595c739d81993c3398e2c58710,PodSandboxId:d1f26f1d60d16c4fcdf053d50920c4f50371510c8ab29726f5e67d1973cccca9,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1711582615962971857,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5b77dbd7c4-4jmtg,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: a192abb6-aa9b-48c5-b8ed-d698518f6d50,},Annota
tions:map[string]string{io.kubernetes.container.hash: 9a669f2f,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55ff8c607b91a5507e8ea30e77798a3fa69b176d895ada3925e78113ca130a3c,PodSandboxId:1933577baff606ee1f1af341a3acc19c7fe3c2bf5268bd190a90a4fc5fae265d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1711582597616317267,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 9f410ff0-6c14-4608-abbb-a000665e8f49,},Annotations:map[string]string{io.kubernetes.container.hash: fedd4e32,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677151f29db59aa6d094ab294dbd8f24b56112e9fc4e54b0992a012101913d5e,PodSandboxId:ff765e66a9f94c26973eea2b765b323818b465c1ea37f193541d1f8c5eb859e2,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1711582566587164048,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-7d69788767-dbn28,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 1ac148e4-defc-46dc-a63b-1c9edb033043,},Annotations:map[string]string{io.kubernetes.container.hash: c580e4a4,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27a0f42ebebe25d2341f1d71a43d23869432e7eabe68bdfe5b8826b8ac7847fb,PodSandboxId:68859a38edc4dea3f3aa331adf8386e77bd1608079267ac8a47b559ba43409aa,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1711582563647145403,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5wmgl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: be82d9d5-a0c5-4fe0-acd4-0c95cb1cae94,},Annotations:map[string]string{io.kubernetes.container.hash: 9b0c71dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf8e038561cf4bd49e01a936690acbaee3fdf64b890300ae917965af87d6fba2,PodSandboxId:25c89fa4ba7bb72c7da1720afa28a57bbd2358d45336d77847e51aef07908c1a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:
1711582548988714737,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-g7x6h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9bfd1354-62e5-4840-9890-b769ac79dd02,},Annotations:map[string]string{io.kubernetes.container.hash: 16e0bbae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d43855975977771ce9eb4f3ed589927f452cca477f8486c12f0999671d6403,PodSandboxId:11678d5ca230fad75f954e596aac7d3af98286f0e3cc4bc2c32d7db07a274588,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3f
cf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1711582544159067102,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-g6mfs,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 8df36b49-f92d-46c0-992c-cb8f83698ad1,},Annotations:map[string]string{io.kubernetes.container.hash: 63582996,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c16eb35fadeb3b7584d00a76e47875ab8c44fd5f692216e4e86d66e6173f5ad,PodSandboxId:bdf538c1c703d425f2dfe54fb9fc752a81b7745b302aa1a55823a172c5886495,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d83713
61afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1711582538944603246,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-fmvp5,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 1c917feb-9f7b-4dbc-9b7c-cede23bc4786,},Annotations:map[string]string{io.kubernetes.container.hash: 4b09eca1,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc0d1c097a71b29a1ad30c15d53788b1ca35ae062f21ab5d99e47c783d2dca57,PodSandboxId:a6896f54b83adfe971dfb78c94397c2cea2b89caccdf9351c91066c4710c8a4d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map
[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711582484903348575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 673aa9e5-9e38-4e75-91cd-07fcada37b61,},Annotations:map[string]string{io.kubernetes.container.hash: ae79716,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0623ca267e2df290cba12fad32c4888842f27893e0ceb0dcb7093147be3cbb9,PodSandboxId:4bd37bcc8652897b3d527e03d9ba9385956e612fd1400bf18e1776e02a7cf315,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711582480482123444,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-nt8t8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3632f64b-2085-497f-b53b-8379a52fbec0,},Annotations:map[string]string{io.kubernetes.container.hash: e158318e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a058218f549304e7359855ab9a4ba74de8f2a71ba4ac3ebeb710dbdab18a3f1,PodSandboxId:29878a28221c3ad229bd7a2eefef445b92182e18f068edee327
3b2e95b223e79,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711582479644927030,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kmd42,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb736629-19bc-44d4-a8a2-985b4d347e59,},Annotations:map[string]string{io.kubernetes.container.hash: f3957dcc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab16a20788bc2872d3efc827d21ac466c884a0cb96ba7f807a53aca486c5747c,PodSandboxId:114e36f4687867833982b883f45a2081afc53a2b2766d26c64e4e7249704314b,Metadata:&Container
Metadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711582459247180766,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-910864,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae5a9270b2ceb8021090aabf5455eee5,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb042f59832b38448458ab635c7c8b845a1e0d8d85124206ab1b8a9e88a3146,PodSandboxId:b4458f944daa07fed098e99db0f4745305309398dd2943cfe4f1bf783558b559,Metadata:&ContainerMetadata{Name:etc
d,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711582459196523315,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-910864,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a25f5600ded87070cf05bab7496ade,},Annotations:map[string]string{io.kubernetes.container.hash: b103fd24,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed82bce9d821fed50a43e3f77ef841f96fa1903ed1ab609a2dd02db89d582ed8,PodSandboxId:5723b4565a708021c8495f42c8527f8eab40dc58c2fec2f93ab32b9d1c71742c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSp
ec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711582459149515757,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-910864,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fed9bd4f4c35277368d1a4c753fc8fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33c0dd39e97a48d319dbd4b267a04f0e0c6c8cc8a13551c62cceaba087a67114,PodSandboxId:84f03feaa99d83a8e65384064389a5e9c33aea2ca8e967cf3832aa2028105f20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageS
pec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711582459095619222,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-910864,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddd577f641cad16692a36a9f3a39001,},Annotations:map[string]string{io.kubernetes.container.hash: 41d7ace8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0d4bebb6-b3ce-4d90-b249-399dd510c771 name=/runtime.v1.RuntimeService/ListContainers
	Mar 27 23:39:07 addons-910864 crio[682]: time="2024-03-27 23:39:07.202200113Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=23de516a-a5c9-4bbe-89b6-fa902c1ff3b1 name=/runtime.v1.RuntimeService/Version
	Mar 27 23:39:07 addons-910864 crio[682]: time="2024-03-27 23:39:07.202303108Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=23de516a-a5c9-4bbe-89b6-fa902c1ff3b1 name=/runtime.v1.RuntimeService/Version
	Mar 27 23:39:07 addons-910864 crio[682]: time="2024-03-27 23:39:07.203419294Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cf7e4dcf-ffa4-424c-bee1-631ebd8a58a2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 27 23:39:07 addons-910864 crio[682]: time="2024-03-27 23:39:07.205105897Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711582747205076224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:570813,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cf7e4dcf-ffa4-424c-bee1-631ebd8a58a2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 27 23:39:07 addons-910864 crio[682]: time="2024-03-27 23:39:07.205860920Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c031a12e-9b37-4f09-955d-540dc7da6dbc name=/runtime.v1.RuntimeService/ListContainers
	Mar 27 23:39:07 addons-910864 crio[682]: time="2024-03-27 23:39:07.205926156Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c031a12e-9b37-4f09-955d-540dc7da6dbc name=/runtime.v1.RuntimeService/ListContainers
	Mar 27 23:39:07 addons-910864 crio[682]: time="2024-03-27 23:39:07.206709897Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:021498fd37869affda947c35ec55186ac0e1a19b58e69151adfa271c59235f7e,PodSandboxId:d817bec815667f9a89080710ae992c5f65249ad9b021f45fad03e20856efba1d,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1711582739884246318,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-cbrng,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29bf4177-543d-4dfa-a835-d2dc80f2e79f,},Annotations:map[string]string{io.kubernetes.container.hash: 874f0231,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6a0f1d9c503c15e507349ba2631dce490cb4595c739d81993c3398e2c58710,PodSandboxId:d1f26f1d60d16c4fcdf053d50920c4f50371510c8ab29726f5e67d1973cccca9,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1711582615962971857,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5b77dbd7c4-4jmtg,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: a192abb6-aa9b-48c5-b8ed-d698518f6d50,},Annota
tions:map[string]string{io.kubernetes.container.hash: 9a669f2f,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55ff8c607b91a5507e8ea30e77798a3fa69b176d895ada3925e78113ca130a3c,PodSandboxId:1933577baff606ee1f1af341a3acc19c7fe3c2bf5268bd190a90a4fc5fae265d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1711582597616317267,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 9f410ff0-6c14-4608-abbb-a000665e8f49,},Annotations:map[string]string{io.kubernetes.container.hash: fedd4e32,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677151f29db59aa6d094ab294dbd8f24b56112e9fc4e54b0992a012101913d5e,PodSandboxId:ff765e66a9f94c26973eea2b765b323818b465c1ea37f193541d1f8c5eb859e2,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1711582566587164048,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-7d69788767-dbn28,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 1ac148e4-defc-46dc-a63b-1c9edb033043,},Annotations:map[string]string{io.kubernetes.container.hash: c580e4a4,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27a0f42ebebe25d2341f1d71a43d23869432e7eabe68bdfe5b8826b8ac7847fb,PodSandboxId:68859a38edc4dea3f3aa331adf8386e77bd1608079267ac8a47b559ba43409aa,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1711582563647145403,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5wmgl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: be82d9d5-a0c5-4fe0-acd4-0c95cb1cae94,},Annotations:map[string]string{io.kubernetes.container.hash: 9b0c71dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf8e038561cf4bd49e01a936690acbaee3fdf64b890300ae917965af87d6fba2,PodSandboxId:25c89fa4ba7bb72c7da1720afa28a57bbd2358d45336d77847e51aef07908c1a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:
1711582548988714737,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-g7x6h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9bfd1354-62e5-4840-9890-b769ac79dd02,},Annotations:map[string]string{io.kubernetes.container.hash: 16e0bbae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d43855975977771ce9eb4f3ed589927f452cca477f8486c12f0999671d6403,PodSandboxId:11678d5ca230fad75f954e596aac7d3af98286f0e3cc4bc2c32d7db07a274588,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3f
cf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1711582544159067102,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-g6mfs,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 8df36b49-f92d-46c0-992c-cb8f83698ad1,},Annotations:map[string]string{io.kubernetes.container.hash: 63582996,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c16eb35fadeb3b7584d00a76e47875ab8c44fd5f692216e4e86d66e6173f5ad,PodSandboxId:bdf538c1c703d425f2dfe54fb9fc752a81b7745b302aa1a55823a172c5886495,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d83713
61afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1711582538944603246,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-fmvp5,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 1c917feb-9f7b-4dbc-9b7c-cede23bc4786,},Annotations:map[string]string{io.kubernetes.container.hash: 4b09eca1,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc0d1c097a71b29a1ad30c15d53788b1ca35ae062f21ab5d99e47c783d2dca57,PodSandboxId:a6896f54b83adfe971dfb78c94397c2cea2b89caccdf9351c91066c4710c8a4d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map
[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711582484903348575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 673aa9e5-9e38-4e75-91cd-07fcada37b61,},Annotations:map[string]string{io.kubernetes.container.hash: ae79716,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0623ca267e2df290cba12fad32c4888842f27893e0ceb0dcb7093147be3cbb9,PodSandboxId:4bd37bcc8652897b3d527e03d9ba9385956e612fd1400bf18e1776e02a7cf315,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711582480482123444,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-nt8t8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3632f64b-2085-497f-b53b-8379a52fbec0,},Annotations:map[string]string{io.kubernetes.container.hash: e158318e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a058218f549304e7359855ab9a4ba74de8f2a71ba4ac3ebeb710dbdab18a3f1,PodSandboxId:29878a28221c3ad229bd7a2eefef445b92182e18f068edee327
3b2e95b223e79,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711582479644927030,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kmd42,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb736629-19bc-44d4-a8a2-985b4d347e59,},Annotations:map[string]string{io.kubernetes.container.hash: f3957dcc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab16a20788bc2872d3efc827d21ac466c884a0cb96ba7f807a53aca486c5747c,PodSandboxId:114e36f4687867833982b883f45a2081afc53a2b2766d26c64e4e7249704314b,Metadata:&Container
Metadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711582459247180766,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-910864,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae5a9270b2ceb8021090aabf5455eee5,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb042f59832b38448458ab635c7c8b845a1e0d8d85124206ab1b8a9e88a3146,PodSandboxId:b4458f944daa07fed098e99db0f4745305309398dd2943cfe4f1bf783558b559,Metadata:&ContainerMetadata{Name:etc
d,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711582459196523315,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-910864,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a25f5600ded87070cf05bab7496ade,},Annotations:map[string]string{io.kubernetes.container.hash: b103fd24,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed82bce9d821fed50a43e3f77ef841f96fa1903ed1ab609a2dd02db89d582ed8,PodSandboxId:5723b4565a708021c8495f42c8527f8eab40dc58c2fec2f93ab32b9d1c71742c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSp
ec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711582459149515757,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-910864,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fed9bd4f4c35277368d1a4c753fc8fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33c0dd39e97a48d319dbd4b267a04f0e0c6c8cc8a13551c62cceaba087a67114,PodSandboxId:84f03feaa99d83a8e65384064389a5e9c33aea2ca8e967cf3832aa2028105f20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageS
pec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711582459095619222,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-910864,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddd577f641cad16692a36a9f3a39001,},Annotations:map[string]string{io.kubernetes.container.hash: 41d7ace8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c031a12e-9b37-4f09-955d-540dc7da6dbc name=/runtime.v1.RuntimeService/ListContainers
	Mar 27 23:39:07 addons-910864 crio[682]: time="2024-03-27 23:39:07.261397736Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=20c2ddf0-1523-498e-8d8d-5304506b690a name=/runtime.v1.RuntimeService/Version
	Mar 27 23:39:07 addons-910864 crio[682]: time="2024-03-27 23:39:07.261571912Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=20c2ddf0-1523-498e-8d8d-5304506b690a name=/runtime.v1.RuntimeService/Version
	Mar 27 23:39:07 addons-910864 crio[682]: time="2024-03-27 23:39:07.263536033Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=79ef1f0d-13e7-46c2-ba50-88bb93f8c70d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 27 23:39:07 addons-910864 crio[682]: time="2024-03-27 23:39:07.265212812Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711582747265177930,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:570813,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79ef1f0d-13e7-46c2-ba50-88bb93f8c70d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 27 23:39:07 addons-910864 crio[682]: time="2024-03-27 23:39:07.266071522Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=14514b9f-706f-4c57-80f3-64c98fc46525 name=/runtime.v1.RuntimeService/ListContainers
	Mar 27 23:39:07 addons-910864 crio[682]: time="2024-03-27 23:39:07.266328035Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=14514b9f-706f-4c57-80f3-64c98fc46525 name=/runtime.v1.RuntimeService/ListContainers
	Mar 27 23:39:07 addons-910864 crio[682]: time="2024-03-27 23:39:07.266957776Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:021498fd37869affda947c35ec55186ac0e1a19b58e69151adfa271c59235f7e,PodSandboxId:d817bec815667f9a89080710ae992c5f65249ad9b021f45fad03e20856efba1d,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1711582739884246318,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-cbrng,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29bf4177-543d-4dfa-a835-d2dc80f2e79f,},Annotations:map[string]string{io.kubernetes.container.hash: 874f0231,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6a0f1d9c503c15e507349ba2631dce490cb4595c739d81993c3398e2c58710,PodSandboxId:d1f26f1d60d16c4fcdf053d50920c4f50371510c8ab29726f5e67d1973cccca9,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1711582615962971857,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5b77dbd7c4-4jmtg,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: a192abb6-aa9b-48c5-b8ed-d698518f6d50,},Annota
tions:map[string]string{io.kubernetes.container.hash: 9a669f2f,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55ff8c607b91a5507e8ea30e77798a3fa69b176d895ada3925e78113ca130a3c,PodSandboxId:1933577baff606ee1f1af341a3acc19c7fe3c2bf5268bd190a90a4fc5fae265d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1711582597616317267,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 9f410ff0-6c14-4608-abbb-a000665e8f49,},Annotations:map[string]string{io.kubernetes.container.hash: fedd4e32,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677151f29db59aa6d094ab294dbd8f24b56112e9fc4e54b0992a012101913d5e,PodSandboxId:ff765e66a9f94c26973eea2b765b323818b465c1ea37f193541d1f8c5eb859e2,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1711582566587164048,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-7d69788767-dbn28,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 1ac148e4-defc-46dc-a63b-1c9edb033043,},Annotations:map[string]string{io.kubernetes.container.hash: c580e4a4,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27a0f42ebebe25d2341f1d71a43d23869432e7eabe68bdfe5b8826b8ac7847fb,PodSandboxId:68859a38edc4dea3f3aa331adf8386e77bd1608079267ac8a47b559ba43409aa,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1711582563647145403,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5wmgl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: be82d9d5-a0c5-4fe0-acd4-0c95cb1cae94,},Annotations:map[string]string{io.kubernetes.container.hash: 9b0c71dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf8e038561cf4bd49e01a936690acbaee3fdf64b890300ae917965af87d6fba2,PodSandboxId:25c89fa4ba7bb72c7da1720afa28a57bbd2358d45336d77847e51aef07908c1a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:
1711582548988714737,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-g7x6h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9bfd1354-62e5-4840-9890-b769ac79dd02,},Annotations:map[string]string{io.kubernetes.container.hash: 16e0bbae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d43855975977771ce9eb4f3ed589927f452cca477f8486c12f0999671d6403,PodSandboxId:11678d5ca230fad75f954e596aac7d3af98286f0e3cc4bc2c32d7db07a274588,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3f
cf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1711582544159067102,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-g6mfs,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 8df36b49-f92d-46c0-992c-cb8f83698ad1,},Annotations:map[string]string{io.kubernetes.container.hash: 63582996,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c16eb35fadeb3b7584d00a76e47875ab8c44fd5f692216e4e86d66e6173f5ad,PodSandboxId:bdf538c1c703d425f2dfe54fb9fc752a81b7745b302aa1a55823a172c5886495,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d83713
61afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1711582538944603246,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-fmvp5,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 1c917feb-9f7b-4dbc-9b7c-cede23bc4786,},Annotations:map[string]string{io.kubernetes.container.hash: 4b09eca1,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc0d1c097a71b29a1ad30c15d53788b1ca35ae062f21ab5d99e47c783d2dca57,PodSandboxId:a6896f54b83adfe971dfb78c94397c2cea2b89caccdf9351c91066c4710c8a4d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map
[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711582484903348575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 673aa9e5-9e38-4e75-91cd-07fcada37b61,},Annotations:map[string]string{io.kubernetes.container.hash: ae79716,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0623ca267e2df290cba12fad32c4888842f27893e0ceb0dcb7093147be3cbb9,PodSandboxId:4bd37bcc8652897b3d527e03d9ba9385956e612fd1400bf18e1776e02a7cf315,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711582480482123444,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-nt8t8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3632f64b-2085-497f-b53b-8379a52fbec0,},Annotations:map[string]string{io.kubernetes.container.hash: e158318e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a058218f549304e7359855ab9a4ba74de8f2a71ba4ac3ebeb710dbdab18a3f1,PodSandboxId:29878a28221c3ad229bd7a2eefef445b92182e18f068edee327
3b2e95b223e79,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711582479644927030,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kmd42,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb736629-19bc-44d4-a8a2-985b4d347e59,},Annotations:map[string]string{io.kubernetes.container.hash: f3957dcc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab16a20788bc2872d3efc827d21ac466c884a0cb96ba7f807a53aca486c5747c,PodSandboxId:114e36f4687867833982b883f45a2081afc53a2b2766d26c64e4e7249704314b,Metadata:&Container
Metadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711582459247180766,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-910864,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae5a9270b2ceb8021090aabf5455eee5,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb042f59832b38448458ab635c7c8b845a1e0d8d85124206ab1b8a9e88a3146,PodSandboxId:b4458f944daa07fed098e99db0f4745305309398dd2943cfe4f1bf783558b559,Metadata:&ContainerMetadata{Name:etc
d,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711582459196523315,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-910864,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a25f5600ded87070cf05bab7496ade,},Annotations:map[string]string{io.kubernetes.container.hash: b103fd24,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed82bce9d821fed50a43e3f77ef841f96fa1903ed1ab609a2dd02db89d582ed8,PodSandboxId:5723b4565a708021c8495f42c8527f8eab40dc58c2fec2f93ab32b9d1c71742c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSp
ec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711582459149515757,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-910864,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fed9bd4f4c35277368d1a4c753fc8fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33c0dd39e97a48d319dbd4b267a04f0e0c6c8cc8a13551c62cceaba087a67114,PodSandboxId:84f03feaa99d83a8e65384064389a5e9c33aea2ca8e967cf3832aa2028105f20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageS
pec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711582459095619222,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-910864,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddd577f641cad16692a36a9f3a39001,},Annotations:map[string]string{io.kubernetes.container.hash: 41d7ace8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=14514b9f-706f-4c57-80f3-64c98fc46525 name=/runtime.v1.RuntimeService/ListContainers
	Mar 27 23:39:07 addons-910864 crio[682]: time="2024-03-27 23:39:07.318956287Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=88e63c03-3f80-4ae8-bb48-c5b9698af989 name=/runtime.v1.RuntimeService/Version
	Mar 27 23:39:07 addons-910864 crio[682]: time="2024-03-27 23:39:07.319061417Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=88e63c03-3f80-4ae8-bb48-c5b9698af989 name=/runtime.v1.RuntimeService/Version
	Mar 27 23:39:07 addons-910864 crio[682]: time="2024-03-27 23:39:07.321227153Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e74525d3-f3b9-4c7b-865b-f8a7abe47b9e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 27 23:39:07 addons-910864 crio[682]: time="2024-03-27 23:39:07.323280841Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711582747323245667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:570813,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e74525d3-f3b9-4c7b-865b-f8a7abe47b9e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 27 23:39:07 addons-910864 crio[682]: time="2024-03-27 23:39:07.324307593Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=166f14aa-e3d0-4207-b034-4e15038fcd28 name=/runtime.v1.RuntimeService/ListContainers
	Mar 27 23:39:07 addons-910864 crio[682]: time="2024-03-27 23:39:07.324380595Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=166f14aa-e3d0-4207-b034-4e15038fcd28 name=/runtime.v1.RuntimeService/ListContainers
	Mar 27 23:39:07 addons-910864 crio[682]: time="2024-03-27 23:39:07.324942401Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:021498fd37869affda947c35ec55186ac0e1a19b58e69151adfa271c59235f7e,PodSandboxId:d817bec815667f9a89080710ae992c5f65249ad9b021f45fad03e20856efba1d,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1711582739884246318,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-cbrng,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29bf4177-543d-4dfa-a835-d2dc80f2e79f,},Annotations:map[string]string{io.kubernetes.container.hash: 874f0231,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6a0f1d9c503c15e507349ba2631dce490cb4595c739d81993c3398e2c58710,PodSandboxId:d1f26f1d60d16c4fcdf053d50920c4f50371510c8ab29726f5e67d1973cccca9,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1711582615962971857,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5b77dbd7c4-4jmtg,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: a192abb6-aa9b-48c5-b8ed-d698518f6d50,},Annota
tions:map[string]string{io.kubernetes.container.hash: 9a669f2f,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55ff8c607b91a5507e8ea30e77798a3fa69b176d895ada3925e78113ca130a3c,PodSandboxId:1933577baff606ee1f1af341a3acc19c7fe3c2bf5268bd190a90a4fc5fae265d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1711582597616317267,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 9f410ff0-6c14-4608-abbb-a000665e8f49,},Annotations:map[string]string{io.kubernetes.container.hash: fedd4e32,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677151f29db59aa6d094ab294dbd8f24b56112e9fc4e54b0992a012101913d5e,PodSandboxId:ff765e66a9f94c26973eea2b765b323818b465c1ea37f193541d1f8c5eb859e2,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1711582566587164048,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-7d69788767-dbn28,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 1ac148e4-defc-46dc-a63b-1c9edb033043,},Annotations:map[string]string{io.kubernetes.container.hash: c580e4a4,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27a0f42ebebe25d2341f1d71a43d23869432e7eabe68bdfe5b8826b8ac7847fb,PodSandboxId:68859a38edc4dea3f3aa331adf8386e77bd1608079267ac8a47b559ba43409aa,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1711582563647145403,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5wmgl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: be82d9d5-a0c5-4fe0-acd4-0c95cb1cae94,},Annotations:map[string]string{io.kubernetes.container.hash: 9b0c71dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf8e038561cf4bd49e01a936690acbaee3fdf64b890300ae917965af87d6fba2,PodSandboxId:25c89fa4ba7bb72c7da1720afa28a57bbd2358d45336d77847e51aef07908c1a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:
1711582548988714737,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-g7x6h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9bfd1354-62e5-4840-9890-b769ac79dd02,},Annotations:map[string]string{io.kubernetes.container.hash: 16e0bbae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d43855975977771ce9eb4f3ed589927f452cca477f8486c12f0999671d6403,PodSandboxId:11678d5ca230fad75f954e596aac7d3af98286f0e3cc4bc2c32d7db07a274588,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3f
cf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1711582544159067102,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-g6mfs,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 8df36b49-f92d-46c0-992c-cb8f83698ad1,},Annotations:map[string]string{io.kubernetes.container.hash: 63582996,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c16eb35fadeb3b7584d00a76e47875ab8c44fd5f692216e4e86d66e6173f5ad,PodSandboxId:bdf538c1c703d425f2dfe54fb9fc752a81b7745b302aa1a55823a172c5886495,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d83713
61afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1711582538944603246,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-fmvp5,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 1c917feb-9f7b-4dbc-9b7c-cede23bc4786,},Annotations:map[string]string{io.kubernetes.container.hash: 4b09eca1,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc0d1c097a71b29a1ad30c15d53788b1ca35ae062f21ab5d99e47c783d2dca57,PodSandboxId:a6896f54b83adfe971dfb78c94397c2cea2b89caccdf9351c91066c4710c8a4d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map
[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711582484903348575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 673aa9e5-9e38-4e75-91cd-07fcada37b61,},Annotations:map[string]string{io.kubernetes.container.hash: ae79716,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0623ca267e2df290cba12fad32c4888842f27893e0ceb0dcb7093147be3cbb9,PodSandboxId:4bd37bcc8652897b3d527e03d9ba9385956e612fd1400bf18e1776e02a7cf315,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711582480482123444,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-nt8t8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3632f64b-2085-497f-b53b-8379a52fbec0,},Annotations:map[string]string{io.kubernetes.container.hash: e158318e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a058218f549304e7359855ab9a4ba74de8f2a71ba4ac3ebeb710dbdab18a3f1,PodSandboxId:29878a28221c3ad229bd7a2eefef445b92182e18f068edee327
3b2e95b223e79,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711582479644927030,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kmd42,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb736629-19bc-44d4-a8a2-985b4d347e59,},Annotations:map[string]string{io.kubernetes.container.hash: f3957dcc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab16a20788bc2872d3efc827d21ac466c884a0cb96ba7f807a53aca486c5747c,PodSandboxId:114e36f4687867833982b883f45a2081afc53a2b2766d26c64e4e7249704314b,Metadata:&Container
Metadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711582459247180766,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-910864,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae5a9270b2ceb8021090aabf5455eee5,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb042f59832b38448458ab635c7c8b845a1e0d8d85124206ab1b8a9e88a3146,PodSandboxId:b4458f944daa07fed098e99db0f4745305309398dd2943cfe4f1bf783558b559,Metadata:&ContainerMetadata{Name:etc
d,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711582459196523315,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-910864,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a25f5600ded87070cf05bab7496ade,},Annotations:map[string]string{io.kubernetes.container.hash: b103fd24,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed82bce9d821fed50a43e3f77ef841f96fa1903ed1ab609a2dd02db89d582ed8,PodSandboxId:5723b4565a708021c8495f42c8527f8eab40dc58c2fec2f93ab32b9d1c71742c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSp
ec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711582459149515757,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-910864,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fed9bd4f4c35277368d1a4c753fc8fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33c0dd39e97a48d319dbd4b267a04f0e0c6c8cc8a13551c62cceaba087a67114,PodSandboxId:84f03feaa99d83a8e65384064389a5e9c33aea2ca8e967cf3832aa2028105f20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageS
pec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711582459095619222,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-910864,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddd577f641cad16692a36a9f3a39001,},Annotations:map[string]string{io.kubernetes.container.hash: 41d7ace8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=166f14aa-e3d0-4207-b034-4e15038fcd28 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	021498fd37869       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      7 seconds ago       Running             hello-world-app           0                   d817bec815667       hello-world-app-5d77478584-cbrng
	4b6a0f1d9c503       ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f                        2 minutes ago       Running             headlamp                  0                   d1f26f1d60d16       headlamp-5b77dbd7c4-4jmtg
	55ff8c607b91a       docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742                              2 minutes ago       Running             nginx                     0                   1933577baff60       nginx
	677151f29db59       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 3 minutes ago       Running             gcp-auth                  0                   ff765e66a9f94       gcp-auth-7d69788767-dbn28
	27a0f42ebebe2       b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135                                                             3 minutes ago       Exited              patch                     2                   68859a38edc4d       ingress-nginx-admission-patch-5wmgl
	cf8e038561cf4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   3 minutes ago       Exited              create                    0                   25c89fa4ba7bb       ingress-nginx-admission-create-g7x6h
	f2d4385597597       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   11678d5ca230f       local-path-provisioner-78b46b4d5c-g6mfs
	0c16eb35fadeb       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                      0                   bdf538c1c703d       yakd-dashboard-9947fc6bf-fmvp5
	bc0d1c097a71b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   a6896f54b83ad       storage-provisioner
	c0623ca267e2d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             4 minutes ago       Running             coredns                   0                   4bd37bcc86528       coredns-76f75df574-nt8t8
	1a058218f5493       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                                             4 minutes ago       Running             kube-proxy                0                   29878a28221c3       kube-proxy-kmd42
	ab16a20788bc2       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                                             4 minutes ago       Running             kube-scheduler            0                   114e36f468786       kube-scheduler-addons-910864
	beb042f59832b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             4 minutes ago       Running             etcd                      0                   b4458f944daa0       etcd-addons-910864
	ed82bce9d821f       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                                             4 minutes ago       Running             kube-controller-manager   0                   5723b4565a708       kube-controller-manager-addons-910864
	33c0dd39e97a4       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                                             4 minutes ago       Running             kube-apiserver            0                   84f03feaa99d8       kube-apiserver-addons-910864
	
	
	==> coredns [c0623ca267e2df290cba12fad32c4888842f27893e0ceb0dcb7093147be3cbb9] <==
	[INFO] 10.244.0.7:53233 - 22894 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000070732s
	[INFO] 10.244.0.7:40973 - 16197 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000074277s
	[INFO] 10.244.0.7:40973 - 65095 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000047521s
	[INFO] 10.244.0.7:46252 - 62582 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000042157s
	[INFO] 10.244.0.7:46252 - 52596 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000056132s
	[INFO] 10.244.0.7:44393 - 62726 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000089542s
	[INFO] 10.244.0.7:44393 - 26884 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000048456s
	[INFO] 10.244.0.7:59511 - 21021 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000052218s
	[INFO] 10.244.0.7:59511 - 56347 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000040094s
	[INFO] 10.244.0.7:38544 - 5764 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000037724s
	[INFO] 10.244.0.7:38544 - 50054 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000026826s
	[INFO] 10.244.0.7:56144 - 57294 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00003107s
	[INFO] 10.244.0.7:56144 - 13515 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000024388s
	[INFO] 10.244.0.7:51902 - 43433 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000028963s
	[INFO] 10.244.0.7:51902 - 11179 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000035637s
	[INFO] 10.244.0.22:47476 - 830 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000449753s
	[INFO] 10.244.0.22:56378 - 14494 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000177112s
	[INFO] 10.244.0.22:49058 - 42061 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000117389s
	[INFO] 10.244.0.22:34945 - 48942 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000198513s
	[INFO] 10.244.0.22:36012 - 63430 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000079957s
	[INFO] 10.244.0.22:60789 - 10199 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000343312s
	[INFO] 10.244.0.22:51576 - 6887 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000947566s
	[INFO] 10.244.0.22:50540 - 34484 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000674823s
	[INFO] 10.244.0.23:53885 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000273231s
	[INFO] 10.244.0.23:57801 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000141635s
	
	
	==> describe nodes <==
	Name:               addons-910864
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-910864
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=addons-910864
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_27T23_34_25_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-910864
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 23:34:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-910864
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 27 Mar 2024 23:39:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 27 Mar 2024 23:37:29 +0000   Wed, 27 Mar 2024 23:34:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 27 Mar 2024 23:37:29 +0000   Wed, 27 Mar 2024 23:34:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 27 Mar 2024 23:37:29 +0000   Wed, 27 Mar 2024 23:34:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 27 Mar 2024 23:37:29 +0000   Wed, 27 Mar 2024 23:34:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.45
	  Hostname:    addons-910864
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 7fdc7d44496947149d4114f64720d10c
	  System UUID:                7fdc7d44-4969-4714-9d41-14f64720d10c
	  Boot ID:                    dad7c70f-d1ce-4589-856f-84b27bef5eb7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-cbrng           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  gcp-auth                    gcp-auth-7d69788767-dbn28                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  headlamp                    headlamp-5b77dbd7c4-4jmtg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 coredns-76f75df574-nt8t8                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m29s
	  kube-system                 etcd-addons-910864                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m42s
	  kube-system                 kube-apiserver-addons-910864               250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-controller-manager-addons-910864      200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-proxy-kmd42                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-scheduler-addons-910864               100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  local-path-storage          local-path-provisioner-78b46b4d5c-g6mfs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-fmvp5             0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m26s  kube-proxy       
	  Normal  Starting                 4m42s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m42s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m42s  kubelet          Node addons-910864 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m42s  kubelet          Node addons-910864 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m42s  kubelet          Node addons-910864 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m42s  kubelet          Node addons-910864 status is now: NodeReady
	  Normal  RegisteredNode           4m30s  node-controller  Node addons-910864 event: Registered Node addons-910864 in Controller
	
	
	==> dmesg <==
	[  +5.022543] kauditd_printk_skb: 95 callbacks suppressed
	[  +5.321324] kauditd_printk_skb: 98 callbacks suppressed
	[  +6.537496] kauditd_printk_skb: 80 callbacks suppressed
	[Mar27 23:35] kauditd_printk_skb: 4 callbacks suppressed
	[ +13.040868] kauditd_printk_skb: 4 callbacks suppressed
	[ +14.709344] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.549193] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.225938] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.598140] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.371456] kauditd_printk_skb: 65 callbacks suppressed
	[Mar27 23:36] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.044319] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.147716] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.207747] kauditd_printk_skb: 32 callbacks suppressed
	[ +11.942408] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.380667] kauditd_printk_skb: 45 callbacks suppressed
	[  +6.068820] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.045618] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.669238] kauditd_printk_skb: 39 callbacks suppressed
	[Mar27 23:37] kauditd_printk_skb: 1 callbacks suppressed
	[ +11.025820] kauditd_printk_skb: 5 callbacks suppressed
	[ +23.033274] kauditd_printk_skb: 5 callbacks suppressed
	[  +8.466771] kauditd_printk_skb: 25 callbacks suppressed
	[Mar27 23:38] kauditd_printk_skb: 2 callbacks suppressed
	[Mar27 23:39] kauditd_printk_skb: 17 callbacks suppressed
	
	
	==> etcd [beb042f59832b38448458ab635c7c8b845a1e0d8d85124206ab1b8a9e88a3146] <==
	{"level":"warn","ts":"2024-03-27T23:35:57.975559Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"305.776274ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14709"}
	{"level":"info","ts":"2024-03-27T23:35:57.975585Z","caller":"traceutil/trace.go:171","msg":"trace[1908060836] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1087; }","duration":"305.808634ms","start":"2024-03-27T23:35:57.669769Z","end":"2024-03-27T23:35:57.975578Z","steps":["trace[1908060836] 'agreement among raft nodes before linearized reading'  (duration: 305.637679ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-27T23:35:57.975601Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-27T23:35:57.669756Z","time spent":"305.841638ms","remote":"127.0.0.1:56798","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":14732,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"warn","ts":"2024-03-27T23:35:57.975741Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"241.511468ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gadget/gadget-gsn5n\" ","response":"range_response_count:1 size:9577"}
	{"level":"info","ts":"2024-03-27T23:35:57.975755Z","caller":"traceutil/trace.go:171","msg":"trace[1795967474] range","detail":"{range_begin:/registry/pods/gadget/gadget-gsn5n; range_end:; response_count:1; response_revision:1087; }","duration":"241.543ms","start":"2024-03-27T23:35:57.734207Z","end":"2024-03-27T23:35:57.97575Z","steps":["trace[1795967474] 'agreement among raft nodes before linearized reading'  (duration: 241.495142ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-27T23:35:57.97595Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.653422ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11447"}
	{"level":"info","ts":"2024-03-27T23:35:57.975964Z","caller":"traceutil/trace.go:171","msg":"trace[1996905847] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1087; }","duration":"196.699595ms","start":"2024-03-27T23:35:57.77926Z","end":"2024-03-27T23:35:57.97596Z","steps":["trace[1996905847] 'agreement among raft nodes before linearized reading'  (duration: 196.648117ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-27T23:35:57.976116Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"213.410266ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85294"}
	{"level":"info","ts":"2024-03-27T23:35:57.976132Z","caller":"traceutil/trace.go:171","msg":"trace[1966797885] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1087; }","duration":"213.447073ms","start":"2024-03-27T23:35:57.76268Z","end":"2024-03-27T23:35:57.976127Z","steps":["trace[1966797885] 'agreement among raft nodes before linearized reading'  (duration: 213.332761ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-27T23:36:05.968349Z","caller":"traceutil/trace.go:171","msg":"trace[348339844] transaction","detail":"{read_only:false; response_revision:1124; number_of_response:1; }","duration":"106.504824ms","start":"2024-03-27T23:36:05.861829Z","end":"2024-03-27T23:36:05.968333Z","steps":["trace[348339844] 'process raft request'  (duration: 106.391067ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-27T23:36:10.922856Z","caller":"traceutil/trace.go:171","msg":"trace[1809809801] linearizableReadLoop","detail":"{readStateIndex:1199; appliedIndex:1198; }","duration":"160.951261ms","start":"2024-03-27T23:36:10.76189Z","end":"2024-03-27T23:36:10.922842Z","steps":["trace[1809809801] 'read index received'  (duration: 160.804222ms)","trace[1809809801] 'applied index is now lower than readState.Index'  (duration: 146.358µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-27T23:36:10.923003Z","caller":"traceutil/trace.go:171","msg":"trace[1746518578] transaction","detail":"{read_only:false; response_revision:1161; number_of_response:1; }","duration":"260.849312ms","start":"2024-03-27T23:36:10.662145Z","end":"2024-03-27T23:36:10.922994Z","steps":["trace[1746518578] 'process raft request'  (duration: 260.590021ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-27T23:36:10.923359Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.480172ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85294"}
	{"level":"info","ts":"2024-03-27T23:36:10.923417Z","caller":"traceutil/trace.go:171","msg":"trace[395227241] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1161; }","duration":"161.567879ms","start":"2024-03-27T23:36:10.76184Z","end":"2024-03-27T23:36:10.923408Z","steps":["trace[395227241] 'agreement among raft nodes before linearized reading'  (duration: 161.386045ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-27T23:36:21.175827Z","caller":"traceutil/trace.go:171","msg":"trace[541559334] linearizableReadLoop","detail":"{readStateIndex:1248; appliedIndex:1247; }","duration":"188.793041ms","start":"2024-03-27T23:36:20.98701Z","end":"2024-03-27T23:36:21.175803Z","steps":["trace[541559334] 'read index received'  (duration: 188.633319ms)","trace[541559334] 'applied index is now lower than readState.Index'  (duration: 159.148µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-27T23:36:21.176042Z","caller":"traceutil/trace.go:171","msg":"trace[2139001982] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1208; }","duration":"202.354039ms","start":"2024-03-27T23:36:20.973674Z","end":"2024-03-27T23:36:21.176028Z","steps":["trace[2139001982] 'process raft request'  (duration: 202.000604ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-27T23:36:21.176189Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.167053ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-03-27T23:36:21.17684Z","caller":"traceutil/trace.go:171","msg":"trace[1863085693] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1208; }","duration":"189.868144ms","start":"2024-03-27T23:36:20.986963Z","end":"2024-03-27T23:36:21.176831Z","steps":["trace[1863085693] 'agreement among raft nodes before linearized reading'  (duration: 189.103174ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-27T23:36:21.176657Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.735139ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-03-27T23:36:21.17698Z","caller":"traceutil/trace.go:171","msg":"trace[895345880] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1208; }","duration":"147.080152ms","start":"2024-03-27T23:36:21.029893Z","end":"2024-03-27T23:36:21.176973Z","steps":["trace[895345880] 'agreement among raft nodes before linearized reading'  (duration: 146.708881ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-27T23:36:55.858369Z","caller":"traceutil/trace.go:171","msg":"trace[1800022335] linearizableReadLoop","detail":"{readStateIndex:1568; appliedIndex:1567; }","duration":"294.175481ms","start":"2024-03-27T23:36:55.564164Z","end":"2024-03-27T23:36:55.85834Z","steps":["trace[1800022335] 'read index received'  (duration: 294.013716ms)","trace[1800022335] 'applied index is now lower than readState.Index'  (duration: 161.283µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-27T23:36:55.858747Z","caller":"traceutil/trace.go:171","msg":"trace[1478623832] transaction","detail":"{read_only:false; response_revision:1509; number_of_response:1; }","duration":"431.866316ms","start":"2024-03-27T23:36:55.426858Z","end":"2024-03-27T23:36:55.858725Z","steps":["trace[1478623832] 'process raft request'  (duration: 431.374374ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-27T23:36:55.858774Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"294.500002ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" ","response":"range_response_count:1 size:822"}
	{"level":"warn","ts":"2024-03-27T23:36:55.858858Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-27T23:36:55.426845Z","time spent":"431.939361ms","remote":"127.0.0.1:56794","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1505 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-03-27T23:36:55.858872Z","caller":"traceutil/trace.go:171","msg":"trace[1230949943] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1509; }","duration":"294.72401ms","start":"2024-03-27T23:36:55.564139Z","end":"2024-03-27T23:36:55.858863Z","steps":["trace[1230949943] 'agreement among raft nodes before linearized reading'  (duration: 294.421004ms)"],"step_count":1}
	
	
	==> gcp-auth [677151f29db59aa6d094ab294dbd8f24b56112e9fc4e54b0992a012101913d5e] <==
	2024/03/27 23:36:06 GCP Auth Webhook started!
	2024/03/27 23:36:25 Ready to marshal response ...
	2024/03/27 23:36:25 Ready to write response ...
	2024/03/27 23:36:26 Ready to marshal response ...
	2024/03/27 23:36:26 Ready to write response ...
	2024/03/27 23:36:32 Ready to marshal response ...
	2024/03/27 23:36:32 Ready to write response ...
	2024/03/27 23:36:33 Ready to marshal response ...
	2024/03/27 23:36:33 Ready to write response ...
	2024/03/27 23:36:33 Ready to marshal response ...
	2024/03/27 23:36:33 Ready to write response ...
	2024/03/27 23:36:48 Ready to marshal response ...
	2024/03/27 23:36:48 Ready to write response ...
	2024/03/27 23:36:49 Ready to marshal response ...
	2024/03/27 23:36:49 Ready to write response ...
	2024/03/27 23:36:49 Ready to marshal response ...
	2024/03/27 23:36:49 Ready to write response ...
	2024/03/27 23:36:49 Ready to marshal response ...
	2024/03/27 23:36:49 Ready to write response ...
	2024/03/27 23:37:02 Ready to marshal response ...
	2024/03/27 23:37:02 Ready to write response ...
	2024/03/27 23:37:28 Ready to marshal response ...
	2024/03/27 23:37:28 Ready to write response ...
	2024/03/27 23:38:56 Ready to marshal response ...
	2024/03/27 23:38:56 Ready to write response ...
	
	
	==> kernel <==
	 23:39:07 up 5 min,  0 users,  load average: 1.05, 1.22, 0.58
	Linux addons-910864 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [33c0dd39e97a48d319dbd4b267a04f0e0c6c8cc8a13551c62cceaba087a67114] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0327 23:35:28.587154       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.55.158:443/apis/metrics.k8s.io/v1beta1: Get "https://10.111.55.158:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.111.55.158:443: connect: connection refused
	I0327 23:35:28.651697       1 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0327 23:36:26.045753       1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0327 23:36:27.119656       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0327 23:36:29.594993       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0327 23:36:32.023811       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0327 23:36:32.291648       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.245.174"}
	I0327 23:36:49.888864       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.124.122"}
	I0327 23:37:12.759641       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0327 23:37:45.199107       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0327 23:37:45.199176       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0327 23:37:45.229546       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0327 23:37:45.229605       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0327 23:37:45.251401       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0327 23:37:45.251557       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0327 23:37:45.258864       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0327 23:37:45.262531       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0327 23:37:45.300280       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0327 23:37:45.300335       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0327 23:37:46.251189       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0327 23:37:46.301139       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0327 23:37:46.309781       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0327 23:38:56.351813       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.198.11"}
	E0327 23:38:59.349347       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [ed82bce9d821fed50a43e3f77ef841f96fa1903ed1ab609a2dd02db89d582ed8] <==
	W0327 23:38:10.287508       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 23:38:10.287614       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0327 23:38:18.041547       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 23:38:18.041646       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0327 23:38:24.830570       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 23:38:24.830607       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0327 23:38:28.214593       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 23:38:28.214699       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0327 23:38:44.818327       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 23:38:44.818393       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0327 23:38:54.628260       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 23:38:54.628289       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0327 23:38:55.654737       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 23:38:55.654808       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0327 23:38:56.156042       1 event.go:376] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0327 23:38:56.184266       1 event.go:376] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-cbrng"
	I0327 23:38:56.206399       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="51.432651ms"
	I0327 23:38:56.219775       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="12.162358ms"
	I0327 23:38:56.220095       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="53.183µs"
	I0327 23:38:56.228249       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="70.878µs"
	I0327 23:38:59.213530       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0327 23:38:59.217020       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="3.798µs"
	I0327 23:38:59.233625       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0327 23:39:00.100251       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="9.353983ms"
	I0327 23:39:00.100503       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="111.436µs"
	
	
	==> kube-proxy [1a058218f549304e7359855ab9a4ba74de8f2a71ba4ac3ebeb710dbdab18a3f1] <==
	I0327 23:34:40.474215       1 server_others.go:72] "Using iptables proxy"
	I0327 23:34:40.496138       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.45"]
	I0327 23:34:40.617558       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0327 23:34:40.617616       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0327 23:34:40.617637       1 server_others.go:168] "Using iptables Proxier"
	I0327 23:34:40.622675       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0327 23:34:40.622895       1 server.go:865] "Version info" version="v1.29.3"
	I0327 23:34:40.622909       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0327 23:34:40.624284       1 config.go:188] "Starting service config controller"
	I0327 23:34:40.624302       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0327 23:34:40.624324       1 config.go:97] "Starting endpoint slice config controller"
	I0327 23:34:40.624327       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0327 23:34:40.624675       1 config.go:315] "Starting node config controller"
	I0327 23:34:40.624683       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0327 23:34:40.724509       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0327 23:34:40.724546       1 shared_informer.go:318] Caches are synced for service config
	I0327 23:34:40.725058       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [ab16a20788bc2872d3efc827d21ac466c884a0cb96ba7f807a53aca486c5747c] <==
	W0327 23:34:22.118671       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0327 23:34:22.118679       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0327 23:34:22.118714       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0327 23:34:22.118745       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0327 23:34:22.118775       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0327 23:34:22.118802       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0327 23:34:22.118838       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0327 23:34:22.118846       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0327 23:34:22.118937       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0327 23:34:22.118970       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0327 23:34:23.037746       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0327 23:34:23.037903       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0327 23:34:23.043586       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0327 23:34:23.043633       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0327 23:34:23.054095       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0327 23:34:23.054197       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0327 23:34:23.150576       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0327 23:34:23.150818       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0327 23:34:23.197959       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0327 23:34:23.198792       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0327 23:34:23.244951       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0327 23:34:23.245081       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0327 23:34:23.476913       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0327 23:34:23.476974       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0327 23:34:26.103504       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 27 23:38:56 addons-910864 kubelet[1281]: I0327 23:38:56.200383    1281 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d557f60-5346-47fa-a57d-b37afcbce7c3" containerName="hostpath"
	Mar 27 23:38:56 addons-910864 kubelet[1281]: I0327 23:38:56.200423    1281 memory_manager.go:354] "RemoveStaleState removing state" podUID="d34b6d9e-df6e-4301-9086-9183bb090428" containerName="csi-attacher"
	Mar 27 23:38:56 addons-910864 kubelet[1281]: I0327 23:38:56.346755    1281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4zzx\" (UniqueName: \"kubernetes.io/projected/29bf4177-543d-4dfa-a835-d2dc80f2e79f-kube-api-access-r4zzx\") pod \"hello-world-app-5d77478584-cbrng\" (UID: \"29bf4177-543d-4dfa-a835-d2dc80f2e79f\") " pod="default/hello-world-app-5d77478584-cbrng"
	Mar 27 23:38:56 addons-910864 kubelet[1281]: I0327 23:38:56.346969    1281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/29bf4177-543d-4dfa-a835-d2dc80f2e79f-gcp-creds\") pod \"hello-world-app-5d77478584-cbrng\" (UID: \"29bf4177-543d-4dfa-a835-d2dc80f2e79f\") " pod="default/hello-world-app-5d77478584-cbrng"
	Mar 27 23:38:57 addons-910864 kubelet[1281]: I0327 23:38:57.557214    1281 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9j5xn\" (UniqueName: \"kubernetes.io/projected/b851c646-d18a-4817-81d3-b8664af063c8-kube-api-access-9j5xn\") pod \"b851c646-d18a-4817-81d3-b8664af063c8\" (UID: \"b851c646-d18a-4817-81d3-b8664af063c8\") "
	Mar 27 23:38:57 addons-910864 kubelet[1281]: I0327 23:38:57.561131    1281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b851c646-d18a-4817-81d3-b8664af063c8-kube-api-access-9j5xn" (OuterVolumeSpecName: "kube-api-access-9j5xn") pod "b851c646-d18a-4817-81d3-b8664af063c8" (UID: "b851c646-d18a-4817-81d3-b8664af063c8"). InnerVolumeSpecName "kube-api-access-9j5xn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 27 23:38:57 addons-910864 kubelet[1281]: I0327 23:38:57.657622    1281 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9j5xn\" (UniqueName: \"kubernetes.io/projected/b851c646-d18a-4817-81d3-b8664af063c8-kube-api-access-9j5xn\") on node \"addons-910864\" DevicePath \"\""
	Mar 27 23:38:58 addons-910864 kubelet[1281]: I0327 23:38:58.063414    1281 scope.go:117] "RemoveContainer" containerID="4d76b56804390ecc894d6f550d9159a746ba634d784f1bf03b7dcae820fe8e2e"
	Mar 27 23:38:58 addons-910864 kubelet[1281]: I0327 23:38:58.120872    1281 scope.go:117] "RemoveContainer" containerID="4d76b56804390ecc894d6f550d9159a746ba634d784f1bf03b7dcae820fe8e2e"
	Mar 27 23:38:58 addons-910864 kubelet[1281]: E0327 23:38:58.124203    1281 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d76b56804390ecc894d6f550d9159a746ba634d784f1bf03b7dcae820fe8e2e\": container with ID starting with 4d76b56804390ecc894d6f550d9159a746ba634d784f1bf03b7dcae820fe8e2e not found: ID does not exist" containerID="4d76b56804390ecc894d6f550d9159a746ba634d784f1bf03b7dcae820fe8e2e"
	Mar 27 23:38:58 addons-910864 kubelet[1281]: I0327 23:38:58.124274    1281 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d76b56804390ecc894d6f550d9159a746ba634d784f1bf03b7dcae820fe8e2e"} err="failed to get container status \"4d76b56804390ecc894d6f550d9159a746ba634d784f1bf03b7dcae820fe8e2e\": rpc error: code = NotFound desc = could not find container \"4d76b56804390ecc894d6f550d9159a746ba634d784f1bf03b7dcae820fe8e2e\": container with ID starting with 4d76b56804390ecc894d6f550d9159a746ba634d784f1bf03b7dcae820fe8e2e not found: ID does not exist"
	Mar 27 23:38:59 addons-910864 kubelet[1281]: I0327 23:38:59.647164    1281 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bfd1354-62e5-4840-9890-b769ac79dd02" path="/var/lib/kubelet/pods/9bfd1354-62e5-4840-9890-b769ac79dd02/volumes"
	Mar 27 23:38:59 addons-910864 kubelet[1281]: I0327 23:38:59.648187    1281 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b851c646-d18a-4817-81d3-b8664af063c8" path="/var/lib/kubelet/pods/b851c646-d18a-4817-81d3-b8664af063c8/volumes"
	Mar 27 23:38:59 addons-910864 kubelet[1281]: I0327 23:38:59.648771    1281 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be82d9d5-a0c5-4fe0-acd4-0c95cb1cae94" path="/var/lib/kubelet/pods/be82d9d5-a0c5-4fe0-acd4-0c95cb1cae94/volumes"
	Mar 27 23:39:02 addons-910864 kubelet[1281]: I0327 23:39:02.501549    1281 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9e63607c-a16f-43ee-a9d3-e6aed40b3436-webhook-cert\") pod \"9e63607c-a16f-43ee-a9d3-e6aed40b3436\" (UID: \"9e63607c-a16f-43ee-a9d3-e6aed40b3436\") "
	Mar 27 23:39:02 addons-910864 kubelet[1281]: I0327 23:39:02.501603    1281 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6hl9\" (UniqueName: \"kubernetes.io/projected/9e63607c-a16f-43ee-a9d3-e6aed40b3436-kube-api-access-v6hl9\") pod \"9e63607c-a16f-43ee-a9d3-e6aed40b3436\" (UID: \"9e63607c-a16f-43ee-a9d3-e6aed40b3436\") "
	Mar 27 23:39:02 addons-910864 kubelet[1281]: I0327 23:39:02.504820    1281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e63607c-a16f-43ee-a9d3-e6aed40b3436-kube-api-access-v6hl9" (OuterVolumeSpecName: "kube-api-access-v6hl9") pod "9e63607c-a16f-43ee-a9d3-e6aed40b3436" (UID: "9e63607c-a16f-43ee-a9d3-e6aed40b3436"). InnerVolumeSpecName "kube-api-access-v6hl9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 27 23:39:02 addons-910864 kubelet[1281]: I0327 23:39:02.506882    1281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e63607c-a16f-43ee-a9d3-e6aed40b3436-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "9e63607c-a16f-43ee-a9d3-e6aed40b3436" (UID: "9e63607c-a16f-43ee-a9d3-e6aed40b3436"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Mar 27 23:39:02 addons-910864 kubelet[1281]: I0327 23:39:02.602622    1281 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9e63607c-a16f-43ee-a9d3-e6aed40b3436-webhook-cert\") on node \"addons-910864\" DevicePath \"\""
	Mar 27 23:39:02 addons-910864 kubelet[1281]: I0327 23:39:02.602662    1281 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-v6hl9\" (UniqueName: \"kubernetes.io/projected/9e63607c-a16f-43ee-a9d3-e6aed40b3436-kube-api-access-v6hl9\") on node \"addons-910864\" DevicePath \"\""
	Mar 27 23:39:03 addons-910864 kubelet[1281]: I0327 23:39:03.096074    1281 scope.go:117] "RemoveContainer" containerID="e595f722c4e34dd493acbe654f41f77874c5302605fae1c80c8208027cbed335"
	Mar 27 23:39:03 addons-910864 kubelet[1281]: I0327 23:39:03.118809    1281 scope.go:117] "RemoveContainer" containerID="e595f722c4e34dd493acbe654f41f77874c5302605fae1c80c8208027cbed335"
	Mar 27 23:39:03 addons-910864 kubelet[1281]: E0327 23:39:03.119856    1281 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e595f722c4e34dd493acbe654f41f77874c5302605fae1c80c8208027cbed335\": container with ID starting with e595f722c4e34dd493acbe654f41f77874c5302605fae1c80c8208027cbed335 not found: ID does not exist" containerID="e595f722c4e34dd493acbe654f41f77874c5302605fae1c80c8208027cbed335"
	Mar 27 23:39:03 addons-910864 kubelet[1281]: I0327 23:39:03.119898    1281 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e595f722c4e34dd493acbe654f41f77874c5302605fae1c80c8208027cbed335"} err="failed to get container status \"e595f722c4e34dd493acbe654f41f77874c5302605fae1c80c8208027cbed335\": rpc error: code = NotFound desc = could not find container \"e595f722c4e34dd493acbe654f41f77874c5302605fae1c80c8208027cbed335\": container with ID starting with e595f722c4e34dd493acbe654f41f77874c5302605fae1c80c8208027cbed335 not found: ID does not exist"
	Mar 27 23:39:03 addons-910864 kubelet[1281]: I0327 23:39:03.632200    1281 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e63607c-a16f-43ee-a9d3-e6aed40b3436" path="/var/lib/kubelet/pods/9e63607c-a16f-43ee-a9d3-e6aed40b3436/volumes"
	
	
	==> storage-provisioner [bc0d1c097a71b29a1ad30c15d53788b1ca35ae062f21ab5d99e47c783d2dca57] <==
	I0327 23:34:46.579580       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0327 23:34:46.594512       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0327 23:34:46.594610       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0327 23:34:46.617347       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0327 23:34:46.617552       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-910864_9a0f28e2-0653-4b7a-b03f-f1e7dfdbdf3a!
	I0327 23:34:46.618311       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3a6f2722-0c23-44f9-9be7-634fb6a998e0", APIVersion:"v1", ResourceVersion:"642", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-910864_9a0f28e2-0653-4b7a-b03f-f1e7dfdbdf3a became leader
	I0327 23:34:46.717848       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-910864_9a0f28e2-0653-4b7a-b03f-f1e7dfdbdf3a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-910864 -n addons-910864
helpers_test.go:261: (dbg) Run:  kubectl --context addons-910864 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (156.82s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.34s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-910864
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-910864: exit status 82 (2m0.497659735s)

                                                
                                                
-- stdout --
	* Stopping node "addons-910864"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-910864" : exit status 82
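Note: exit status 82 (GUEST_STOP_TIMEOUT) here means the KVM guest was still reported "Running" after the full stop wait. As a sketch for local reproduction only (same profile name as this run; flags as used elsewhere in this report), re-running the stop with verbose output and collecting the log bundle suggested in the error box may show where the shutdown stalls:

	out/minikube-linux-amd64 stop -p addons-910864 --alsologtostderr -v=7
	out/minikube-linux-amd64 -p addons-910864 logs --file=logs.txt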
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-910864
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-910864: exit status 11 (21.556955465s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.45:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-910864" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-910864
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-910864: exit status 11 (6.143857322s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.45:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-910864" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-910864
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-910864: exit status 11 (6.144300588s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.45:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-910864" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.34s)
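Note: the three addon failures above share one symptom: after the timed-out stop, SSH to 192.168.39.45:22 returns "no route to host", so the paused-state check run by `addons enable` / `addons disable` cannot reach the guest. A minimal sketch, assuming one is debugging interactively against the same profile, is to confirm the node is reachable again before retrying the addon commands:

	out/minikube-linux-amd64 status -p addons-910864
	out/minikube-linux-amd64 ssh -p addons-910864 "echo ok"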

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (6.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-800754 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (4.312644568s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-800754 image ls: (2.327659708s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-800754" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (6.64s)
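Note: the load itself returned success, but the follow-up `image ls` does not show the expected tag. A hedged manual check drives the same two commands the test uses (the tar path below is this run's workspace path):

	out/minikube-linux-amd64 -p functional-800754 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
	out/minikube-linux-amd64 -p functional-800754 image ls | grep addon-resizer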

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (142.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 node stop m02 -v=7 --alsologtostderr
E0327 23:59:05.053528 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-377576 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.506795273s)

                                                
                                                
-- stdout --
	* Stopping node "ha-377576-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 23:58:04.708419 1090556 out.go:291] Setting OutFile to fd 1 ...
	I0327 23:58:04.708579 1090556 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:58:04.708592 1090556 out.go:304] Setting ErrFile to fd 2...
	I0327 23:58:04.708598 1090556 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:58:04.708872 1090556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0327 23:58:04.709174 1090556 mustload.go:65] Loading cluster: ha-377576
	I0327 23:58:04.709615 1090556 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 23:58:04.709634 1090556 stop.go:39] StopHost: ha-377576-m02
	I0327 23:58:04.710088 1090556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:58:04.710154 1090556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:58:04.726330 1090556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39797
	I0327 23:58:04.726909 1090556 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:58:04.727603 1090556 main.go:141] libmachine: Using API Version  1
	I0327 23:58:04.727630 1090556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:58:04.728037 1090556 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:58:04.730622 1090556 out.go:177] * Stopping node "ha-377576-m02"  ...
	I0327 23:58:04.732005 1090556 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0327 23:58:04.732064 1090556 main.go:141] libmachine: (ha-377576-m02) Calling .DriverName
	I0327 23:58:04.732351 1090556 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0327 23:58:04.732378 1090556 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	I0327 23:58:04.735369 1090556 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:58:04.735664 1090556 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:58:04.735703 1090556 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:58:04.735869 1090556 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHPort
	I0327 23:58:04.736098 1090556 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:58:04.736308 1090556 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHUsername
	I0327 23:58:04.736493 1090556 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/id_rsa Username:docker}
	I0327 23:58:04.824000 1090556 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0327 23:58:04.879259 1090556 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0327 23:58:04.936085 1090556 main.go:141] libmachine: Stopping "ha-377576-m02"...
	I0327 23:58:04.936124 1090556 main.go:141] libmachine: (ha-377576-m02) Calling .GetState
	I0327 23:58:04.937805 1090556 main.go:141] libmachine: (ha-377576-m02) Calling .Stop
	I0327 23:58:04.941901 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 0/120
	I0327 23:58:05.943424 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 1/120
	I0327 23:58:06.945355 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 2/120
	I0327 23:58:07.947016 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 3/120
	I0327 23:58:08.948814 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 4/120
	I0327 23:58:09.950599 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 5/120
	I0327 23:58:10.952819 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 6/120
	I0327 23:58:11.954127 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 7/120
	I0327 23:58:12.955493 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 8/120
	I0327 23:58:13.957246 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 9/120
	I0327 23:58:14.959517 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 10/120
	I0327 23:58:15.961002 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 11/120
	I0327 23:58:16.963340 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 12/120
	I0327 23:58:17.964864 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 13/120
	I0327 23:58:18.967332 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 14/120
	I0327 23:58:19.969289 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 15/120
	I0327 23:58:20.971589 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 16/120
	I0327 23:58:21.973264 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 17/120
	I0327 23:58:22.974635 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 18/120
	I0327 23:58:23.976719 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 19/120
	I0327 23:58:24.979187 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 20/120
	I0327 23:58:25.981423 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 21/120
	I0327 23:58:26.982888 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 22/120
	I0327 23:58:27.985221 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 23/120
	I0327 23:58:28.986810 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 24/120
	I0327 23:58:29.989025 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 25/120
	I0327 23:58:30.990592 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 26/120
	I0327 23:58:31.992392 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 27/120
	I0327 23:58:32.994296 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 28/120
	I0327 23:58:33.995914 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 29/120
	I0327 23:58:34.998043 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 30/120
	I0327 23:58:35.999711 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 31/120
	I0327 23:58:37.001100 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 32/120
	I0327 23:58:38.002692 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 33/120
	I0327 23:58:39.004145 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 34/120
	I0327 23:58:40.006104 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 35/120
	I0327 23:58:41.007500 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 36/120
	I0327 23:58:42.009717 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 37/120
	I0327 23:58:43.011147 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 38/120
	I0327 23:58:44.012688 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 39/120
	I0327 23:58:45.014763 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 40/120
	I0327 23:58:46.017248 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 41/120
	I0327 23:58:47.018579 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 42/120
	I0327 23:58:48.020094 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 43/120
	I0327 23:58:49.021502 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 44/120
	I0327 23:58:50.023893 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 45/120
	I0327 23:58:51.025254 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 46/120
	I0327 23:58:52.026879 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 47/120
	I0327 23:58:53.028819 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 48/120
	I0327 23:58:54.030197 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 49/120
	I0327 23:58:55.032584 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 50/120
	I0327 23:58:56.033984 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 51/120
	I0327 23:58:57.035256 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 52/120
	I0327 23:58:58.036620 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 53/120
	I0327 23:58:59.038290 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 54/120
	I0327 23:59:00.039747 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 55/120
	I0327 23:59:01.041317 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 56/120
	I0327 23:59:02.043012 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 57/120
	I0327 23:59:03.044714 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 58/120
	I0327 23:59:04.046093 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 59/120
	I0327 23:59:05.048156 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 60/120
	I0327 23:59:06.049596 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 61/120
	I0327 23:59:07.051067 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 62/120
	I0327 23:59:08.052801 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 63/120
	I0327 23:59:09.054359 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 64/120
	I0327 23:59:10.056486 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 65/120
	I0327 23:59:11.057842 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 66/120
	I0327 23:59:12.059564 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 67/120
	I0327 23:59:13.061416 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 68/120
	I0327 23:59:14.062989 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 69/120
	I0327 23:59:15.065396 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 70/120
	I0327 23:59:16.067346 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 71/120
	I0327 23:59:17.068730 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 72/120
	I0327 23:59:18.070697 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 73/120
	I0327 23:59:19.072943 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 74/120
	I0327 23:59:20.074949 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 75/120
	I0327 23:59:21.076839 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 76/120
	I0327 23:59:22.078484 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 77/120
	I0327 23:59:23.080881 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 78/120
	I0327 23:59:24.082126 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 79/120
	I0327 23:59:25.084028 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 80/120
	I0327 23:59:26.085541 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 81/120
	I0327 23:59:27.087080 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 82/120
	I0327 23:59:28.088514 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 83/120
	I0327 23:59:29.090072 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 84/120
	I0327 23:59:30.092084 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 85/120
	I0327 23:59:31.093471 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 86/120
	I0327 23:59:32.095462 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 87/120
	I0327 23:59:33.097017 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 88/120
	I0327 23:59:34.098309 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 89/120
	I0327 23:59:35.100678 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 90/120
	I0327 23:59:36.102983 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 91/120
	I0327 23:59:37.105043 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 92/120
	I0327 23:59:38.106883 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 93/120
	I0327 23:59:39.108316 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 94/120
	I0327 23:59:40.110197 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 95/120
	I0327 23:59:41.111604 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 96/120
	I0327 23:59:42.113440 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 97/120
	I0327 23:59:43.114818 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 98/120
	I0327 23:59:44.116879 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 99/120
	I0327 23:59:45.119084 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 100/120
	I0327 23:59:46.120804 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 101/120
	I0327 23:59:47.122294 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 102/120
	I0327 23:59:48.123853 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 103/120
	I0327 23:59:49.125364 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 104/120
	I0327 23:59:50.127504 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 105/120
	I0327 23:59:51.129094 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 106/120
	I0327 23:59:52.130642 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 107/120
	I0327 23:59:53.132049 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 108/120
	I0327 23:59:54.133951 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 109/120
	I0327 23:59:55.136008 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 110/120
	I0327 23:59:56.137480 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 111/120
	I0327 23:59:57.138897 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 112/120
	I0327 23:59:58.141146 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 113/120
	I0327 23:59:59.142833 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 114/120
	I0328 00:00:00.145296 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 115/120
	I0328 00:00:01.146778 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 116/120
	I0328 00:00:02.149366 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 117/120
	I0328 00:00:03.151342 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 118/120
	I0328 00:00:04.152878 1090556 main.go:141] libmachine: (ha-377576-m02) Waiting for machine to stop 119/120
	I0328 00:00:05.153569 1090556 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0328 00:00:05.153738 1090556 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-377576 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-377576 status -v=7 --alsologtostderr: exit status 3 (19.282445484s)

                                                
                                                
-- stdout --
	ha-377576
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-377576-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-377576-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-377576-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 00:00:05.219673 1090863 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:00:05.219838 1090863 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:00:05.219849 1090863 out.go:304] Setting ErrFile to fd 2...
	I0328 00:00:05.219853 1090863 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:00:05.220083 1090863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 00:00:05.220287 1090863 out.go:298] Setting JSON to false
	I0328 00:00:05.220317 1090863 mustload.go:65] Loading cluster: ha-377576
	I0328 00:00:05.220448 1090863 notify.go:220] Checking for updates...
	I0328 00:00:05.220805 1090863 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:00:05.220824 1090863 status.go:255] checking status of ha-377576 ...
	I0328 00:00:05.221278 1090863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:05.221347 1090863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:05.238976 1090863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44367
	I0328 00:00:05.239515 1090863 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:05.240297 1090863 main.go:141] libmachine: Using API Version  1
	I0328 00:00:05.240328 1090863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:05.240789 1090863 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:05.241006 1090863 main.go:141] libmachine: (ha-377576) Calling .GetState
	I0328 00:00:05.242986 1090863 status.go:330] ha-377576 host status = "Running" (err=<nil>)
	I0328 00:00:05.243008 1090863 host.go:66] Checking if "ha-377576" exists ...
	I0328 00:00:05.243341 1090863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:05.243409 1090863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:05.259766 1090863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44805
	I0328 00:00:05.260276 1090863 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:05.260720 1090863 main.go:141] libmachine: Using API Version  1
	I0328 00:00:05.260751 1090863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:05.261073 1090863 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:05.261337 1090863 main.go:141] libmachine: (ha-377576) Calling .GetIP
	I0328 00:00:05.264571 1090863 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:00:05.265034 1090863 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:00:05.265068 1090863 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:00:05.265216 1090863 host.go:66] Checking if "ha-377576" exists ...
	I0328 00:00:05.265557 1090863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:05.265610 1090863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:05.281109 1090863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36339
	I0328 00:00:05.281670 1090863 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:05.282146 1090863 main.go:141] libmachine: Using API Version  1
	I0328 00:00:05.282168 1090863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:05.282518 1090863 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:05.282697 1090863 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0328 00:00:05.282905 1090863 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:00:05.282943 1090863 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:00:05.285903 1090863 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:00:05.286386 1090863 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:00:05.286425 1090863 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:00:05.286656 1090863 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0328 00:00:05.286855 1090863 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:00:05.286990 1090863 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0328 00:00:05.287131 1090863 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0328 00:00:05.380118 1090863 ssh_runner.go:195] Run: systemctl --version
	I0328 00:00:05.388070 1090863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:00:05.407945 1090863 kubeconfig.go:125] found "ha-377576" server: "https://192.168.39.254:8443"
	I0328 00:00:05.407978 1090863 api_server.go:166] Checking apiserver status ...
	I0328 00:00:05.408017 1090863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:00:05.424809 1090863 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup
	W0328 00:00:05.435299 1090863 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0328 00:00:05.435354 1090863 ssh_runner.go:195] Run: ls
	I0328 00:00:05.440548 1090863 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0328 00:00:05.446046 1090863 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0328 00:00:05.446080 1090863 status.go:422] ha-377576 apiserver status = Running (err=<nil>)
	I0328 00:00:05.446091 1090863 status.go:257] ha-377576 status: &{Name:ha-377576 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:00:05.446114 1090863 status.go:255] checking status of ha-377576-m02 ...
	I0328 00:00:05.446549 1090863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:05.446592 1090863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:05.463272 1090863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35565
	I0328 00:00:05.463809 1090863 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:05.464456 1090863 main.go:141] libmachine: Using API Version  1
	I0328 00:00:05.464485 1090863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:05.464913 1090863 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:05.465217 1090863 main.go:141] libmachine: (ha-377576-m02) Calling .GetState
	I0328 00:00:05.467230 1090863 status.go:330] ha-377576-m02 host status = "Running" (err=<nil>)
	I0328 00:00:05.467257 1090863 host.go:66] Checking if "ha-377576-m02" exists ...
	I0328 00:00:05.470665 1090863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:05.470714 1090863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:05.486815 1090863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38921
	I0328 00:00:05.487367 1090863 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:05.487857 1090863 main.go:141] libmachine: Using API Version  1
	I0328 00:00:05.487879 1090863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:05.488224 1090863 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:05.488418 1090863 main.go:141] libmachine: (ha-377576-m02) Calling .GetIP
	I0328 00:00:05.491539 1090863 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:00:05.492001 1090863 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0328 00:00:05.492032 1090863 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:00:05.492225 1090863 host.go:66] Checking if "ha-377576-m02" exists ...
	I0328 00:00:05.492643 1090863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:05.492712 1090863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:05.508507 1090863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41327
	I0328 00:00:05.509040 1090863 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:05.509604 1090863 main.go:141] libmachine: Using API Version  1
	I0328 00:00:05.509628 1090863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:05.509960 1090863 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:05.510155 1090863 main.go:141] libmachine: (ha-377576-m02) Calling .DriverName
	I0328 00:00:05.510362 1090863 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:00:05.510390 1090863 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	I0328 00:00:05.513280 1090863 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:00:05.513769 1090863 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0328 00:00:05.513800 1090863 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:00:05.513976 1090863 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHPort
	I0328 00:00:05.514164 1090863 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0328 00:00:05.514364 1090863 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHUsername
	I0328 00:00:05.514522 1090863 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/id_rsa Username:docker}
	W0328 00:00:24.042469 1090863 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.117:22: connect: no route to host
	W0328 00:00:24.042585 1090863 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.117:22: connect: no route to host
	E0328 00:00:24.042602 1090863 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.117:22: connect: no route to host
	I0328 00:00:24.042618 1090863 status.go:257] ha-377576-m02 status: &{Name:ha-377576-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0328 00:00:24.042648 1090863 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.117:22: connect: no route to host
	I0328 00:00:24.042656 1090863 status.go:255] checking status of ha-377576-m03 ...
	I0328 00:00:24.043114 1090863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:24.043157 1090863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:24.058721 1090863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45597
	I0328 00:00:24.059230 1090863 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:24.059756 1090863 main.go:141] libmachine: Using API Version  1
	I0328 00:00:24.059783 1090863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:24.060120 1090863 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:24.060357 1090863 main.go:141] libmachine: (ha-377576-m03) Calling .GetState
	I0328 00:00:24.062204 1090863 status.go:330] ha-377576-m03 host status = "Running" (err=<nil>)
	I0328 00:00:24.062243 1090863 host.go:66] Checking if "ha-377576-m03" exists ...
	I0328 00:00:24.062545 1090863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:24.062590 1090863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:24.078396 1090863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33551
	I0328 00:00:24.078866 1090863 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:24.079359 1090863 main.go:141] libmachine: Using API Version  1
	I0328 00:00:24.079386 1090863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:24.079741 1090863 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:24.079968 1090863 main.go:141] libmachine: (ha-377576-m03) Calling .GetIP
	I0328 00:00:24.082999 1090863 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:00:24.083466 1090863 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0328 00:00:24.083494 1090863 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:00:24.083611 1090863 host.go:66] Checking if "ha-377576-m03" exists ...
	I0328 00:00:24.083920 1090863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:24.083959 1090863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:24.100065 1090863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33465
	I0328 00:00:24.100542 1090863 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:24.101029 1090863 main.go:141] libmachine: Using API Version  1
	I0328 00:00:24.101053 1090863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:24.101399 1090863 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:24.101608 1090863 main.go:141] libmachine: (ha-377576-m03) Calling .DriverName
	I0328 00:00:24.101802 1090863 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:00:24.101827 1090863 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	I0328 00:00:24.104514 1090863 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:00:24.104948 1090863 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0328 00:00:24.104968 1090863 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:00:24.105125 1090863 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHPort
	I0328 00:00:24.105307 1090863 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0328 00:00:24.105505 1090863 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHUsername
	I0328 00:00:24.105625 1090863 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/id_rsa Username:docker}
	I0328 00:00:24.206721 1090863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:00:24.228183 1090863 kubeconfig.go:125] found "ha-377576" server: "https://192.168.39.254:8443"
	I0328 00:00:24.228222 1090863 api_server.go:166] Checking apiserver status ...
	I0328 00:00:24.228271 1090863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:00:24.246203 1090863 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup
	W0328 00:00:24.257540 1090863 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0328 00:00:24.257600 1090863 ssh_runner.go:195] Run: ls
	I0328 00:00:24.263134 1090863 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0328 00:00:24.268024 1090863 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0328 00:00:24.268051 1090863 status.go:422] ha-377576-m03 apiserver status = Running (err=<nil>)
	I0328 00:00:24.268060 1090863 status.go:257] ha-377576-m03 status: &{Name:ha-377576-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:00:24.268078 1090863 status.go:255] checking status of ha-377576-m04 ...
	I0328 00:00:24.268412 1090863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:24.268453 1090863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:24.284106 1090863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37637
	I0328 00:00:24.284673 1090863 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:24.285230 1090863 main.go:141] libmachine: Using API Version  1
	I0328 00:00:24.285255 1090863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:24.285684 1090863 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:24.285911 1090863 main.go:141] libmachine: (ha-377576-m04) Calling .GetState
	I0328 00:00:24.287743 1090863 status.go:330] ha-377576-m04 host status = "Running" (err=<nil>)
	I0328 00:00:24.287759 1090863 host.go:66] Checking if "ha-377576-m04" exists ...
	I0328 00:00:24.288022 1090863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:24.288044 1090863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:24.302797 1090863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34219
	I0328 00:00:24.303236 1090863 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:24.303717 1090863 main.go:141] libmachine: Using API Version  1
	I0328 00:00:24.303744 1090863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:24.304131 1090863 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:24.304367 1090863 main.go:141] libmachine: (ha-377576-m04) Calling .GetIP
	I0328 00:00:24.307279 1090863 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:00:24.307738 1090863 main.go:141] libmachine: (ha-377576-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:c2:81", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:57:19 +0000 UTC Type:0 Mac:52:54:00:6a:c2:81 Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:ha-377576-m04 Clientid:01:52:54:00:6a:c2:81}
	I0328 00:00:24.307770 1090863 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined IP address 192.168.39.93 and MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:00:24.307955 1090863 host.go:66] Checking if "ha-377576-m04" exists ...
	I0328 00:00:24.308249 1090863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:24.308274 1090863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:24.323520 1090863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38621
	I0328 00:00:24.323941 1090863 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:24.324431 1090863 main.go:141] libmachine: Using API Version  1
	I0328 00:00:24.324459 1090863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:24.324769 1090863 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:24.324982 1090863 main.go:141] libmachine: (ha-377576-m04) Calling .DriverName
	I0328 00:00:24.325182 1090863 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:00:24.325209 1090863 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHHostname
	I0328 00:00:24.328041 1090863 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:00:24.328429 1090863 main.go:141] libmachine: (ha-377576-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:c2:81", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:57:19 +0000 UTC Type:0 Mac:52:54:00:6a:c2:81 Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:ha-377576-m04 Clientid:01:52:54:00:6a:c2:81}
	I0328 00:00:24.328461 1090863 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined IP address 192.168.39.93 and MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:00:24.328555 1090863 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHPort
	I0328 00:00:24.328776 1090863 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHKeyPath
	I0328 00:00:24.328945 1090863 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHUsername
	I0328 00:00:24.329136 1090863 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m04/id_rsa Username:docker}
	I0328 00:00:24.416191 1090863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:00:24.434543 1090863 status.go:257] ha-377576-m04 status: &{Name:ha-377576-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-377576 status -v=7 --alsologtostderr" : exit status 3
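Note: this status failure looks like the downstream effect of the stop timeout above: ha-377576-m02 is still "Running" from libvirt's point of view, yet SSH dials to 192.168.39.117:22 fail with "no route to host", so status reports host: Error for that node. As a sketch (flags as used in this run; `node list` assumed available in this minikube build), the nodes can be inspected individually before the post-mortem:

	out/minikube-linux-amd64 -p ha-377576 node list
	out/minikube-linux-amd64 -p ha-377576 status -v=7 --alsologtostderr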
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-377576 -n ha-377576
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-377576 logs -n 25: (1.607433992s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| cp      | ha-377576 cp ha-377576-m03:/home/docker/cp-test.txt                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:57 UTC | 27 Mar 24 23:57 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3418864072/001/cp-test_ha-377576-m03.txt |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:57 UTC | 27 Mar 24 23:57 UTC |
	|         | ha-377576-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-377576 cp ha-377576-m03:/home/docker/cp-test.txt                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:57 UTC | 27 Mar 24 23:57 UTC |
	|         | ha-377576:/home/docker/cp-test_ha-377576-m03_ha-377576.txt                       |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:57 UTC | 27 Mar 24 23:57 UTC |
	|         | ha-377576-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n ha-377576 sudo cat                                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:57 UTC | 27 Mar 24 23:57 UTC |
	|         | /home/docker/cp-test_ha-377576-m03_ha-377576.txt                                 |           |         |                |                     |                     |
	| cp      | ha-377576 cp ha-377576-m03:/home/docker/cp-test.txt                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:57 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m02:/home/docker/cp-test_ha-377576-m03_ha-377576-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n ha-377576-m02 sudo cat                                          | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | /home/docker/cp-test_ha-377576-m03_ha-377576-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-377576 cp ha-377576-m03:/home/docker/cp-test.txt                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m04:/home/docker/cp-test_ha-377576-m03_ha-377576-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n ha-377576-m04 sudo cat                                          | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | /home/docker/cp-test_ha-377576-m03_ha-377576-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-377576 cp testdata/cp-test.txt                                                | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-377576 cp ha-377576-m04:/home/docker/cp-test.txt                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3418864072/001/cp-test_ha-377576-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-377576 cp ha-377576-m04:/home/docker/cp-test.txt                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576:/home/docker/cp-test_ha-377576-m04_ha-377576.txt                       |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n ha-377576 sudo cat                                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | /home/docker/cp-test_ha-377576-m04_ha-377576.txt                                 |           |         |                |                     |                     |
	| cp      | ha-377576 cp ha-377576-m04:/home/docker/cp-test.txt                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m02:/home/docker/cp-test_ha-377576-m04_ha-377576-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n ha-377576-m02 sudo cat                                          | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | /home/docker/cp-test_ha-377576-m04_ha-377576-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-377576 cp ha-377576-m04:/home/docker/cp-test.txt                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m03:/home/docker/cp-test_ha-377576-m04_ha-377576-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n ha-377576-m03 sudo cat                                          | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | /home/docker/cp-test_ha-377576-m04_ha-377576-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-377576 node stop m02 -v=7                                                     | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 23:52:16
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 23:52:16.059043 1086621 out.go:291] Setting OutFile to fd 1 ...
	I0327 23:52:16.059498 1086621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:52:16.059517 1086621 out.go:304] Setting ErrFile to fd 2...
	I0327 23:52:16.059525 1086621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:52:16.059960 1086621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0327 23:52:16.061060 1086621 out.go:298] Setting JSON to false
	I0327 23:52:16.062149 1086621 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":27233,"bootTime":1711556303,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0327 23:52:16.062248 1086621 start.go:139] virtualization: kvm guest
	I0327 23:52:16.064258 1086621 out.go:177] * [ha-377576] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0327 23:52:16.066095 1086621 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 23:52:16.066097 1086621 notify.go:220] Checking for updates...
	I0327 23:52:16.067989 1086621 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 23:52:16.069658 1086621 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0327 23:52:16.071176 1086621 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	I0327 23:52:16.072627 1086621 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0327 23:52:16.073910 1086621 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 23:52:16.075399 1086621 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 23:52:16.111607 1086621 out.go:177] * Using the kvm2 driver based on user configuration
	I0327 23:52:16.112947 1086621 start.go:297] selected driver: kvm2
	I0327 23:52:16.112961 1086621 start.go:901] validating driver "kvm2" against <nil>
	I0327 23:52:16.112972 1086621 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 23:52:16.113693 1086621 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 23:52:16.113798 1086621 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18485-1069254/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0327 23:52:16.129010 1086621 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0327 23:52:16.129081 1086621 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 23:52:16.129301 1086621 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 23:52:16.129366 1086621 cni.go:84] Creating CNI manager for ""
	I0327 23:52:16.129378 1086621 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0327 23:52:16.129383 1086621 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0327 23:52:16.129440 1086621 start.go:340] cluster config:
	{Name:ha-377576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-377576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 23:52:16.129529 1086621 iso.go:125] acquiring lock: {Name:mk3da1fa7d63a581c817f327d86a827697458cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 23:52:16.131398 1086621 out.go:177] * Starting "ha-377576" primary control-plane node in "ha-377576" cluster
	I0327 23:52:16.132750 1086621 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0327 23:52:16.132793 1086621 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0327 23:52:16.132805 1086621 cache.go:56] Caching tarball of preloaded images
	I0327 23:52:16.132941 1086621 preload.go:173] Found /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0327 23:52:16.132957 1086621 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0327 23:52:16.133307 1086621 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/config.json ...
	I0327 23:52:16.133329 1086621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/config.json: {Name:mk05ad12aac82a6fb79fe39e932ee9fe3ad41cd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:52:16.133477 1086621 start.go:360] acquireMachinesLock for ha-377576: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 23:52:16.133512 1086621 start.go:364] duration metric: took 18.15µs to acquireMachinesLock for "ha-377576"
	I0327 23:52:16.133535 1086621 start.go:93] Provisioning new machine with config: &{Name:ha-377576 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-377576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0327 23:52:16.133617 1086621 start.go:125] createHost starting for "" (driver="kvm2")
	I0327 23:52:16.135178 1086621 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 23:52:16.135316 1086621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:52:16.135357 1086621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:52:16.150129 1086621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37299
	I0327 23:52:16.150640 1086621 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:52:16.151183 1086621 main.go:141] libmachine: Using API Version  1
	I0327 23:52:16.151205 1086621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:52:16.151734 1086621 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:52:16.151993 1086621 main.go:141] libmachine: (ha-377576) Calling .GetMachineName
	I0327 23:52:16.152206 1086621 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0327 23:52:16.152423 1086621 start.go:159] libmachine.API.Create for "ha-377576" (driver="kvm2")
	I0327 23:52:16.152459 1086621 client.go:168] LocalClient.Create starting
	I0327 23:52:16.152502 1086621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem
	I0327 23:52:16.152550 1086621 main.go:141] libmachine: Decoding PEM data...
	I0327 23:52:16.152573 1086621 main.go:141] libmachine: Parsing certificate...
	I0327 23:52:16.152642 1086621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem
	I0327 23:52:16.152670 1086621 main.go:141] libmachine: Decoding PEM data...
	I0327 23:52:16.152694 1086621 main.go:141] libmachine: Parsing certificate...
	I0327 23:52:16.152724 1086621 main.go:141] libmachine: Running pre-create checks...
	I0327 23:52:16.152735 1086621 main.go:141] libmachine: (ha-377576) Calling .PreCreateCheck
	I0327 23:52:16.153138 1086621 main.go:141] libmachine: (ha-377576) Calling .GetConfigRaw
	I0327 23:52:16.153557 1086621 main.go:141] libmachine: Creating machine...
	I0327 23:52:16.153575 1086621 main.go:141] libmachine: (ha-377576) Calling .Create
	I0327 23:52:16.153737 1086621 main.go:141] libmachine: (ha-377576) Creating KVM machine...
	I0327 23:52:16.155112 1086621 main.go:141] libmachine: (ha-377576) DBG | found existing default KVM network
	I0327 23:52:16.155959 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:16.155814 1086655 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015c30}
	I0327 23:52:16.156017 1086621 main.go:141] libmachine: (ha-377576) DBG | created network xml: 
	I0327 23:52:16.156038 1086621 main.go:141] libmachine: (ha-377576) DBG | <network>
	I0327 23:52:16.156051 1086621 main.go:141] libmachine: (ha-377576) DBG |   <name>mk-ha-377576</name>
	I0327 23:52:16.156062 1086621 main.go:141] libmachine: (ha-377576) DBG |   <dns enable='no'/>
	I0327 23:52:16.156067 1086621 main.go:141] libmachine: (ha-377576) DBG |   
	I0327 23:52:16.156078 1086621 main.go:141] libmachine: (ha-377576) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0327 23:52:16.156087 1086621 main.go:141] libmachine: (ha-377576) DBG |     <dhcp>
	I0327 23:52:16.156100 1086621 main.go:141] libmachine: (ha-377576) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0327 23:52:16.156110 1086621 main.go:141] libmachine: (ha-377576) DBG |     </dhcp>
	I0327 23:52:16.156152 1086621 main.go:141] libmachine: (ha-377576) DBG |   </ip>
	I0327 23:52:16.156193 1086621 main.go:141] libmachine: (ha-377576) DBG |   
	I0327 23:52:16.156211 1086621 main.go:141] libmachine: (ha-377576) DBG | </network>
	I0327 23:52:16.156222 1086621 main.go:141] libmachine: (ha-377576) DBG | 
	I0327 23:52:16.161472 1086621 main.go:141] libmachine: (ha-377576) DBG | trying to create private KVM network mk-ha-377576 192.168.39.0/24...
	I0327 23:52:16.238648 1086621 main.go:141] libmachine: (ha-377576) DBG | private KVM network mk-ha-377576 192.168.39.0/24 created
	I0327 23:52:16.238692 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:16.238584 1086655 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18485-1069254/.minikube
	I0327 23:52:16.238709 1086621 main.go:141] libmachine: (ha-377576) Setting up store path in /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576 ...
	I0327 23:52:16.238800 1086621 main.go:141] libmachine: (ha-377576) Building disk image from file:///home/jenkins/minikube-integration/18485-1069254/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso
	I0327 23:52:16.238849 1086621 main.go:141] libmachine: (ha-377576) Downloading /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18485-1069254/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0327 23:52:16.504597 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:16.504449 1086655 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa...
	I0327 23:52:16.699561 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:16.699384 1086655 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/ha-377576.rawdisk...
	I0327 23:52:16.699604 1086621 main.go:141] libmachine: (ha-377576) DBG | Writing magic tar header
	I0327 23:52:16.699619 1086621 main.go:141] libmachine: (ha-377576) DBG | Writing SSH key tar header
	I0327 23:52:16.699632 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:16.699527 1086655 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576 ...
	I0327 23:52:16.699646 1086621 main.go:141] libmachine: (ha-377576) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576
	I0327 23:52:16.699714 1086621 main.go:141] libmachine: (ha-377576) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines
	I0327 23:52:16.699754 1086621 main.go:141] libmachine: (ha-377576) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576 (perms=drwx------)
	I0327 23:52:16.699769 1086621 main.go:141] libmachine: (ha-377576) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254/.minikube
	I0327 23:52:16.699788 1086621 main.go:141] libmachine: (ha-377576) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254
	I0327 23:52:16.699801 1086621 main.go:141] libmachine: (ha-377576) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0327 23:52:16.699819 1086621 main.go:141] libmachine: (ha-377576) DBG | Checking permissions on dir: /home/jenkins
	I0327 23:52:16.699831 1086621 main.go:141] libmachine: (ha-377576) DBG | Checking permissions on dir: /home
	I0327 23:52:16.699843 1086621 main.go:141] libmachine: (ha-377576) DBG | Skipping /home - not owner
	I0327 23:52:16.699859 1086621 main.go:141] libmachine: (ha-377576) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254/.minikube/machines (perms=drwxr-xr-x)
	I0327 23:52:16.699877 1086621 main.go:141] libmachine: (ha-377576) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254/.minikube (perms=drwxr-xr-x)
	I0327 23:52:16.699892 1086621 main.go:141] libmachine: (ha-377576) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254 (perms=drwxrwxr-x)
	I0327 23:52:16.699910 1086621 main.go:141] libmachine: (ha-377576) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0327 23:52:16.699923 1086621 main.go:141] libmachine: (ha-377576) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0327 23:52:16.699939 1086621 main.go:141] libmachine: (ha-377576) Creating domain...
	I0327 23:52:16.700928 1086621 main.go:141] libmachine: (ha-377576) define libvirt domain using xml: 
	I0327 23:52:16.700949 1086621 main.go:141] libmachine: (ha-377576) <domain type='kvm'>
	I0327 23:52:16.700956 1086621 main.go:141] libmachine: (ha-377576)   <name>ha-377576</name>
	I0327 23:52:16.700960 1086621 main.go:141] libmachine: (ha-377576)   <memory unit='MiB'>2200</memory>
	I0327 23:52:16.700969 1086621 main.go:141] libmachine: (ha-377576)   <vcpu>2</vcpu>
	I0327 23:52:16.700973 1086621 main.go:141] libmachine: (ha-377576)   <features>
	I0327 23:52:16.700978 1086621 main.go:141] libmachine: (ha-377576)     <acpi/>
	I0327 23:52:16.700982 1086621 main.go:141] libmachine: (ha-377576)     <apic/>
	I0327 23:52:16.700987 1086621 main.go:141] libmachine: (ha-377576)     <pae/>
	I0327 23:52:16.700997 1086621 main.go:141] libmachine: (ha-377576)     
	I0327 23:52:16.701004 1086621 main.go:141] libmachine: (ha-377576)   </features>
	I0327 23:52:16.701012 1086621 main.go:141] libmachine: (ha-377576)   <cpu mode='host-passthrough'>
	I0327 23:52:16.701075 1086621 main.go:141] libmachine: (ha-377576)   
	I0327 23:52:16.701101 1086621 main.go:141] libmachine: (ha-377576)   </cpu>
	I0327 23:52:16.701111 1086621 main.go:141] libmachine: (ha-377576)   <os>
	I0327 23:52:16.701125 1086621 main.go:141] libmachine: (ha-377576)     <type>hvm</type>
	I0327 23:52:16.701192 1086621 main.go:141] libmachine: (ha-377576)     <boot dev='cdrom'/>
	I0327 23:52:16.701225 1086621 main.go:141] libmachine: (ha-377576)     <boot dev='hd'/>
	I0327 23:52:16.701233 1086621 main.go:141] libmachine: (ha-377576)     <bootmenu enable='no'/>
	I0327 23:52:16.701242 1086621 main.go:141] libmachine: (ha-377576)   </os>
	I0327 23:52:16.701254 1086621 main.go:141] libmachine: (ha-377576)   <devices>
	I0327 23:52:16.701269 1086621 main.go:141] libmachine: (ha-377576)     <disk type='file' device='cdrom'>
	I0327 23:52:16.701285 1086621 main.go:141] libmachine: (ha-377576)       <source file='/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/boot2docker.iso'/>
	I0327 23:52:16.701302 1086621 main.go:141] libmachine: (ha-377576)       <target dev='hdc' bus='scsi'/>
	I0327 23:52:16.701313 1086621 main.go:141] libmachine: (ha-377576)       <readonly/>
	I0327 23:52:16.701322 1086621 main.go:141] libmachine: (ha-377576)     </disk>
	I0327 23:52:16.701329 1086621 main.go:141] libmachine: (ha-377576)     <disk type='file' device='disk'>
	I0327 23:52:16.701341 1086621 main.go:141] libmachine: (ha-377576)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0327 23:52:16.701359 1086621 main.go:141] libmachine: (ha-377576)       <source file='/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/ha-377576.rawdisk'/>
	I0327 23:52:16.701374 1086621 main.go:141] libmachine: (ha-377576)       <target dev='hda' bus='virtio'/>
	I0327 23:52:16.701385 1086621 main.go:141] libmachine: (ha-377576)     </disk>
	I0327 23:52:16.701395 1086621 main.go:141] libmachine: (ha-377576)     <interface type='network'>
	I0327 23:52:16.701406 1086621 main.go:141] libmachine: (ha-377576)       <source network='mk-ha-377576'/>
	I0327 23:52:16.701414 1086621 main.go:141] libmachine: (ha-377576)       <model type='virtio'/>
	I0327 23:52:16.701425 1086621 main.go:141] libmachine: (ha-377576)     </interface>
	I0327 23:52:16.701441 1086621 main.go:141] libmachine: (ha-377576)     <interface type='network'>
	I0327 23:52:16.701453 1086621 main.go:141] libmachine: (ha-377576)       <source network='default'/>
	I0327 23:52:16.701463 1086621 main.go:141] libmachine: (ha-377576)       <model type='virtio'/>
	I0327 23:52:16.701474 1086621 main.go:141] libmachine: (ha-377576)     </interface>
	I0327 23:52:16.701483 1086621 main.go:141] libmachine: (ha-377576)     <serial type='pty'>
	I0327 23:52:16.701494 1086621 main.go:141] libmachine: (ha-377576)       <target port='0'/>
	I0327 23:52:16.701505 1086621 main.go:141] libmachine: (ha-377576)     </serial>
	I0327 23:52:16.701517 1086621 main.go:141] libmachine: (ha-377576)     <console type='pty'>
	I0327 23:52:16.701538 1086621 main.go:141] libmachine: (ha-377576)       <target type='serial' port='0'/>
	I0327 23:52:16.701561 1086621 main.go:141] libmachine: (ha-377576)     </console>
	I0327 23:52:16.701572 1086621 main.go:141] libmachine: (ha-377576)     <rng model='virtio'>
	I0327 23:52:16.701583 1086621 main.go:141] libmachine: (ha-377576)       <backend model='random'>/dev/random</backend>
	I0327 23:52:16.701592 1086621 main.go:141] libmachine: (ha-377576)     </rng>
	I0327 23:52:16.701601 1086621 main.go:141] libmachine: (ha-377576)     
	I0327 23:52:16.701611 1086621 main.go:141] libmachine: (ha-377576)     
	I0327 23:52:16.701622 1086621 main.go:141] libmachine: (ha-377576)   </devices>
	I0327 23:52:16.701631 1086621 main.go:141] libmachine: (ha-377576) </domain>
	I0327 23:52:16.701642 1086621 main.go:141] libmachine: (ha-377576) 
	I0327 23:52:16.706024 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:1a:7a:20 in network default
	I0327 23:52:16.706672 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:16.706711 1086621 main.go:141] libmachine: (ha-377576) Ensuring networks are active...
	I0327 23:52:16.707503 1086621 main.go:141] libmachine: (ha-377576) Ensuring network default is active
	I0327 23:52:16.707806 1086621 main.go:141] libmachine: (ha-377576) Ensuring network mk-ha-377576 is active
	I0327 23:52:16.708327 1086621 main.go:141] libmachine: (ha-377576) Getting domain xml...
	I0327 23:52:16.709023 1086621 main.go:141] libmachine: (ha-377576) Creating domain...
	I0327 23:52:17.895451 1086621 main.go:141] libmachine: (ha-377576) Waiting to get IP...
	I0327 23:52:17.896440 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:17.896888 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find current IP address of domain ha-377576 in network mk-ha-377576
	I0327 23:52:17.896916 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:17.896869 1086655 retry.go:31] will retry after 204.228349ms: waiting for machine to come up
	I0327 23:52:18.102278 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:18.102719 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find current IP address of domain ha-377576 in network mk-ha-377576
	I0327 23:52:18.102752 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:18.102661 1086655 retry.go:31] will retry after 294.764841ms: waiting for machine to come up
	I0327 23:52:18.399271 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:18.399693 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find current IP address of domain ha-377576 in network mk-ha-377576
	I0327 23:52:18.399727 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:18.399641 1086655 retry.go:31] will retry after 420.882267ms: waiting for machine to come up
	I0327 23:52:18.822360 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:18.822782 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find current IP address of domain ha-377576 in network mk-ha-377576
	I0327 23:52:18.822804 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:18.822744 1086655 retry.go:31] will retry after 440.762004ms: waiting for machine to come up
	I0327 23:52:19.265653 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:19.266113 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find current IP address of domain ha-377576 in network mk-ha-377576
	I0327 23:52:19.266154 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:19.266075 1086655 retry.go:31] will retry after 681.995366ms: waiting for machine to come up
	I0327 23:52:19.950049 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:19.950578 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find current IP address of domain ha-377576 in network mk-ha-377576
	I0327 23:52:19.950619 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:19.950509 1086655 retry.go:31] will retry after 730.337887ms: waiting for machine to come up
	I0327 23:52:20.682331 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:20.682662 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find current IP address of domain ha-377576 in network mk-ha-377576
	I0327 23:52:20.682692 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:20.682614 1086655 retry.go:31] will retry after 1.140943407s: waiting for machine to come up
	I0327 23:52:21.825498 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:21.825993 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find current IP address of domain ha-377576 in network mk-ha-377576
	I0327 23:52:21.826022 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:21.825942 1086655 retry.go:31] will retry after 984.170194ms: waiting for machine to come up
	I0327 23:52:22.812114 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:22.812430 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find current IP address of domain ha-377576 in network mk-ha-377576
	I0327 23:52:22.812455 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:22.812390 1086655 retry.go:31] will retry after 1.836089758s: waiting for machine to come up
	I0327 23:52:24.651479 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:24.652063 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find current IP address of domain ha-377576 in network mk-ha-377576
	I0327 23:52:24.652127 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:24.652029 1086655 retry.go:31] will retry after 2.280967862s: waiting for machine to come up
	I0327 23:52:26.934212 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:26.934740 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find current IP address of domain ha-377576 in network mk-ha-377576
	I0327 23:52:26.934771 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:26.934689 1086655 retry.go:31] will retry after 2.253174542s: waiting for machine to come up
	I0327 23:52:29.191272 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:29.191722 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find current IP address of domain ha-377576 in network mk-ha-377576
	I0327 23:52:29.191748 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:29.191680 1086655 retry.go:31] will retry after 2.19894248s: waiting for machine to come up
	I0327 23:52:31.392676 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:31.393122 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find current IP address of domain ha-377576 in network mk-ha-377576
	I0327 23:52:31.393146 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:31.393070 1086655 retry.go:31] will retry after 4.465104492s: waiting for machine to come up
	I0327 23:52:35.863650 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:35.864081 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find current IP address of domain ha-377576 in network mk-ha-377576
	I0327 23:52:35.864105 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:35.864025 1086655 retry.go:31] will retry after 3.929483337s: waiting for machine to come up
	I0327 23:52:39.798335 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:39.798873 1086621 main.go:141] libmachine: (ha-377576) Found IP for machine: 192.168.39.47
	I0327 23:52:39.798899 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has current primary IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:39.798908 1086621 main.go:141] libmachine: (ha-377576) Reserving static IP address...
	I0327 23:52:39.799237 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find host DHCP lease matching {name: "ha-377576", mac: "52:54:00:9c:48:13", ip: "192.168.39.47"} in network mk-ha-377576
	I0327 23:52:39.880786 1086621 main.go:141] libmachine: (ha-377576) DBG | Getting to WaitForSSH function...
	I0327 23:52:39.880824 1086621 main.go:141] libmachine: (ha-377576) Reserved static IP address: 192.168.39.47
	I0327 23:52:39.880837 1086621 main.go:141] libmachine: (ha-377576) Waiting for SSH to be available...
	I0327 23:52:39.883827 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:39.884204 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576
	I0327 23:52:39.884227 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find defined IP address of network mk-ha-377576 interface with MAC address 52:54:00:9c:48:13
	I0327 23:52:39.884387 1086621 main.go:141] libmachine: (ha-377576) DBG | Using SSH client type: external
	I0327 23:52:39.884401 1086621 main.go:141] libmachine: (ha-377576) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa (-rw-------)
	I0327 23:52:39.884467 1086621 main.go:141] libmachine: (ha-377576) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0327 23:52:39.884475 1086621 main.go:141] libmachine: (ha-377576) DBG | About to run SSH command:
	I0327 23:52:39.884483 1086621 main.go:141] libmachine: (ha-377576) DBG | exit 0
	I0327 23:52:39.888335 1086621 main.go:141] libmachine: (ha-377576) DBG | SSH cmd err, output: exit status 255: 
	I0327 23:52:39.888363 1086621 main.go:141] libmachine: (ha-377576) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0327 23:52:39.888374 1086621 main.go:141] libmachine: (ha-377576) DBG | command : exit 0
	I0327 23:52:39.888381 1086621 main.go:141] libmachine: (ha-377576) DBG | err     : exit status 255
	I0327 23:52:39.888392 1086621 main.go:141] libmachine: (ha-377576) DBG | output  : 
	I0327 23:52:42.890051 1086621 main.go:141] libmachine: (ha-377576) DBG | Getting to WaitForSSH function...
	I0327 23:52:42.892810 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:42.893215 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:42.893251 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:42.893341 1086621 main.go:141] libmachine: (ha-377576) DBG | Using SSH client type: external
	I0327 23:52:42.893370 1086621 main.go:141] libmachine: (ha-377576) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa (-rw-------)
	I0327 23:52:42.893414 1086621 main.go:141] libmachine: (ha-377576) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.47 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0327 23:52:42.893428 1086621 main.go:141] libmachine: (ha-377576) DBG | About to run SSH command:
	I0327 23:52:42.893464 1086621 main.go:141] libmachine: (ha-377576) DBG | exit 0
	I0327 23:52:43.014339 1086621 main.go:141] libmachine: (ha-377576) DBG | SSH cmd err, output: <nil>: 
	I0327 23:52:43.014656 1086621 main.go:141] libmachine: (ha-377576) KVM machine creation complete!
	I0327 23:52:43.015004 1086621 main.go:141] libmachine: (ha-377576) Calling .GetConfigRaw
	I0327 23:52:43.015552 1086621 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0327 23:52:43.015792 1086621 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0327 23:52:43.015968 1086621 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0327 23:52:43.015985 1086621 main.go:141] libmachine: (ha-377576) Calling .GetState
	I0327 23:52:43.017383 1086621 main.go:141] libmachine: Detecting operating system of created instance...
	I0327 23:52:43.017400 1086621 main.go:141] libmachine: Waiting for SSH to be available...
	I0327 23:52:43.017407 1086621 main.go:141] libmachine: Getting to WaitForSSH function...
	I0327 23:52:43.017415 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:52:43.019790 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.020164 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:43.020192 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.020318 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:52:43.020505 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:43.020676 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:43.020866 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:52:43.021085 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:52:43.021341 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0327 23:52:43.021353 1086621 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0327 23:52:43.121794 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 23:52:43.121819 1086621 main.go:141] libmachine: Detecting the provisioner...
	I0327 23:52:43.121827 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:52:43.124764 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.125171 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:43.125197 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.125379 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:52:43.125589 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:43.125741 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:43.125930 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:52:43.126154 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:52:43.126359 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0327 23:52:43.126372 1086621 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0327 23:52:43.227215 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0327 23:52:43.227347 1086621 main.go:141] libmachine: found compatible host: buildroot
	I0327 23:52:43.227364 1086621 main.go:141] libmachine: Provisioning with buildroot...
	I0327 23:52:43.227375 1086621 main.go:141] libmachine: (ha-377576) Calling .GetMachineName
	I0327 23:52:43.227698 1086621 buildroot.go:166] provisioning hostname "ha-377576"
	I0327 23:52:43.227731 1086621 main.go:141] libmachine: (ha-377576) Calling .GetMachineName
	I0327 23:52:43.227928 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:52:43.230515 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.230854 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:43.230874 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.231023 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:52:43.231255 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:43.231436 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:43.231597 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:52:43.231810 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:52:43.232010 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0327 23:52:43.232027 1086621 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-377576 && echo "ha-377576" | sudo tee /etc/hostname
	I0327 23:52:43.344231 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-377576
	
	I0327 23:52:43.344341 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:52:43.347237 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.347540 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:43.347580 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.347761 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:52:43.347958 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:43.348208 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:43.348324 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:52:43.348486 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:52:43.348682 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0327 23:52:43.348699 1086621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-377576' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-377576/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-377576' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0327 23:52:43.456557 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 23:52:43.456595 1086621 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0327 23:52:43.456649 1086621 buildroot.go:174] setting up certificates
	I0327 23:52:43.456678 1086621 provision.go:84] configureAuth start
	I0327 23:52:43.456700 1086621 main.go:141] libmachine: (ha-377576) Calling .GetMachineName
	I0327 23:52:43.457046 1086621 main.go:141] libmachine: (ha-377576) Calling .GetIP
	I0327 23:52:43.460050 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.460440 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:43.460474 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.460602 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:52:43.462984 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.463266 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:43.463298 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.463452 1086621 provision.go:143] copyHostCerts
	I0327 23:52:43.463487 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0327 23:52:43.463532 1086621 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0327 23:52:43.463541 1086621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0327 23:52:43.463610 1086621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0327 23:52:43.463694 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0327 23:52:43.463712 1086621 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0327 23:52:43.463719 1086621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0327 23:52:43.463743 1086621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0327 23:52:43.463787 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0327 23:52:43.463804 1086621 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0327 23:52:43.463810 1086621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0327 23:52:43.463829 1086621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0327 23:52:43.463880 1086621 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.ha-377576 san=[127.0.0.1 192.168.39.47 ha-377576 localhost minikube]
	I0327 23:52:43.642308 1086621 provision.go:177] copyRemoteCerts
	I0327 23:52:43.642380 1086621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0327 23:52:43.642408 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:52:43.645301 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.645576 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:43.645620 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.645826 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:52:43.646014 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:43.646166 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:52:43.646301 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0327 23:52:43.725452 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0327 23:52:43.725553 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0327 23:52:43.750634 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0327 23:52:43.750717 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0327 23:52:43.775284 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0327 23:52:43.775370 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0327 23:52:43.799029 1086621 provision.go:87] duration metric: took 342.333808ms to configureAuth
	I0327 23:52:43.799057 1086621 buildroot.go:189] setting minikube options for container-runtime
	I0327 23:52:43.799224 1086621 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 23:52:43.799312 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:52:43.802043 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.802451 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:43.802471 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.802693 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:52:43.802906 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:43.803143 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:43.803291 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:52:43.803498 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:52:43.803707 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0327 23:52:43.803732 1086621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0327 23:52:44.066756 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0327 23:52:44.066788 1086621 main.go:141] libmachine: Checking connection to Docker...
	I0327 23:52:44.066798 1086621 main.go:141] libmachine: (ha-377576) Calling .GetURL
	I0327 23:52:44.068332 1086621 main.go:141] libmachine: (ha-377576) DBG | Using libvirt version 6000000
	I0327 23:52:44.070555 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:44.070883 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:44.070914 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:44.071084 1086621 main.go:141] libmachine: Docker is up and running!
	I0327 23:52:44.071112 1086621 main.go:141] libmachine: Reticulating splines...
	I0327 23:52:44.071121 1086621 client.go:171] duration metric: took 27.91864995s to LocalClient.Create
	I0327 23:52:44.071147 1086621 start.go:167] duration metric: took 27.918726761s to libmachine.API.Create "ha-377576"
	I0327 23:52:44.071157 1086621 start.go:293] postStartSetup for "ha-377576" (driver="kvm2")
	I0327 23:52:44.071167 1086621 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0327 23:52:44.071183 1086621 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0327 23:52:44.071444 1086621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0327 23:52:44.071479 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:52:44.073535 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:44.073898 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:44.073930 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:44.074043 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:52:44.074258 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:44.074465 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:52:44.074657 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0327 23:52:44.157934 1086621 ssh_runner.go:195] Run: cat /etc/os-release
	I0327 23:52:44.162213 1086621 info.go:137] Remote host: Buildroot 2023.02.9
	I0327 23:52:44.162248 1086621 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0327 23:52:44.162319 1086621 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0327 23:52:44.162406 1086621 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0327 23:52:44.162423 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> /etc/ssl/certs/10765222.pem
	I0327 23:52:44.162539 1086621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0327 23:52:44.173260 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0327 23:52:44.198225 1086621 start.go:296] duration metric: took 127.049448ms for postStartSetup
	I0327 23:52:44.198302 1086621 main.go:141] libmachine: (ha-377576) Calling .GetConfigRaw
	I0327 23:52:44.198945 1086621 main.go:141] libmachine: (ha-377576) Calling .GetIP
	I0327 23:52:44.201358 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:44.201689 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:44.201731 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:44.201956 1086621 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/config.json ...
	I0327 23:52:44.202173 1086621 start.go:128] duration metric: took 28.06854382s to createHost
	I0327 23:52:44.202198 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:52:44.204255 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:44.204563 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:44.204585 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:44.204724 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:52:44.204943 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:44.205104 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:44.205268 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:52:44.205440 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:52:44.205607 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0327 23:52:44.205617 1086621 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0327 23:52:44.307153 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711583564.283994941
	
	I0327 23:52:44.307192 1086621 fix.go:216] guest clock: 1711583564.283994941
	I0327 23:52:44.307202 1086621 fix.go:229] Guest: 2024-03-27 23:52:44.283994941 +0000 UTC Remote: 2024-03-27 23:52:44.202188235 +0000 UTC m=+28.191661090 (delta=81.806706ms)
	I0327 23:52:44.307232 1086621 fix.go:200] guest clock delta is within tolerance: 81.806706ms
	I0327 23:52:44.307239 1086621 start.go:83] releasing machines lock for "ha-377576", held for 28.173715757s
	I0327 23:52:44.307268 1086621 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0327 23:52:44.307610 1086621 main.go:141] libmachine: (ha-377576) Calling .GetIP
	I0327 23:52:44.310114 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:44.310470 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:44.310500 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:44.310638 1086621 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0327 23:52:44.311177 1086621 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0327 23:52:44.311390 1086621 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0327 23:52:44.311497 1086621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0327 23:52:44.311548 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:52:44.311684 1086621 ssh_runner.go:195] Run: cat /version.json
	I0327 23:52:44.311711 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:52:44.313880 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:44.314113 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:44.314309 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:44.314341 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:44.314449 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:44.314483 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:44.314493 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:52:44.314654 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:52:44.314722 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:44.314835 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:44.314911 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:52:44.314982 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:52:44.315062 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0327 23:52:44.315117 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0327 23:52:44.387658 1086621 ssh_runner.go:195] Run: systemctl --version
	I0327 23:52:44.423158 1086621 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0327 23:52:44.585628 1086621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0327 23:52:44.591837 1086621 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0327 23:52:44.591900 1086621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0327 23:52:44.608131 1086621 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0327 23:52:44.608156 1086621 start.go:494] detecting cgroup driver to use...
	I0327 23:52:44.608235 1086621 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0327 23:52:44.624318 1086621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 23:52:44.639158 1086621 docker.go:217] disabling cri-docker service (if available) ...
	I0327 23:52:44.639244 1086621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0327 23:52:44.654032 1086621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0327 23:52:44.669218 1086621 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0327 23:52:44.786572 1086621 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0327 23:52:44.950797 1086621 docker.go:233] disabling docker service ...
	I0327 23:52:44.950891 1086621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0327 23:52:44.965206 1086621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0327 23:52:44.978629 1086621 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0327 23:52:45.095342 1086621 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0327 23:52:45.204691 1086621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
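The systemctl stop/disable/mask sequence above leaves CRI-O as the only container runtime on the node. A minimal sanity check, assuming shell access to the guest, is:

    sudo systemctl is-active docker crio
    # docker should report inactive (it was masked above); crio only becomes active again after the restart further below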
	I0327 23:52:45.218871 1086621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 23:52:45.238462 1086621 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0327 23:52:45.238543 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:52:45.249244 1086621 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0327 23:52:45.249332 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:52:45.259853 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:52:45.270460 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:52:45.281148 1086621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0327 23:52:45.291751 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:52:45.302581 1086621 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:52:45.320266 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
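The sed commands above edit CRI-O's drop-in at /etc/crio/crio.conf.d/02-crio.conf rather than the main crio.conf, setting the pause image, the cgroup manager, the conmon cgroup and the unprivileged-port sysctl. A quick way to confirm the result (a sketch; the exact surrounding TOML depends on the CRI-O build) is:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # expected, approximately:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",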
	I0327 23:52:45.331412 1086621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0327 23:52:45.340733 1086621 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0327 23:52:45.340797 1086621 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0327 23:52:45.353559 1086621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0327 23:52:45.363291 1086621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:52:45.481299 1086621 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0327 23:52:45.635017 1086621 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0327 23:52:45.635106 1086621 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0327 23:52:45.640269 1086621 start.go:562] Will wait 60s for crictl version
	I0327 23:52:45.640336 1086621 ssh_runner.go:195] Run: which crictl
	I0327 23:52:45.644527 1086621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0327 23:52:45.686675 1086621 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0327 23:52:45.686756 1086621 ssh_runner.go:195] Run: crio --version
	I0327 23:52:45.716611 1086621 ssh_runner.go:195] Run: crio --version
	I0327 23:52:45.747217 1086621 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0327 23:52:45.748462 1086621 main.go:141] libmachine: (ha-377576) Calling .GetIP
	I0327 23:52:45.751504 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:45.751851 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:45.751884 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:45.752114 1086621 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0327 23:52:45.756617 1086621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
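The bash one-liner above drops any stale host.minikube.internal entry and rewrites /etc/hosts via a temp file; the same pattern is reused further down for control-plane.minikube.internal. A hedged spot check of the mapping (the 192.168.39.1 address comes from the grep just above):

    grep host.minikube.internal /etc/hosts
    # 192.168.39.1    host.minikube.internal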
	I0327 23:52:45.771501 1086621 kubeadm.go:877] updating cluster {Name:ha-377576 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:ha-377576 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0327 23:52:45.771616 1086621 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0327 23:52:45.771661 1086621 ssh_runner.go:195] Run: sudo crictl images --output json
	I0327 23:52:45.808162 1086621 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0327 23:52:45.808240 1086621 ssh_runner.go:195] Run: which lz4
	I0327 23:52:45.812245 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0327 23:52:45.812350 1086621 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0327 23:52:45.816564 1086621 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0327 23:52:45.816605 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0327 23:52:47.404885 1086621 crio.go:462] duration metric: took 1.592565204s to copy over tarball
	I0327 23:52:47.404982 1086621 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0327 23:52:49.661801 1086621 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256781993s)
	I0327 23:52:49.661840 1086621 crio.go:469] duration metric: took 2.25692182s to extract the tarball
	I0327 23:52:49.661849 1086621 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0327 23:52:49.701294 1086621 ssh_runner.go:195] Run: sudo crictl images --output json
	I0327 23:52:49.745828 1086621 crio.go:514] all images are preloaded for cri-o runtime.
	I0327 23:52:49.745853 1086621 cache_images.go:84] Images are preloaded, skipping loading
	I0327 23:52:49.745862 1086621 kubeadm.go:928] updating node { 192.168.39.47 8443 v1.29.3 crio true true} ...
	I0327 23:52:49.745980 1086621 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-377576 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-377576 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
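The [Unit]/[Service]/[Install] snippet above is the kubelet drop-in that gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below, with --node-ip and --hostname-override pinned per node. A sketch for viewing the merged unit on the guest (assuming systemd, which the Buildroot image uses):

    sudo systemctl cat kubelet
    # prints /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in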
	I0327 23:52:49.746047 1086621 ssh_runner.go:195] Run: crio config
	I0327 23:52:49.795743 1086621 cni.go:84] Creating CNI manager for ""
	I0327 23:52:49.795765 1086621 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0327 23:52:49.795774 1086621 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0327 23:52:49.795796 1086621 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.47 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-377576 NodeName:ha-377576 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0327 23:52:49.795952 1086621 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.47
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-377576"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.47
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.47"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
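The kubeadm config above is rendered to /var/tmp/minikube/kubeadm.yaml.new and later copied to /var/tmp/minikube/kubeadm.yaml before init runs. As a hedged sketch (kubeadm config validate exists in recent kubeadm releases; not something this log exercises), the file could be checked up front with:

    sudo /var/lib/minikube/binaries/v1.29.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml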
	
	I0327 23:52:49.795981 1086621 kube-vip.go:111] generating kube-vip config ...
	I0327 23:52:49.796035 1086621 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0327 23:52:49.813337 1086621 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0327 23:52:49.813457 1086621 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
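The static pod above is written to /etc/kubernetes/manifests/kube-vip.yaml further down and is what serves the control-plane VIP 192.168.39.254 on port 8443 for the HA cluster. Once the control plane is up, two hedged spot checks (standard commands; the addresses and interface come from the manifest above) are:

    curl -k https://192.168.39.254:8443/version          # the VIP answers the Kubernetes version endpoint
    ip addr show eth0 | grep 192.168.39.254              # on the current leader node, the VIP is bound to eth0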
	I0327 23:52:49.813525 1086621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0327 23:52:49.824365 1086621 binaries.go:44] Found k8s binaries, skipping transfer
	I0327 23:52:49.824453 1086621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0327 23:52:49.834850 1086621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0327 23:52:49.852145 1086621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0327 23:52:49.869506 1086621 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0327 23:52:49.887226 1086621 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0327 23:52:49.904933 1086621 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0327 23:52:49.909004 1086621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 23:52:49.922928 1086621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:52:50.050938 1086621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 23:52:50.069329 1086621 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576 for IP: 192.168.39.47
	I0327 23:52:50.069361 1086621 certs.go:194] generating shared ca certs ...
	I0327 23:52:50.069382 1086621 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:52:50.069574 1086621 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0327 23:52:50.069625 1086621 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0327 23:52:50.069635 1086621 certs.go:256] generating profile certs ...
	I0327 23:52:50.069705 1086621 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/client.key
	I0327 23:52:50.069726 1086621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/client.crt with IP's: []
	I0327 23:52:50.366949 1086621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/client.crt ...
	I0327 23:52:50.366989 1086621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/client.crt: {Name:mk1d41578a56d1ff6fc7e659b4e37c20b338628b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:52:50.367268 1086621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/client.key ...
	I0327 23:52:50.367303 1086621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/client.key: {Name:mk706342d211e03475387d7a483acc5545792a46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:52:50.367440 1086621 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.71312e33
	I0327 23:52:50.367461 1086621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.71312e33 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.47 192.168.39.254]
	I0327 23:52:50.599407 1086621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.71312e33 ...
	I0327 23:52:50.599456 1086621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.71312e33: {Name:mk693900fd14c89f17e34a8eb0d7a534d0f67662 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:52:50.599698 1086621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.71312e33 ...
	I0327 23:52:50.599724 1086621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.71312e33: {Name:mk335ff62d1fd6fb0ca416a673648c77ad800201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:52:50.599848 1086621 certs.go:381] copying /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.71312e33 -> /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt
	I0327 23:52:50.599992 1086621 certs.go:385] copying /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.71312e33 -> /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key
	I0327 23:52:50.600079 1086621 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.key
	I0327 23:52:50.600101 1086621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.crt with IP's: []
	I0327 23:52:50.910113 1086621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.crt ...
	I0327 23:52:50.910156 1086621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.crt: {Name:mk2ce6ac8523adee2bde9e93ac88ef9a3e9fa932 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:52:50.910352 1086621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.key ...
	I0327 23:52:50.910366 1086621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.key: {Name:mkfd2d417237ed30cbebe68eb094310dc75e3e0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:52:50.910437 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0327 23:52:50.910454 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0327 23:52:50.910467 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0327 23:52:50.910479 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0327 23:52:50.910492 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0327 23:52:50.910505 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0327 23:52:50.910517 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0327 23:52:50.910529 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0327 23:52:50.910582 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0327 23:52:50.910624 1086621 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0327 23:52:50.910632 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0327 23:52:50.910654 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0327 23:52:50.910677 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0327 23:52:50.910698 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0327 23:52:50.910733 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0327 23:52:50.910768 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem -> /usr/share/ca-certificates/1076522.pem
	I0327 23:52:50.910782 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> /usr/share/ca-certificates/10765222.pem
	I0327 23:52:50.910795 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:52:50.911422 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0327 23:52:50.939932 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0327 23:52:50.971656 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0327 23:52:51.005915 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0327 23:52:51.033590 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0327 23:52:51.061650 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0327 23:52:51.121413 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0327 23:52:51.149533 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0327 23:52:51.177918 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0327 23:52:51.204176 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0327 23:52:51.229832 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0327 23:52:51.255289 1086621 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
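With the profile certificates copied into /var/lib/minikube/certs, the SANs generated above (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.47 and the VIP 192.168.39.254) can be confirmed on the node with plain openssl, for example:

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'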
	I0327 23:52:51.273246 1086621 ssh_runner.go:195] Run: openssl version
	I0327 23:52:51.279240 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0327 23:52:51.290343 1086621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0327 23:52:51.295088 1086621 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0327 23:52:51.295139 1086621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0327 23:52:51.300752 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0327 23:52:51.311496 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0327 23:52:51.322244 1086621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0327 23:52:51.326858 1086621 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0327 23:52:51.326921 1086621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0327 23:52:51.332664 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0327 23:52:51.343703 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0327 23:52:51.356685 1086621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:52:51.361385 1086621 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:52:51.361461 1086621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:52:51.367629 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
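The ln -fs calls above create the hashed symlinks OpenSSL uses for CA lookup: the 8-hex-digit file name is the certificate's subject hash, which is exactly what the preceding openssl x509 -hash invocations compute. For the minikube CA this pairs up as follows (hash value inferred from the b5213941.0 link name in the log, not re-verified here):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941
    ls -l /etc/ssl/certs/b5213941.0
    # -> /etc/ssl/certs/minikubeCA.pem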
	I0327 23:52:51.379407 1086621 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0327 23:52:51.383611 1086621 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0327 23:52:51.383671 1086621 kubeadm.go:391] StartCluster: {Name:ha-377576 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clust
erName:ha-377576 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 23:52:51.383765 1086621 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0327 23:52:51.383811 1086621 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0327 23:52:51.419239 1086621 cri.go:89] found id: ""
	I0327 23:52:51.419316 1086621 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0327 23:52:51.429183 1086621 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 23:52:51.438745 1086621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 23:52:51.448195 1086621 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0327 23:52:51.448217 1086621 kubeadm.go:156] found existing configuration files:
	
	I0327 23:52:51.448260 1086621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0327 23:52:51.457235 1086621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0327 23:52:51.457294 1086621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 23:52:51.466705 1086621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0327 23:52:51.475578 1086621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0327 23:52:51.475648 1086621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 23:52:51.485356 1086621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0327 23:52:51.494473 1086621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0327 23:52:51.494539 1086621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 23:52:51.504355 1086621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0327 23:52:51.513909 1086621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0327 23:52:51.513966 1086621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0327 23:52:51.523362 1086621 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0327 23:52:51.636705 1086621 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0327 23:52:51.636774 1086621 kubeadm.go:309] [preflight] Running pre-flight checks
	I0327 23:52:51.800363 1086621 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0327 23:52:51.800510 1086621 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0327 23:52:51.800626 1086621 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0327 23:52:52.010811 1086621 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0327 23:52:52.144657 1086621 out.go:204]   - Generating certificates and keys ...
	I0327 23:52:52.144805 1086621 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0327 23:52:52.144916 1086621 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0327 23:52:52.344008 1086621 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0327 23:52:52.741592 1086621 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0327 23:52:52.910542 1086621 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0327 23:52:53.119492 1086621 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0327 23:52:53.281640 1086621 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0327 23:52:53.281836 1086621 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-377576 localhost] and IPs [192.168.39.47 127.0.0.1 ::1]
	I0327 23:52:53.419622 1086621 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0327 23:52:53.419795 1086621 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-377576 localhost] and IPs [192.168.39.47 127.0.0.1 ::1]
	I0327 23:52:53.696144 1086621 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0327 23:52:53.818770 1086621 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0327 23:52:53.917616 1086621 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0327 23:52:53.917779 1086621 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0327 23:52:53.984881 1086621 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0327 23:52:54.055230 1086621 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0327 23:52:54.140631 1086621 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0327 23:52:54.315913 1086621 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0327 23:52:54.381453 1086621 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0327 23:52:54.382203 1086621 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0327 23:52:54.386750 1086621 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0327 23:52:54.388612 1086621 out.go:204]   - Booting up control plane ...
	I0327 23:52:54.388716 1086621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0327 23:52:54.388805 1086621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0327 23:52:54.389235 1086621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0327 23:52:54.407469 1086621 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0327 23:52:54.408355 1086621 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0327 23:52:54.408591 1086621 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0327 23:52:54.542068 1086621 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0327 23:53:01.128995 1086621 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.590435 seconds
	I0327 23:53:01.145793 1086621 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0327 23:53:01.165234 1086621 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0327 23:53:01.701282 1086621 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0327 23:53:01.701515 1086621 kubeadm.go:309] [mark-control-plane] Marking the node ha-377576 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0327 23:53:02.217178 1086621 kubeadm.go:309] [bootstrap-token] Using token: oom77v.j3g2umgvg8sl8qjv
	I0327 23:53:02.219040 1086621 out.go:204]   - Configuring RBAC rules ...
	I0327 23:53:02.219154 1086621 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0327 23:53:02.225966 1086621 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0327 23:53:02.239486 1086621 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0327 23:53:02.243901 1086621 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0327 23:53:02.249599 1086621 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0327 23:53:02.257315 1086621 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0327 23:53:02.277875 1086621 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0327 23:53:02.529458 1086621 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0327 23:53:02.636494 1086621 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0327 23:53:02.638841 1086621 kubeadm.go:309] 
	I0327 23:53:02.638921 1086621 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0327 23:53:02.638931 1086621 kubeadm.go:309] 
	I0327 23:53:02.639058 1086621 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0327 23:53:02.639079 1086621 kubeadm.go:309] 
	I0327 23:53:02.639111 1086621 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0327 23:53:02.639179 1086621 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0327 23:53:02.639245 1086621 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0327 23:53:02.639257 1086621 kubeadm.go:309] 
	I0327 23:53:02.639338 1086621 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0327 23:53:02.639349 1086621 kubeadm.go:309] 
	I0327 23:53:02.639421 1086621 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0327 23:53:02.639434 1086621 kubeadm.go:309] 
	I0327 23:53:02.639490 1086621 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0327 23:53:02.639601 1086621 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0327 23:53:02.639726 1086621 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0327 23:53:02.639745 1086621 kubeadm.go:309] 
	I0327 23:53:02.639831 1086621 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0327 23:53:02.639895 1086621 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0327 23:53:02.639901 1086621 kubeadm.go:309] 
	I0327 23:53:02.640025 1086621 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token oom77v.j3g2umgvg8sl8qjv \
	I0327 23:53:02.640166 1086621 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 \
	I0327 23:53:02.640204 1086621 kubeadm.go:309] 	--control-plane 
	I0327 23:53:02.640220 1086621 kubeadm.go:309] 
	I0327 23:53:02.640305 1086621 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0327 23:53:02.640313 1086621 kubeadm.go:309] 
	I0327 23:53:02.640402 1086621 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token oom77v.j3g2umgvg8sl8qjv \
	I0327 23:53:02.640553 1086621 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 
	I0327 23:53:02.642281 1086621 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
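The kubeadm init output above ends with ready-made join commands carrying the bootstrap token and CA cert hash. As a minimal illustrative sketch (not minikube's own code), those two values could be scraped from such output with a pair of regular expressions in Go:

package main

import (
	"fmt"
	"regexp"
)

// extractJoinParams pulls the bootstrap token and discovery CA cert hash out
// of "kubeadm init" output like the block logged above. Illustrative only.
func extractJoinParams(initOutput string) (token, caHash string) {
	tokenRe := regexp.MustCompile(`--token (\S+)`)
	hashRe := regexp.MustCompile(`--discovery-token-ca-cert-hash (\S+)`)
	if m := tokenRe.FindStringSubmatch(initOutput); m != nil {
		token = m[1]
	}
	if m := hashRe.FindStringSubmatch(initOutput); m != nil {
		caHash = m[1]
	}
	return token, caHash
}

func main() {
	// Sample line taken from the log above.
	out := "kubeadm join control-plane.minikube.internal:8443 --token oom77v.j3g2umgvg8sl8qjv --discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59"
	t, h := extractJoinParams(out)
	fmt.Println(t, h)
}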
	I0327 23:53:02.642307 1086621 cni.go:84] Creating CNI manager for ""
	I0327 23:53:02.642315 1086621 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0327 23:53:02.644064 1086621 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0327 23:53:02.645282 1086621 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0327 23:53:02.676273 1086621 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0327 23:53:02.676309 1086621 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0327 23:53:02.700454 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
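The CNI manifest is copied to the node and then applied with the bundled kubectl, as the Run line above shows. A rough, self-contained Go sketch of that command shape (the binary, kubeconfig, and manifest paths are taken from this log; this is not minikube's actual implementation):

package main

import (
	"fmt"
	"os/exec"
)

// applyManifest mirrors the command shape logged above:
//   sudo <kubectl> apply --kubeconfig=<path> -f <manifest>
func applyManifest(kubectl, kubeconfig, manifest string) error {
	cmd := exec.Command("sudo", kubectl, "apply", "--kubeconfig="+kubeconfig, "-f", manifest)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("kubectl apply failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	_ = applyManifest("/var/lib/minikube/binaries/v1.29.3/kubectl",
		"/var/lib/minikube/kubeconfig", "/var/tmp/minikube/cni.yaml")
}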
	I0327 23:53:03.122994 1086621 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0327 23:53:03.123079 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:03.123079 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-377576 minikube.k8s.io/updated_at=2024_03_27T23_53_03_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=ha-377576 minikube.k8s.io/primary=true
	I0327 23:53:03.160178 1086621 ops.go:34] apiserver oom_adj: -16
	I0327 23:53:03.264652 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:03.765712 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:04.265515 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:04.765454 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:05.265528 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:05.765468 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:06.265163 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:06.764846 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:07.264791 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:07.765632 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:08.265071 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:08.765617 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:09.265700 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:09.764872 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:10.265282 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:10.764681 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:11.265261 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:11.765608 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:12.264990 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:12.764869 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:13.264768 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:13.764825 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:14.264753 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:14.378744 1086621 kubeadm.go:1107] duration metric: took 11.255748792s to wait for elevateKubeSystemPrivileges
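The repeated "kubectl get sa default" lines above are a poll at roughly 500ms intervals until the default service account exists, summarized by the 11.25s duration metric. A minimal sketch of that wait loop (assumed structure, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls "kubectl get sa default" until it succeeds or the
// deadline passes, matching the ~500ms cadence visible in the log above.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	_ = waitForDefaultSA("/var/lib/minikube/binaries/v1.29.3/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
}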
	W0327 23:53:14.378795 1086621 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0327 23:53:14.378803 1086621 kubeadm.go:393] duration metric: took 22.995138013s to StartCluster
	I0327 23:53:14.378822 1086621 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:53:14.378931 1086621 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0327 23:53:14.379795 1086621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:53:14.380047 1086621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0327 23:53:14.380072 1086621 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0327 23:53:14.380099 1086621 start.go:240] waiting for startup goroutines ...
	I0327 23:53:14.380121 1086621 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0327 23:53:14.380173 1086621 addons.go:69] Setting storage-provisioner=true in profile "ha-377576"
	I0327 23:53:14.380229 1086621 addons.go:234] Setting addon storage-provisioner=true in "ha-377576"
	I0327 23:53:14.380246 1086621 addons.go:69] Setting default-storageclass=true in profile "ha-377576"
	I0327 23:53:14.380271 1086621 host.go:66] Checking if "ha-377576" exists ...
	I0327 23:53:14.380279 1086621 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-377576"
	I0327 23:53:14.380289 1086621 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 23:53:14.380642 1086621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:53:14.380674 1086621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:53:14.380863 1086621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:53:14.380899 1086621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:53:14.395863 1086621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42487
	I0327 23:53:14.396088 1086621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39925
	I0327 23:53:14.396447 1086621 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:53:14.396569 1086621 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:53:14.397040 1086621 main.go:141] libmachine: Using API Version  1
	I0327 23:53:14.397057 1086621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:53:14.397258 1086621 main.go:141] libmachine: Using API Version  1
	I0327 23:53:14.397286 1086621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:53:14.397372 1086621 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:53:14.397556 1086621 main.go:141] libmachine: (ha-377576) Calling .GetState
	I0327 23:53:14.397750 1086621 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:53:14.398335 1086621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:53:14.398363 1086621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:53:14.399962 1086621 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0327 23:53:14.400415 1086621 kapi.go:59] client config for ha-377576: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/client.crt", KeyFile:"/home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/client.key", CAFile:"/home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c58000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0327 23:53:14.401059 1086621 cert_rotation.go:137] Starting client certificate rotation controller
	I0327 23:53:14.401420 1086621 addons.go:234] Setting addon default-storageclass=true in "ha-377576"
	I0327 23:53:14.401478 1086621 host.go:66] Checking if "ha-377576" exists ...
	I0327 23:53:14.401883 1086621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:53:14.401921 1086621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:53:14.413982 1086621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45293
	I0327 23:53:14.414461 1086621 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:53:14.415004 1086621 main.go:141] libmachine: Using API Version  1
	I0327 23:53:14.415043 1086621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:53:14.415436 1086621 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:53:14.415672 1086621 main.go:141] libmachine: (ha-377576) Calling .GetState
	I0327 23:53:14.417095 1086621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35553
	I0327 23:53:14.417515 1086621 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0327 23:53:14.417579 1086621 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:53:14.420005 1086621 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 23:53:14.418067 1086621 main.go:141] libmachine: Using API Version  1
	I0327 23:53:14.420036 1086621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:53:14.421548 1086621 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 23:53:14.421564 1086621 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0327 23:53:14.421580 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:53:14.421978 1086621 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:53:14.422630 1086621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:53:14.422682 1086621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:53:14.424924 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:53:14.425383 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:53:14.425410 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:53:14.425610 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:53:14.425808 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:53:14.425996 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:53:14.426132 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0327 23:53:14.438960 1086621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44561
	I0327 23:53:14.439398 1086621 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:53:14.439917 1086621 main.go:141] libmachine: Using API Version  1
	I0327 23:53:14.439939 1086621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:53:14.440319 1086621 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:53:14.440593 1086621 main.go:141] libmachine: (ha-377576) Calling .GetState
	I0327 23:53:14.442301 1086621 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0327 23:53:14.442580 1086621 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0327 23:53:14.442597 1086621 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0327 23:53:14.442612 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:53:14.445647 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:53:14.446085 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:53:14.446108 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:53:14.446394 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:53:14.446606 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:53:14.446757 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:53:14.446891 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0327 23:53:14.496411 1086621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0327 23:53:14.577119 1086621 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 23:53:14.616830 1086621 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0327 23:53:15.020906 1086621 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
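The long pipeline a few lines up edits the CoreDNS ConfigMap with sed, inserting a hosts{} block for host.minikube.internal ahead of the "forward . /etc/resolv.conf" directive before replacing the ConfigMap. A rough Go equivalent of that string surgery (illustrative sketch only; the sample Corefile in main is an assumption, the IP comes from this log):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} block ahead of the "forward" directive
// in a CoreDNS Corefile, which is what the sed pipeline above achieves.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }", hostIP)
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out = append(out, hostsBlock)
		}
		out = append(out, line)
	}
	return strings.Join(out, "\n")
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}"
	fmt.Println(injectHostRecord(corefile, "192.168.39.1"))
}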
	I0327 23:53:15.288424 1086621 main.go:141] libmachine: Making call to close driver server
	I0327 23:53:15.288460 1086621 main.go:141] libmachine: (ha-377576) Calling .Close
	I0327 23:53:15.288477 1086621 main.go:141] libmachine: Making call to close driver server
	I0327 23:53:15.288501 1086621 main.go:141] libmachine: (ha-377576) Calling .Close
	I0327 23:53:15.288790 1086621 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:53:15.288798 1086621 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:53:15.288808 1086621 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:53:15.288812 1086621 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:53:15.288818 1086621 main.go:141] libmachine: Making call to close driver server
	I0327 23:53:15.288821 1086621 main.go:141] libmachine: Making call to close driver server
	I0327 23:53:15.288826 1086621 main.go:141] libmachine: (ha-377576) Calling .Close
	I0327 23:53:15.288832 1086621 main.go:141] libmachine: (ha-377576) Calling .Close
	I0327 23:53:15.289108 1086621 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:53:15.289125 1086621 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:53:15.289145 1086621 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:53:15.289160 1086621 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:53:15.289235 1086621 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0327 23:53:15.289242 1086621 round_trippers.go:469] Request Headers:
	I0327 23:53:15.289252 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:53:15.289258 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:53:15.300091 1086621 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0327 23:53:15.300824 1086621 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0327 23:53:15.300839 1086621 round_trippers.go:469] Request Headers:
	I0327 23:53:15.300847 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:53:15.300851 1086621 round_trippers.go:473]     Content-Type: application/json
	I0327 23:53:15.300855 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:53:15.305315 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:53:15.305614 1086621 main.go:141] libmachine: Making call to close driver server
	I0327 23:53:15.305636 1086621 main.go:141] libmachine: (ha-377576) Calling .Close
	I0327 23:53:15.305931 1086621 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:53:15.305948 1086621 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:53:15.305972 1086621 main.go:141] libmachine: (ha-377576) DBG | Closing plugin on server side
	I0327 23:53:15.307723 1086621 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0327 23:53:15.309131 1086621 addons.go:505] duration metric: took 929.009209ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0327 23:53:15.309164 1086621 start.go:245] waiting for cluster config update ...
	I0327 23:53:15.309184 1086621 start.go:254] writing updated cluster config ...
	I0327 23:53:15.310769 1086621 out.go:177] 
	I0327 23:53:15.312225 1086621 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 23:53:15.312306 1086621 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/config.json ...
	I0327 23:53:15.314088 1086621 out.go:177] * Starting "ha-377576-m02" control-plane node in "ha-377576" cluster
	I0327 23:53:15.315412 1086621 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0327 23:53:15.315441 1086621 cache.go:56] Caching tarball of preloaded images
	I0327 23:53:15.315561 1086621 preload.go:173] Found /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0327 23:53:15.315577 1086621 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0327 23:53:15.315664 1086621 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/config.json ...
	I0327 23:53:15.315876 1086621 start.go:360] acquireMachinesLock for ha-377576-m02: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 23:53:15.315964 1086621 start.go:364] duration metric: took 29.087µs to acquireMachinesLock for "ha-377576-m02"
	I0327 23:53:15.315989 1086621 start.go:93] Provisioning new machine with config: &{Name:ha-377576 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-377576 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0327 23:53:15.316078 1086621 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0327 23:53:15.317833 1086621 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 23:53:15.317925 1086621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:53:15.317952 1086621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:53:15.332686 1086621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38675
	I0327 23:53:15.333194 1086621 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:53:15.333658 1086621 main.go:141] libmachine: Using API Version  1
	I0327 23:53:15.333688 1086621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:53:15.334060 1086621 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:53:15.334292 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetMachineName
	I0327 23:53:15.334455 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .DriverName
	I0327 23:53:15.334652 1086621 start.go:159] libmachine.API.Create for "ha-377576" (driver="kvm2")
	I0327 23:53:15.334677 1086621 client.go:168] LocalClient.Create starting
	I0327 23:53:15.334712 1086621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem
	I0327 23:53:15.334751 1086621 main.go:141] libmachine: Decoding PEM data...
	I0327 23:53:15.334766 1086621 main.go:141] libmachine: Parsing certificate...
	I0327 23:53:15.334821 1086621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem
	I0327 23:53:15.334838 1086621 main.go:141] libmachine: Decoding PEM data...
	I0327 23:53:15.334852 1086621 main.go:141] libmachine: Parsing certificate...
	I0327 23:53:15.334868 1086621 main.go:141] libmachine: Running pre-create checks...
	I0327 23:53:15.334876 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .PreCreateCheck
	I0327 23:53:15.335043 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetConfigRaw
	I0327 23:53:15.335460 1086621 main.go:141] libmachine: Creating machine...
	I0327 23:53:15.335475 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .Create
	I0327 23:53:15.335629 1086621 main.go:141] libmachine: (ha-377576-m02) Creating KVM machine...
	I0327 23:53:15.337060 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found existing default KVM network
	I0327 23:53:15.337187 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found existing private KVM network mk-ha-377576
	I0327 23:53:15.337407 1086621 main.go:141] libmachine: (ha-377576-m02) Setting up store path in /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02 ...
	I0327 23:53:15.337427 1086621 main.go:141] libmachine: (ha-377576-m02) Building disk image from file:///home/jenkins/minikube-integration/18485-1069254/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso
	I0327 23:53:15.337508 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:15.337395 1086974 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18485-1069254/.minikube
	I0327 23:53:15.337599 1086621 main.go:141] libmachine: (ha-377576-m02) Downloading /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18485-1069254/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0327 23:53:15.585573 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:15.585404 1086974 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/id_rsa...
	I0327 23:53:15.737806 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:15.737609 1086974 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/ha-377576-m02.rawdisk...
	I0327 23:53:15.737849 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | Writing magic tar header
	I0327 23:53:15.737862 1086621 main.go:141] libmachine: (ha-377576-m02) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02 (perms=drwx------)
	I0327 23:53:15.737872 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | Writing SSH key tar header
	I0327 23:53:15.737889 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:15.737729 1086974 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02 ...
	I0327 23:53:15.737901 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02
	I0327 23:53:15.737942 1086621 main.go:141] libmachine: (ha-377576-m02) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254/.minikube/machines (perms=drwxr-xr-x)
	I0327 23:53:15.737983 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines
	I0327 23:53:15.737999 1086621 main.go:141] libmachine: (ha-377576-m02) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254/.minikube (perms=drwxr-xr-x)
	I0327 23:53:15.738014 1086621 main.go:141] libmachine: (ha-377576-m02) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254 (perms=drwxrwxr-x)
	I0327 23:53:15.738027 1086621 main.go:141] libmachine: (ha-377576-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0327 23:53:15.738048 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254/.minikube
	I0327 23:53:15.738066 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254
	I0327 23:53:15.738081 1086621 main.go:141] libmachine: (ha-377576-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0327 23:53:15.738098 1086621 main.go:141] libmachine: (ha-377576-m02) Creating domain...
	I0327 23:53:15.738114 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0327 23:53:15.738131 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | Checking permissions on dir: /home/jenkins
	I0327 23:53:15.738149 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | Checking permissions on dir: /home
	I0327 23:53:15.738162 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | Skipping /home - not owner
	I0327 23:53:15.739118 1086621 main.go:141] libmachine: (ha-377576-m02) define libvirt domain using xml: 
	I0327 23:53:15.739144 1086621 main.go:141] libmachine: (ha-377576-m02) <domain type='kvm'>
	I0327 23:53:15.739156 1086621 main.go:141] libmachine: (ha-377576-m02)   <name>ha-377576-m02</name>
	I0327 23:53:15.739173 1086621 main.go:141] libmachine: (ha-377576-m02)   <memory unit='MiB'>2200</memory>
	I0327 23:53:15.739183 1086621 main.go:141] libmachine: (ha-377576-m02)   <vcpu>2</vcpu>
	I0327 23:53:15.739195 1086621 main.go:141] libmachine: (ha-377576-m02)   <features>
	I0327 23:53:15.739207 1086621 main.go:141] libmachine: (ha-377576-m02)     <acpi/>
	I0327 23:53:15.739220 1086621 main.go:141] libmachine: (ha-377576-m02)     <apic/>
	I0327 23:53:15.739229 1086621 main.go:141] libmachine: (ha-377576-m02)     <pae/>
	I0327 23:53:15.739239 1086621 main.go:141] libmachine: (ha-377576-m02)     
	I0327 23:53:15.739248 1086621 main.go:141] libmachine: (ha-377576-m02)   </features>
	I0327 23:53:15.739261 1086621 main.go:141] libmachine: (ha-377576-m02)   <cpu mode='host-passthrough'>
	I0327 23:53:15.739298 1086621 main.go:141] libmachine: (ha-377576-m02)   
	I0327 23:53:15.739320 1086621 main.go:141] libmachine: (ha-377576-m02)   </cpu>
	I0327 23:53:15.739330 1086621 main.go:141] libmachine: (ha-377576-m02)   <os>
	I0327 23:53:15.739343 1086621 main.go:141] libmachine: (ha-377576-m02)     <type>hvm</type>
	I0327 23:53:15.739389 1086621 main.go:141] libmachine: (ha-377576-m02)     <boot dev='cdrom'/>
	I0327 23:53:15.739415 1086621 main.go:141] libmachine: (ha-377576-m02)     <boot dev='hd'/>
	I0327 23:53:15.739434 1086621 main.go:141] libmachine: (ha-377576-m02)     <bootmenu enable='no'/>
	I0327 23:53:15.739460 1086621 main.go:141] libmachine: (ha-377576-m02)   </os>
	I0327 23:53:15.739474 1086621 main.go:141] libmachine: (ha-377576-m02)   <devices>
	I0327 23:53:15.739483 1086621 main.go:141] libmachine: (ha-377576-m02)     <disk type='file' device='cdrom'>
	I0327 23:53:15.739502 1086621 main.go:141] libmachine: (ha-377576-m02)       <source file='/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/boot2docker.iso'/>
	I0327 23:53:15.739513 1086621 main.go:141] libmachine: (ha-377576-m02)       <target dev='hdc' bus='scsi'/>
	I0327 23:53:15.739532 1086621 main.go:141] libmachine: (ha-377576-m02)       <readonly/>
	I0327 23:53:15.739546 1086621 main.go:141] libmachine: (ha-377576-m02)     </disk>
	I0327 23:53:15.739559 1086621 main.go:141] libmachine: (ha-377576-m02)     <disk type='file' device='disk'>
	I0327 23:53:15.739573 1086621 main.go:141] libmachine: (ha-377576-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0327 23:53:15.739587 1086621 main.go:141] libmachine: (ha-377576-m02)       <source file='/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/ha-377576-m02.rawdisk'/>
	I0327 23:53:15.739603 1086621 main.go:141] libmachine: (ha-377576-m02)       <target dev='hda' bus='virtio'/>
	I0327 23:53:15.739616 1086621 main.go:141] libmachine: (ha-377576-m02)     </disk>
	I0327 23:53:15.739628 1086621 main.go:141] libmachine: (ha-377576-m02)     <interface type='network'>
	I0327 23:53:15.739642 1086621 main.go:141] libmachine: (ha-377576-m02)       <source network='mk-ha-377576'/>
	I0327 23:53:15.739653 1086621 main.go:141] libmachine: (ha-377576-m02)       <model type='virtio'/>
	I0327 23:53:15.739664 1086621 main.go:141] libmachine: (ha-377576-m02)     </interface>
	I0327 23:53:15.739676 1086621 main.go:141] libmachine: (ha-377576-m02)     <interface type='network'>
	I0327 23:53:15.739688 1086621 main.go:141] libmachine: (ha-377576-m02)       <source network='default'/>
	I0327 23:53:15.739697 1086621 main.go:141] libmachine: (ha-377576-m02)       <model type='virtio'/>
	I0327 23:53:15.739705 1086621 main.go:141] libmachine: (ha-377576-m02)     </interface>
	I0327 23:53:15.739715 1086621 main.go:141] libmachine: (ha-377576-m02)     <serial type='pty'>
	I0327 23:53:15.739725 1086621 main.go:141] libmachine: (ha-377576-m02)       <target port='0'/>
	I0327 23:53:15.739737 1086621 main.go:141] libmachine: (ha-377576-m02)     </serial>
	I0327 23:53:15.739748 1086621 main.go:141] libmachine: (ha-377576-m02)     <console type='pty'>
	I0327 23:53:15.739760 1086621 main.go:141] libmachine: (ha-377576-m02)       <target type='serial' port='0'/>
	I0327 23:53:15.739767 1086621 main.go:141] libmachine: (ha-377576-m02)     </console>
	I0327 23:53:15.739780 1086621 main.go:141] libmachine: (ha-377576-m02)     <rng model='virtio'>
	I0327 23:53:15.739791 1086621 main.go:141] libmachine: (ha-377576-m02)       <backend model='random'>/dev/random</backend>
	I0327 23:53:15.739802 1086621 main.go:141] libmachine: (ha-377576-m02)     </rng>
	I0327 23:53:15.739815 1086621 main.go:141] libmachine: (ha-377576-m02)     
	I0327 23:53:15.739824 1086621 main.go:141] libmachine: (ha-377576-m02)     
	I0327 23:53:15.739831 1086621 main.go:141] libmachine: (ha-377576-m02)   </devices>
	I0327 23:53:15.739841 1086621 main.go:141] libmachine: (ha-377576-m02) </domain>
	I0327 23:53:15.739847 1086621 main.go:141] libmachine: (ha-377576-m02) 
	I0327 23:53:15.747300 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:61:b9:1e in network default
	I0327 23:53:15.748104 1086621 main.go:141] libmachine: (ha-377576-m02) Ensuring networks are active...
	I0327 23:53:15.748133 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:15.748959 1086621 main.go:141] libmachine: (ha-377576-m02) Ensuring network default is active
	I0327 23:53:15.749414 1086621 main.go:141] libmachine: (ha-377576-m02) Ensuring network mk-ha-377576 is active
	I0327 23:53:15.749904 1086621 main.go:141] libmachine: (ha-377576-m02) Getting domain xml...
	I0327 23:53:15.750948 1086621 main.go:141] libmachine: (ha-377576-m02) Creating domain...
	I0327 23:53:16.994107 1086621 main.go:141] libmachine: (ha-377576-m02) Waiting to get IP...
	I0327 23:53:16.994933 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:16.995389 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | unable to find current IP address of domain ha-377576-m02 in network mk-ha-377576
	I0327 23:53:16.995419 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:16.995361 1086974 retry.go:31] will retry after 307.585701ms: waiting for machine to come up
	I0327 23:53:17.305617 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:17.306593 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | unable to find current IP address of domain ha-377576-m02 in network mk-ha-377576
	I0327 23:53:17.306623 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:17.306553 1086974 retry.go:31] will retry after 321.687137ms: waiting for machine to come up
	I0327 23:53:17.630498 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:17.630996 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | unable to find current IP address of domain ha-377576-m02 in network mk-ha-377576
	I0327 23:53:17.631028 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:17.630940 1086974 retry.go:31] will retry after 411.240849ms: waiting for machine to come up
	I0327 23:53:18.043729 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:18.044211 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | unable to find current IP address of domain ha-377576-m02 in network mk-ha-377576
	I0327 23:53:18.044244 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:18.044147 1086974 retry.go:31] will retry after 543.743675ms: waiting for machine to come up
	I0327 23:53:18.589887 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:18.590408 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | unable to find current IP address of domain ha-377576-m02 in network mk-ha-377576
	I0327 23:53:18.590439 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:18.590370 1086974 retry.go:31] will retry after 541.228138ms: waiting for machine to come up
	I0327 23:53:19.133287 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:19.133820 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | unable to find current IP address of domain ha-377576-m02 in network mk-ha-377576
	I0327 23:53:19.133854 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:19.133772 1086974 retry.go:31] will retry after 874.601632ms: waiting for machine to come up
	I0327 23:53:20.009880 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:20.010299 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | unable to find current IP address of domain ha-377576-m02 in network mk-ha-377576
	I0327 23:53:20.010336 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:20.010244 1086974 retry.go:31] will retry after 764.266491ms: waiting for machine to come up
	I0327 23:53:20.776759 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:20.777293 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | unable to find current IP address of domain ha-377576-m02 in network mk-ha-377576
	I0327 23:53:20.777322 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:20.777229 1086974 retry.go:31] will retry after 1.354206268s: waiting for machine to come up
	I0327 23:53:22.132893 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:22.133295 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | unable to find current IP address of domain ha-377576-m02 in network mk-ha-377576
	I0327 23:53:22.133328 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:22.133231 1086974 retry.go:31] will retry after 1.748976151s: waiting for machine to come up
	I0327 23:53:23.884465 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:23.884952 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | unable to find current IP address of domain ha-377576-m02 in network mk-ha-377576
	I0327 23:53:23.884985 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:23.884894 1086974 retry.go:31] will retry after 1.53502578s: waiting for machine to come up
	I0327 23:53:25.421857 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:25.422261 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | unable to find current IP address of domain ha-377576-m02 in network mk-ha-377576
	I0327 23:53:25.422293 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:25.422217 1086974 retry.go:31] will retry after 2.750520171s: waiting for machine to come up
	I0327 23:53:28.176280 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:28.176674 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | unable to find current IP address of domain ha-377576-m02 in network mk-ha-377576
	I0327 23:53:28.176704 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:28.176610 1086974 retry.go:31] will retry after 2.87947611s: waiting for machine to come up
	I0327 23:53:31.057720 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:31.058132 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | unable to find current IP address of domain ha-377576-m02 in network mk-ha-377576
	I0327 23:53:31.058168 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:31.058076 1086974 retry.go:31] will retry after 4.114177302s: waiting for machine to come up
	I0327 23:53:35.177386 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:35.177859 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | unable to find current IP address of domain ha-377576-m02 in network mk-ha-377576
	I0327 23:53:35.177882 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:35.177820 1086974 retry.go:31] will retry after 5.380971027s: waiting for machine to come up
	I0327 23:53:40.559846 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:40.560341 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has current primary IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:40.560367 1086621 main.go:141] libmachine: (ha-377576-m02) Found IP for machine: 192.168.39.117
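The "will retry after ..." lines above are a wait-for-DHCP-lease loop whose delays grow from a few hundred milliseconds up to several seconds until the new VM reports an IP. A minimal sketch of that retry pattern (assumed structure; the lookup function here is a placeholder, not libmachine's API):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP retries a lookup with growing delays, loosely mirroring the
// "will retry after ...: waiting for machine to come up" lines above.
func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay += delay / 2 // grow the delay between attempts
		}
	}
	return "", errors.New("machine did not obtain an IP in time")
}

func main() {
	_, _ = waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 3*time.Second)
}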
	I0327 23:53:40.560414 1086621 main.go:141] libmachine: (ha-377576-m02) Reserving static IP address...
	I0327 23:53:40.560762 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | unable to find host DHCP lease matching {name: "ha-377576-m02", mac: "52:54:00:bb:83:99", ip: "192.168.39.117"} in network mk-ha-377576
	I0327 23:53:40.639456 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | Getting to WaitForSSH function...
	I0327 23:53:40.639494 1086621 main.go:141] libmachine: (ha-377576-m02) Reserved static IP address: 192.168.39.117
	I0327 23:53:40.639510 1086621 main.go:141] libmachine: (ha-377576-m02) Waiting for SSH to be available...
	I0327 23:53:40.642766 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:40.643212 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:40.643244 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:40.643348 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | Using SSH client type: external
	I0327 23:53:40.643372 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/id_rsa (-rw-------)
	I0327 23:53:40.643401 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.117 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0327 23:53:40.643420 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | About to run SSH command:
	I0327 23:53:40.643442 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | exit 0
	I0327 23:53:40.774407 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | SSH cmd err, output: <nil>: 
	I0327 23:53:40.774727 1086621 main.go:141] libmachine: (ha-377576-m02) KVM machine creation complete!
	I0327 23:53:40.775008 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetConfigRaw
	I0327 23:53:40.775649 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .DriverName
	I0327 23:53:40.775857 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .DriverName
	I0327 23:53:40.776061 1086621 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0327 23:53:40.776077 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetState
	I0327 23:53:40.777381 1086621 main.go:141] libmachine: Detecting operating system of created instance...
	I0327 23:53:40.777400 1086621 main.go:141] libmachine: Waiting for SSH to be available...
	I0327 23:53:40.777407 1086621 main.go:141] libmachine: Getting to WaitForSSH function...
	I0327 23:53:40.777414 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	I0327 23:53:40.780109 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:40.780507 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:40.780541 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:40.780691 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHPort
	I0327 23:53:40.780915 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:40.781095 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:40.781254 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHUsername
	I0327 23:53:40.781480 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:53:40.781753 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0327 23:53:40.781770 1086621 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0327 23:53:40.889745 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
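The SSH liveness check above simply runs "exit 0" over the machine's generated key (with host-key checking disabled, per the ssh options logged earlier). A self-contained sketch of the same probe using golang.org/x/crypto/ssh (an external module fetched with go get; host, user, and key path are taken from this log, and this is illustrative rather than minikube's implementation):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// probeSSH connects with the machine's private key and runs "exit 0",
// the same liveness check the log shows.
func probeSSH(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0")
}

func main() {
	fmt.Println(probeSSH("192.168.39.117:22", "docker",
		"/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/id_rsa"))
}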
	I0327 23:53:40.889785 1086621 main.go:141] libmachine: Detecting the provisioner...
	I0327 23:53:40.889800 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	I0327 23:53:40.892872 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:40.893298 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:40.893332 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:40.893564 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHPort
	I0327 23:53:40.893810 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:40.893995 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:40.894136 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHUsername
	I0327 23:53:40.894318 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:53:40.894544 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0327 23:53:40.894563 1086621 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0327 23:53:41.007518 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0327 23:53:41.007587 1086621 main.go:141] libmachine: found compatible host: buildroot
	I0327 23:53:41.007595 1086621 main.go:141] libmachine: Provisioning with buildroot...
	I0327 23:53:41.007607 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetMachineName
	I0327 23:53:41.007886 1086621 buildroot.go:166] provisioning hostname "ha-377576-m02"
	I0327 23:53:41.007918 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetMachineName
	I0327 23:53:41.008130 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	I0327 23:53:41.011050 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.011449 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:41.011473 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.011618 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHPort
	I0327 23:53:41.011814 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:41.012021 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:41.012185 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHUsername
	I0327 23:53:41.012377 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:53:41.012565 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0327 23:53:41.012581 1086621 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-377576-m02 && echo "ha-377576-m02" | sudo tee /etc/hostname
	I0327 23:53:41.134757 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-377576-m02
	
	I0327 23:53:41.134796 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	I0327 23:53:41.137686 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.138037 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:41.138068 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.138260 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHPort
	I0327 23:53:41.138483 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:41.138649 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:41.138808 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHUsername
	I0327 23:53:41.138968 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:53:41.139210 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0327 23:53:41.139229 1086621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-377576-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-377576-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-377576-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0327 23:53:41.251935 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
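	(Note: the shell snippet sent over SSH above idempotently maps 127.0.1.1 to the new hostname in /etc/hosts. The following is a small Go sketch of the same edit, assuming a plain hosts file; the path and hostname are examples from this log, and this is not the code minikube runs.)

	// Sketch only: idempotently map 127.0.1.1 to a hostname in /etc/hosts,
	// mirroring the shell snippet logged above.
	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		// Already present? (any line whose last field is the hostname)
		present := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`)
		if present.Match(data) {
			return nil
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		var out string
		if loopback.Match(data) {
			out = loopback.ReplaceAllString(string(data), "127.0.1.1 "+hostname)
		} else {
			out = strings.TrimRight(string(data), "\n") + "\n127.0.1.1 " + hostname + "\n"
		}
		return os.WriteFile(path, []byte(out), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "ha-377576-m02"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
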
	I0327 23:53:41.251979 1086621 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0327 23:53:41.252003 1086621 buildroot.go:174] setting up certificates
	I0327 23:53:41.252019 1086621 provision.go:84] configureAuth start
	I0327 23:53:41.252036 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetMachineName
	I0327 23:53:41.252405 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetIP
	I0327 23:53:41.255380 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.255787 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:41.255820 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.256013 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	I0327 23:53:41.258411 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.258769 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:41.258804 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.258928 1086621 provision.go:143] copyHostCerts
	I0327 23:53:41.258965 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0327 23:53:41.259004 1086621 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0327 23:53:41.259014 1086621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0327 23:53:41.259100 1086621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0327 23:53:41.259195 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0327 23:53:41.259222 1086621 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0327 23:53:41.259237 1086621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0327 23:53:41.259277 1086621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0327 23:53:41.259338 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0327 23:53:41.259361 1086621 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0327 23:53:41.259369 1086621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0327 23:53:41.259400 1086621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0327 23:53:41.259465 1086621 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.ha-377576-m02 san=[127.0.0.1 192.168.39.117 ha-377576-m02 localhost minikube]
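	(Note: the provisioner above generates a server certificate whose SANs cover the loopback address, the node IP, the hostname and the generic names shown in the san=[...] list. The sketch below illustrates issuing a certificate with those SANs using the standard library; for brevity it is self-signed, whereas the real flow signs with the minikube CA key, so treat it as an illustration only.)

	// Illustrative sketch, not minikube's provision code: create a server
	// certificate carrying IP and DNS SANs like the ones in the log.
	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-377576-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-377576-m02", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.117")},
		}
		// Self-signed for the sketch: template doubles as its own parent.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
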
	I0327 23:53:41.409802 1086621 provision.go:177] copyRemoteCerts
	I0327 23:53:41.409872 1086621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0327 23:53:41.409899 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	I0327 23:53:41.412541 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.412892 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:41.412926 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.413127 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHPort
	I0327 23:53:41.413352 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:41.413544 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHUsername
	I0327 23:53:41.413723 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/id_rsa Username:docker}
	I0327 23:53:41.497360 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0327 23:53:41.497455 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0327 23:53:41.523437 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0327 23:53:41.523537 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0327 23:53:41.550433 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0327 23:53:41.550525 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0327 23:53:41.575575 1086621 provision.go:87] duration metric: took 323.534653ms to configureAuth
	I0327 23:53:41.575613 1086621 buildroot.go:189] setting minikube options for container-runtime
	I0327 23:53:41.575802 1086621 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 23:53:41.575910 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	I0327 23:53:41.578678 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.579104 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:41.579137 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.579293 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHPort
	I0327 23:53:41.579517 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:41.579755 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:41.579912 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHUsername
	I0327 23:53:41.580093 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:53:41.580261 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0327 23:53:41.580276 1086621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0327 23:53:41.869465 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0327 23:53:41.869492 1086621 main.go:141] libmachine: Checking connection to Docker...
	I0327 23:53:41.869501 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetURL
	I0327 23:53:41.870913 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | Using libvirt version 6000000
	I0327 23:53:41.873302 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.873661 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:41.873698 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.873831 1086621 main.go:141] libmachine: Docker is up and running!
	I0327 23:53:41.873848 1086621 main.go:141] libmachine: Reticulating splines...
	I0327 23:53:41.873855 1086621 client.go:171] duration metric: took 26.539168369s to LocalClient.Create
	I0327 23:53:41.873882 1086621 start.go:167] duration metric: took 26.539231877s to libmachine.API.Create "ha-377576"
	I0327 23:53:41.873892 1086621 start.go:293] postStartSetup for "ha-377576-m02" (driver="kvm2")
	I0327 23:53:41.873905 1086621 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0327 23:53:41.873926 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .DriverName
	I0327 23:53:41.874212 1086621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0327 23:53:41.874254 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	I0327 23:53:41.876404 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.876792 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:41.876819 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.876997 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHPort
	I0327 23:53:41.877214 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:41.877351 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHUsername
	I0327 23:53:41.877543 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/id_rsa Username:docker}
	I0327 23:53:41.961688 1086621 ssh_runner.go:195] Run: cat /etc/os-release
	I0327 23:53:41.966054 1086621 info.go:137] Remote host: Buildroot 2023.02.9
	I0327 23:53:41.966082 1086621 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0327 23:53:41.966162 1086621 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0327 23:53:41.966319 1086621 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0327 23:53:41.966337 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> /etc/ssl/certs/10765222.pem
	I0327 23:53:41.966454 1086621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0327 23:53:41.976650 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0327 23:53:42.002259 1086621 start.go:296] duration metric: took 128.327335ms for postStartSetup
	I0327 23:53:42.002321 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetConfigRaw
	I0327 23:53:42.002963 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetIP
	I0327 23:53:42.005709 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:42.006101 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:42.006134 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:42.006364 1086621 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/config.json ...
	I0327 23:53:42.006577 1086621 start.go:128] duration metric: took 26.690481281s to createHost
	I0327 23:53:42.006608 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	I0327 23:53:42.008746 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:42.009073 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:42.009100 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:42.009260 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHPort
	I0327 23:53:42.009434 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:42.009595 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:42.009706 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHUsername
	I0327 23:53:42.009895 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:53:42.010107 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0327 23:53:42.010119 1086621 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0327 23:53:42.115066 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711583622.090140495
	
	I0327 23:53:42.115091 1086621 fix.go:216] guest clock: 1711583622.090140495
	I0327 23:53:42.115099 1086621 fix.go:229] Guest: 2024-03-27 23:53:42.090140495 +0000 UTC Remote: 2024-03-27 23:53:42.006590822 +0000 UTC m=+85.996063686 (delta=83.549673ms)
	I0327 23:53:42.115121 1086621 fix.go:200] guest clock delta is within tolerance: 83.549673ms
	I0327 23:53:42.115126 1086621 start.go:83] releasing machines lock for "ha-377576-m02", held for 26.799149182s
	I0327 23:53:42.115144 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .DriverName
	I0327 23:53:42.115420 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetIP
	I0327 23:53:42.118120 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:42.118458 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:42.118508 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:42.121008 1086621 out.go:177] * Found network options:
	I0327 23:53:42.122574 1086621 out.go:177]   - NO_PROXY=192.168.39.47
	W0327 23:53:42.123842 1086621 proxy.go:119] fail to check proxy env: Error ip not in block
	I0327 23:53:42.123892 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .DriverName
	I0327 23:53:42.124441 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .DriverName
	I0327 23:53:42.124633 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .DriverName
	I0327 23:53:42.124726 1086621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0327 23:53:42.124770 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	W0327 23:53:42.124842 1086621 proxy.go:119] fail to check proxy env: Error ip not in block
	I0327 23:53:42.124926 1086621 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0327 23:53:42.124953 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	I0327 23:53:42.127640 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:42.127843 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:42.127991 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:42.128022 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:42.128158 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHPort
	I0327 23:53:42.128286 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:42.128310 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:42.128329 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:42.128471 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHUsername
	I0327 23:53:42.128542 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHPort
	I0327 23:53:42.128629 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/id_rsa Username:docker}
	I0327 23:53:42.128722 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:42.128886 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHUsername
	I0327 23:53:42.129068 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/id_rsa Username:docker}
	I0327 23:53:42.370792 1086621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0327 23:53:42.377319 1086621 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0327 23:53:42.377397 1086621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0327 23:53:42.395227 1086621 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0327 23:53:42.395252 1086621 start.go:494] detecting cgroup driver to use...
	I0327 23:53:42.395323 1086621 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0327 23:53:42.412140 1086621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 23:53:42.426584 1086621 docker.go:217] disabling cri-docker service (if available) ...
	I0327 23:53:42.426650 1086621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0327 23:53:42.441340 1086621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0327 23:53:42.456034 1086621 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0327 23:53:42.575600 1086621 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0327 23:53:42.749254 1086621 docker.go:233] disabling docker service ...
	I0327 23:53:42.749352 1086621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0327 23:53:42.766091 1086621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0327 23:53:42.780227 1086621 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0327 23:53:42.926840 1086621 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0327 23:53:43.062022 1086621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0327 23:53:43.076773 1086621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 23:53:43.096231 1086621 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0327 23:53:43.096291 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:53:43.107862 1086621 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0327 23:53:43.107934 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:53:43.119412 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:53:43.131130 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:53:43.142892 1086621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0327 23:53:43.154907 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:53:43.167171 1086621 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:53:43.186608 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:53:43.198344 1086621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0327 23:53:43.208682 1086621 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0327 23:53:43.208750 1086621 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0327 23:53:43.223268 1086621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0327 23:53:43.236108 1086621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:53:43.362449 1086621 ssh_runner.go:195] Run: sudo systemctl restart crio
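	(Note: the sed commands above rewrite keys such as pause_image, cgroup_manager and default_sysctls in /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. Below is a loose Go analogue of that edit, assuming simple "key = value" lines and ignoring TOML sections; it is a sketch, not what minikube actually executes.)

	// Rough equivalent of the sed edits above: set or replace a key in a
	// CRI-O drop-in file (assumes flat "key = value" lines).
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func setKey(path, key, value string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		var lines []string
		found := false
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := sc.Text()
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, key+" ") || strings.HasPrefix(trimmed, key+"=") {
				line = fmt.Sprintf("%s = %q", key, value)
				found = true
			}
			lines = append(lines, line)
		}
		if err := sc.Err(); err != nil {
			return err
		}
		if !found {
			lines = append(lines, fmt.Sprintf("%s = %q", key, value))
		}
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
	}

	func main() {
		// Same settings the log applies via sed.
		for k, v := range map[string]string{
			"pause_image":    "registry.k8s.io/pause:3.9",
			"cgroup_manager": "cgroupfs",
		} {
			if err := setKey("02-crio.conf", k, v); err != nil {
				fmt.Fprintln(os.Stderr, err)
				os.Exit(1)
			}
		}
	}
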
	I0327 23:53:43.515363 1086621 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0327 23:53:43.515439 1086621 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0327 23:53:43.520709 1086621 start.go:562] Will wait 60s for crictl version
	I0327 23:53:43.520773 1086621 ssh_runner.go:195] Run: which crictl
	I0327 23:53:43.524704 1086621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0327 23:53:43.568185 1086621 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0327 23:53:43.568277 1086621 ssh_runner.go:195] Run: crio --version
	I0327 23:53:43.601998 1086621 ssh_runner.go:195] Run: crio --version
	I0327 23:53:43.634026 1086621 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0327 23:53:43.635731 1086621 out.go:177]   - env NO_PROXY=192.168.39.47
	I0327 23:53:43.637324 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetIP
	I0327 23:53:43.640212 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:43.640708 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:43.640734 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:43.641028 1086621 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0327 23:53:43.645636 1086621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 23:53:43.658843 1086621 mustload.go:65] Loading cluster: ha-377576
	I0327 23:53:43.659053 1086621 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 23:53:43.659359 1086621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:53:43.659391 1086621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:53:43.674527 1086621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38387
	I0327 23:53:43.675161 1086621 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:53:43.675664 1086621 main.go:141] libmachine: Using API Version  1
	I0327 23:53:43.675684 1086621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:53:43.676021 1086621 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:53:43.676225 1086621 main.go:141] libmachine: (ha-377576) Calling .GetState
	I0327 23:53:43.677734 1086621 host.go:66] Checking if "ha-377576" exists ...
	I0327 23:53:43.678020 1086621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:53:43.678062 1086621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:53:43.693403 1086621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34833
	I0327 23:53:43.693870 1086621 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:53:43.694348 1086621 main.go:141] libmachine: Using API Version  1
	I0327 23:53:43.694368 1086621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:53:43.694707 1086621 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:53:43.694922 1086621 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0327 23:53:43.695136 1086621 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576 for IP: 192.168.39.117
	I0327 23:53:43.695150 1086621 certs.go:194] generating shared ca certs ...
	I0327 23:53:43.695171 1086621 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:53:43.695329 1086621 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0327 23:53:43.695368 1086621 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0327 23:53:43.695377 1086621 certs.go:256] generating profile certs ...
	I0327 23:53:43.695447 1086621 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/client.key
	I0327 23:53:43.695473 1086621 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.78940cd1
	I0327 23:53:43.695489 1086621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.78940cd1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.47 192.168.39.117 192.168.39.254]
	I0327 23:53:43.862402 1086621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.78940cd1 ...
	I0327 23:53:43.862434 1086621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.78940cd1: {Name:mk473d722fafe522ae7b30b1d0d075c26a7522f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:53:43.862614 1086621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.78940cd1 ...
	I0327 23:53:43.862627 1086621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.78940cd1: {Name:mk107444c4c288abfb44e45af6913a62c73f33ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:53:43.862696 1086621 certs.go:381] copying /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.78940cd1 -> /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt
	I0327 23:53:43.862816 1086621 certs.go:385] copying /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.78940cd1 -> /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key
	I0327 23:53:43.862945 1086621 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.key
	I0327 23:53:43.862962 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0327 23:53:43.862975 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0327 23:53:43.862987 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0327 23:53:43.863001 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0327 23:53:43.863014 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0327 23:53:43.863026 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0327 23:53:43.863040 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0327 23:53:43.863051 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0327 23:53:43.863106 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0327 23:53:43.863134 1086621 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0327 23:53:43.863144 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0327 23:53:43.863166 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0327 23:53:43.863187 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0327 23:53:43.863209 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0327 23:53:43.863247 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0327 23:53:43.863275 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:53:43.863289 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem -> /usr/share/ca-certificates/1076522.pem
	I0327 23:53:43.863301 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> /usr/share/ca-certificates/10765222.pem
	I0327 23:53:43.863335 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:53:43.866375 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:53:43.866734 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:53:43.866768 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:53:43.866941 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:53:43.867177 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:53:43.867362 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:53:43.867535 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0327 23:53:43.938712 1086621 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0327 23:53:43.945284 1086621 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0327 23:53:43.957915 1086621 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0327 23:53:43.962856 1086621 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0327 23:53:43.974426 1086621 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0327 23:53:43.979101 1086621 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0327 23:53:43.990511 1086621 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0327 23:53:43.995068 1086621 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0327 23:53:44.006319 1086621 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0327 23:53:44.011165 1086621 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0327 23:53:44.022259 1086621 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0327 23:53:44.026959 1086621 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0327 23:53:44.038404 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0327 23:53:44.064478 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0327 23:53:44.089667 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0327 23:53:44.114566 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0327 23:53:44.139792 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0327 23:53:44.165836 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0327 23:53:44.193411 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0327 23:53:44.221666 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0327 23:53:44.248890 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0327 23:53:44.276835 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0327 23:53:44.301323 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0327 23:53:44.326963 1086621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0327 23:53:44.344218 1086621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0327 23:53:44.361249 1086621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0327 23:53:44.378451 1086621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0327 23:53:44.395826 1086621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0327 23:53:44.413371 1086621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0327 23:53:44.431510 1086621 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0327 23:53:44.449406 1086621 ssh_runner.go:195] Run: openssl version
	I0327 23:53:44.455442 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0327 23:53:44.466460 1086621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:53:44.471040 1086621 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:53:44.471114 1086621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:53:44.476932 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0327 23:53:44.488161 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0327 23:53:44.498965 1086621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0327 23:53:44.503800 1086621 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0327 23:53:44.503860 1086621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0327 23:53:44.509935 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0327 23:53:44.520774 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0327 23:53:44.532167 1086621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0327 23:53:44.536691 1086621 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0327 23:53:44.536741 1086621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0327 23:53:44.542677 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0327 23:53:44.553713 1086621 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0327 23:53:44.557898 1086621 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0327 23:53:44.557960 1086621 kubeadm.go:928] updating node {m02 192.168.39.117 8443 v1.29.3 crio true true} ...
	I0327 23:53:44.558066 1086621 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-377576-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.117
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-377576 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
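	(Note: the kubelet unit text printed above is later written to the node as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, followed by a daemon-reload, as the log shows further down. The sketch below shows that shape of operation in Go; it must run as root on a systemd host, the content is abbreviated from the log, and it is not minikube's actual code.)

	// Sketch (paths per the log, content abbreviated): write the kubelet
	// systemd drop-in and reload units.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
	)

	const dropIn = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-377576-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.117

	[Install]
	`

	func main() {
		dir := "/etc/systemd/system/kubelet.service.d"
		if err := os.MkdirAll(dir, 0755); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if err := os.WriteFile(filepath.Join(dir, "10-kubeadm.conf"), []byte(dropIn), 0644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Pick up the new unit configuration, as the log does with daemon-reload.
		if out, err := exec.Command("systemctl", "daemon-reload").CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "daemon-reload: %v: %s\n", err, out)
			os.Exit(1)
		}
	}
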
	I0327 23:53:44.558095 1086621 kube-vip.go:111] generating kube-vip config ...
	I0327 23:53:44.558139 1086621 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0327 23:53:44.576189 1086621 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0327 23:53:44.576311 1086621 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
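	(Note: the manifest above is the kube-vip static pod that provides the HA virtual IP 192.168.39.254 with control-plane load-balancing enabled. The snippet below is a much-reduced sketch of generating such a manifest from a text/template, parameterised only by VIP and port; minikube's real template is the fuller one shown above.)

	// Small sketch of rendering a kube-vip static-pod manifest from a template.
	package main

	import (
		"os"
		"text/template"
	)

	const manifest = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args: ["manager"]
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "{{.Port}}"
	    - name: address
	      value: {{.VIP}}
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    name: kube-vip
	  hostNetwork: true
	`

	func main() {
		t := template.Must(template.New("kube-vip").Parse(manifest))
		// Values taken from the log: the HA virtual IP and the API server port.
		_ = t.Execute(os.Stdout, struct {
			VIP  string
			Port int
		}{VIP: "192.168.39.254", Port: 8443})
	}
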
	I0327 23:53:44.576393 1086621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0327 23:53:44.586776 1086621 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0327 23:53:44.586863 1086621 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0327 23:53:44.596914 1086621 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/linux/amd64/v1.29.3/kubeadm
	I0327 23:53:44.596935 1086621 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0327 23:53:44.596939 1086621 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/linux/amd64/v1.29.3/kubelet
	I0327 23:53:44.596960 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/linux/amd64/v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0327 23:53:44.597048 1086621 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0327 23:53:44.601577 1086621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0327 23:53:44.601604 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/linux/amd64/v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0327 23:54:16.611463 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/linux/amd64/v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0327 23:54:16.611547 1086621 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0327 23:54:16.617537 1086621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0327 23:54:16.617569 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/linux/amd64/v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0327 23:54:57.459077 1086621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 23:54:57.477351 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/linux/amd64/v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0327 23:54:57.477490 1086621 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0327 23:54:57.482346 1086621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0327 23:54:57.482380 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/linux/amd64/v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
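	(Note: the kubeadm/kubelet/kubectl downloads above use URLs of the form .../kubelet?checksum=file:.../kubelet.sha256, i.e. the binary is verified against its published digest before being cached and copied to the node. Below is a hedged sketch of that pattern, fetching a file and checking its SHA-256 against the corresponding .sha256 file; URLs and the output path are illustrative, and this is not minikube's download code.)

	// Sketch under assumptions: download a release binary and verify it
	// against its published .sha256 file (first hex field).
	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		return io.ReadAll(resp.Body)
	}

	func main() {
		base := "https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl"
		bin, err := fetch(base)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		sumFile, err := fetch(base + ".sha256")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fields := strings.Fields(string(sumFile))
		if len(fields) == 0 {
			fmt.Fprintln(os.Stderr, "empty checksum file")
			os.Exit(1)
		}
		got := sha256.Sum256(bin)
		if hex.EncodeToString(got[:]) != fields[0] {
			fmt.Fprintln(os.Stderr, "checksum mismatch")
			os.Exit(1)
		}
		if err := os.WriteFile("kubectl", bin, 0755); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("kubectl verified and saved")
	}
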
	I0327 23:54:57.935005 1086621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0327 23:54:57.944551 1086621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0327 23:54:57.961813 1086621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0327 23:54:57.979150 1086621 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0327 23:54:57.996772 1086621 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0327 23:54:58.000922 1086621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 23:54:58.014371 1086621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:54:58.130424 1086621 ssh_runner.go:195] Run: sudo systemctl start kubelet
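The grep plus bash one-liner above pins control-plane.minikube.internal to the HA virtual IP in /etc/hosts before kubelet is restarted. A small Go sketch of the same rewrite (drop any stale line for the name, append the VIP mapping); it targets a throwaway path rather than /etc/hosts so it can be run safely, and pinControlPlane is a made-up name for illustration.

package main

import (
	"fmt"
	"os"
	"strings"
)

func pinControlPlane(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(strings.TrimSpace(line), name) {
			continue // stale entry; equivalent to the grep -v in the log
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	_ = pinControlPlane("/tmp/hosts.example", "192.168.39.254", "control-plane.minikube.internal")
}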
	I0327 23:54:58.147860 1086621 host.go:66] Checking if "ha-377576" exists ...
	I0327 23:54:58.148199 1086621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:54:58.148238 1086621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:54:58.164481 1086621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40685
	I0327 23:54:58.164951 1086621 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:54:58.165554 1086621 main.go:141] libmachine: Using API Version  1
	I0327 23:54:58.165600 1086621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:54:58.165947 1086621 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:54:58.166200 1086621 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0327 23:54:58.166401 1086621 start.go:316] joinCluster: &{Name:ha-377576 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cluster
Name:ha-377576 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 23:54:58.166526 1086621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0327 23:54:58.166546 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:54:58.170248 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:54:58.170750 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:54:58.170784 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:54:58.170994 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:54:58.171235 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:54:58.171438 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:54:58.171628 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0327 23:54:58.347215 1086621 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0327 23:54:58.347278 1086621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2506a6.td1hnn5cxoz7asyy --discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-377576-m02 --control-plane --apiserver-advertise-address=192.168.39.117 --apiserver-bind-port=8443"
	I0327 23:55:23.115782 1086621 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2506a6.td1hnn5cxoz7asyy --discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-377576-m02 --control-plane --apiserver-advertise-address=192.168.39.117 --apiserver-bind-port=8443": (24.768463732s)
	I0327 23:55:23.115836 1086621 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0327 23:55:23.717993 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-377576-m02 minikube.k8s.io/updated_at=2024_03_27T23_55_23_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=ha-377576 minikube.k8s.io/primary=false
	I0327 23:55:23.866652 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-377576-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0327 23:55:23.985138 1086621 start.go:318] duration metric: took 25.818732645s to joinCluster
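The joinCluster step above boils down to a short sequence of shell commands run on two hosts: generate a join command on the primary, run kubeadm join --control-plane on the new node, then label the node and drop the control-plane NoSchedule taint from the primary. The sketch below just strings abbreviated versions of those commands together locally with os/exec; in the test each one runs over SSH on the appropriate machine, and the real token, CA hash, and extra flags (--ignore-preflight-errors, --cri-socket, the full label set) come from the log above.

package main

import (
	"fmt"
	"os/exec"
)

func run(cmd string) error {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("$ %s\n%s", cmd, out)
	return err
}

func main() {
	joinCmd := "kubeadm join control-plane.minikube.internal:8443 " +
		"--token <token> --discovery-token-ca-cert-hash sha256:<hash> " +
		"--control-plane --apiserver-advertise-address 192.168.39.117 --apiserver-bind-port 8443"
	steps := []string{
		"kubeadm token create --print-join-command --ttl=0",                                   // on the primary
		joinCmd,                                                                               // on the joining node
		"kubectl label --overwrite nodes ha-377576-m02 minikube.k8s.io/primary=false",         // back on the primary
		"kubectl taint nodes ha-377576-m02 node-role.kubernetes.io/control-plane:NoSchedule-", // allow scheduling control-plane workloads
	}
	for _, s := range steps {
		if err := run(s); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}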
	I0327 23:55:23.985234 1086621 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0327 23:55:23.988265 1086621 out.go:177] * Verifying Kubernetes components...
	I0327 23:55:23.985583 1086621 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 23:55:23.989818 1086621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:55:24.271450 1086621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 23:55:24.292190 1086621 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0327 23:55:24.292545 1086621 kapi.go:59] client config for ha-377576: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/client.crt", KeyFile:"/home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/client.key", CAFile:"/home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]strin
g(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c58000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0327 23:55:24.292636 1086621 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.47:8443
	I0327 23:55:24.292998 1086621 node_ready.go:35] waiting up to 6m0s for node "ha-377576-m02" to be "Ready" ...
	I0327 23:55:24.293114 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:24.293127 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:24.293165 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:24.293175 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:24.304501 1086621 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0327 23:55:24.793739 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:24.793766 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:24.793776 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:24.793781 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:24.797308 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:25.293222 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:25.293244 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:25.293252 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:25.293257 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:25.296587 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:25.794096 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:25.794123 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:25.794131 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:25.794136 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:25.798629 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:55:26.293775 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:26.293808 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:26.293821 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:26.293826 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:26.298026 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:55:26.298971 1086621 node_ready.go:53] node "ha-377576-m02" has status "Ready":"False"
	I0327 23:55:26.793379 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:26.793404 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:26.793413 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:26.793417 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:26.797180 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:27.293219 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:27.293248 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:27.293260 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:27.293265 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:27.297031 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:27.794307 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:27.794340 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:27.794353 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:27.794359 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:27.797961 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:28.293800 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:28.293832 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:28.293847 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:28.293852 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:28.297787 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:28.794223 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:28.794284 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:28.794294 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:28.794303 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:28.798436 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:55:28.799159 1086621 node_ready.go:53] node "ha-377576-m02" has status "Ready":"False"
	I0327 23:55:29.293448 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:29.293478 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:29.293489 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:29.293494 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:29.306909 1086621 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0327 23:55:29.793961 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:29.793986 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:29.793995 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:29.794003 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:29.797902 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:30.293860 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:30.293884 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:30.293894 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:30.293899 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:30.297751 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:30.298614 1086621 node_ready.go:49] node "ha-377576-m02" has status "Ready":"True"
	I0327 23:55:30.298634 1086621 node_ready.go:38] duration metric: took 6.005611952s for node "ha-377576-m02" to be "Ready" ...
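The round_trippers lines above are a polling loop: GET /api/v1/nodes/ha-377576-m02 roughly every 500ms until the node reports a Ready condition of "True". A stripped-down version of that loop follows, assuming an already-authenticated *http.Client and base URL (the real code builds the client from the kubeconfig's client certificates); waitNodeReady and nodeStatus are illustrative names, not minikube's.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func waitNodeReady(c *http.Client, base, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := c.Get(base + "/api/v1/nodes/" + name)
		if err == nil {
			var n nodeStatus
			_ = json.NewDecoder(resp.Body).Decode(&n)
			resp.Body.Close()
			for _, cond := range n.Status.Conditions {
				if cond.Type == "Ready" && cond.Status == "True" {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~half-second cadence in the log
	}
	return fmt.Errorf("node %q not Ready within %s", name, timeout)
}

func main() {
	// Placeholder client/URL; a real caller authenticates with the cluster's client cert.
	_ = waitNodeReady(http.DefaultClient, "https://192.168.39.47:8443", "ha-377576-m02", 6*time.Minute)
}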
	I0327 23:55:30.298643 1086621 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0327 23:55:30.298712 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods
	I0327 23:55:30.298724 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:30.298730 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:30.298734 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:30.304126 1086621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:55:30.310345 1086621 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-47npx" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:30.310428 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-47npx
	I0327 23:55:30.310437 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:30.310445 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:30.310449 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:30.314793 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:55:30.315952 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:55:30.315968 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:30.315976 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:30.315979 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:30.319079 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:30.319742 1086621 pod_ready.go:92] pod "coredns-76f75df574-47npx" in "kube-system" namespace has status "Ready":"True"
	I0327 23:55:30.319759 1086621 pod_ready.go:81] duration metric: took 9.391861ms for pod "coredns-76f75df574-47npx" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:30.319769 1086621 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-msv9s" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:30.319828 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-msv9s
	I0327 23:55:30.319837 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:30.319843 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:30.319847 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:30.322989 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:30.323881 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:55:30.323897 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:30.323907 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:30.323913 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:30.326602 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:55:30.327132 1086621 pod_ready.go:92] pod "coredns-76f75df574-msv9s" in "kube-system" namespace has status "Ready":"True"
	I0327 23:55:30.327150 1086621 pod_ready.go:81] duration metric: took 7.373142ms for pod "coredns-76f75df574-msv9s" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:30.327163 1086621 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:30.327228 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576
	I0327 23:55:30.327238 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:30.327249 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:30.327258 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:30.329942 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:55:30.330747 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:55:30.330762 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:30.330770 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:30.330776 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:30.333524 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:55:30.334031 1086621 pod_ready.go:92] pod "etcd-ha-377576" in "kube-system" namespace has status "Ready":"True"
	I0327 23:55:30.334047 1086621 pod_ready.go:81] duration metric: took 6.873231ms for pod "etcd-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:30.334057 1086621 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:30.334115 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m02
	I0327 23:55:30.334126 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:30.334136 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:30.334140 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:30.336929 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:55:30.337645 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:30.337659 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:30.337668 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:30.337673 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:30.340451 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:55:30.835099 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m02
	I0327 23:55:30.835125 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:30.835136 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:30.835141 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:30.839257 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:55:30.840455 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:30.840477 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:30.840488 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:30.840494 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:30.844353 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:31.335155 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m02
	I0327 23:55:31.335181 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:31.335189 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:31.335195 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:31.344449 1086621 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0327 23:55:31.345199 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:31.345215 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:31.345225 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:31.345230 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:31.350967 1086621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:55:31.834618 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m02
	I0327 23:55:31.834647 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:31.834659 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:31.834664 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:31.838397 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:31.839441 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:31.839457 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:31.839465 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:31.839469 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:31.843573 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:55:32.335111 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m02
	I0327 23:55:32.335140 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:32.335148 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:32.335153 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:32.338841 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:32.339703 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:32.339721 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:32.339733 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:32.339739 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:32.342668 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:55:32.343321 1086621 pod_ready.go:102] pod "etcd-ha-377576-m02" in "kube-system" namespace has status "Ready":"False"
	I0327 23:55:32.834438 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m02
	I0327 23:55:32.834470 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:32.834482 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:32.834488 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:32.840254 1086621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:55:32.841752 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:32.841769 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:32.841777 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:32.841782 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:32.853503 1086621 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0327 23:55:33.335021 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m02
	I0327 23:55:33.335053 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:33.335064 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:33.335075 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:33.338674 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:33.339876 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:33.339892 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:33.339903 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:33.339910 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:33.343215 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:33.834501 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m02
	I0327 23:55:33.834526 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:33.834534 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:33.834538 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:33.838710 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:55:33.839574 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:33.839592 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:33.839599 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:33.839603 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:33.843186 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:34.334283 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m02
	I0327 23:55:34.334317 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:34.334329 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:34.334336 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:34.339550 1086621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:55:34.340850 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:34.340867 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:34.340875 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:34.340881 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:34.346262 1086621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:55:34.347506 1086621 pod_ready.go:92] pod "etcd-ha-377576-m02" in "kube-system" namespace has status "Ready":"True"
	I0327 23:55:34.347527 1086621 pod_ready.go:81] duration metric: took 4.013462372s for pod "etcd-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:34.347542 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:34.347613 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-377576
	I0327 23:55:34.347623 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:34.347636 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:34.347646 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:34.353573 1086621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:55:34.354358 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:55:34.354377 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:34.354387 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:34.354392 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:34.358321 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:34.359040 1086621 pod_ready.go:92] pod "kube-apiserver-ha-377576" in "kube-system" namespace has status "Ready":"True"
	I0327 23:55:34.359058 1086621 pod_ready.go:81] duration metric: took 11.509065ms for pod "kube-apiserver-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:34.359067 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:34.359122 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-377576-m02
	I0327 23:55:34.359130 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:34.359136 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:34.359140 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:34.362904 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:34.363503 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:34.363520 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:34.363526 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:34.363531 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:34.367669 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:55:34.368113 1086621 pod_ready.go:92] pod "kube-apiserver-ha-377576-m02" in "kube-system" namespace has status "Ready":"True"
	I0327 23:55:34.368131 1086621 pod_ready.go:81] duration metric: took 9.057067ms for pod "kube-apiserver-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:34.368142 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:34.494286 1086621 request.go:629] Waited for 126.036919ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-377576
	I0327 23:55:34.494359 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-377576
	I0327 23:55:34.494364 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:34.494372 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:34.494380 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:34.498108 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:34.694304 1086621 request.go:629] Waited for 195.386085ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:55:34.694393 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:55:34.694401 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:34.694411 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:34.694415 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:34.698177 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:34.699219 1086621 pod_ready.go:92] pod "kube-controller-manager-ha-377576" in "kube-system" namespace has status "Ready":"True"
	I0327 23:55:34.699243 1086621 pod_ready.go:81] duration metric: took 331.095005ms for pod "kube-controller-manager-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:34.699256 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:34.894348 1086621 request.go:629] Waited for 194.995133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-377576-m02
	I0327 23:55:34.894433 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-377576-m02
	I0327 23:55:34.894441 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:34.894452 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:34.894462 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:34.898559 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:55:35.094833 1086621 request.go:629] Waited for 195.405826ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:35.094906 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:35.094911 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:35.094919 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:35.094924 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:35.098340 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:35.099062 1086621 pod_ready.go:92] pod "kube-controller-manager-ha-377576-m02" in "kube-system" namespace has status "Ready":"True"
	I0327 23:55:35.099082 1086621 pod_ready.go:81] duration metric: took 399.817994ms for pod "kube-controller-manager-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:35.099097 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4t77p" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:35.294222 1086621 request.go:629] Waited for 195.021189ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4t77p
	I0327 23:55:35.294311 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4t77p
	I0327 23:55:35.294318 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:35.294329 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:35.294336 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:35.299084 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:55:35.493975 1086621 request.go:629] Waited for 194.213986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:55:35.494046 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:55:35.494051 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:35.494058 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:35.494062 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:35.497449 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:35.498360 1086621 pod_ready.go:92] pod "kube-proxy-4t77p" in "kube-system" namespace has status "Ready":"True"
	I0327 23:55:35.498384 1086621 pod_ready.go:81] duration metric: took 399.278414ms for pod "kube-proxy-4t77p" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:35.498398 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k9dcr" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:35.694463 1086621 request.go:629] Waited for 195.979619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k9dcr
	I0327 23:55:35.694532 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k9dcr
	I0327 23:55:35.694539 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:35.694546 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:35.694552 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:35.698729 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:55:35.894897 1086621 request.go:629] Waited for 195.396289ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:35.894965 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:35.894970 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:35.894978 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:35.894981 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:35.899097 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:55:35.899611 1086621 pod_ready.go:92] pod "kube-proxy-k9dcr" in "kube-system" namespace has status "Ready":"True"
	I0327 23:55:35.899633 1086621 pod_ready.go:81] duration metric: took 401.224891ms for pod "kube-proxy-k9dcr" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:35.899644 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:36.094331 1086621 request.go:629] Waited for 194.589054ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-377576
	I0327 23:55:36.094405 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-377576
	I0327 23:55:36.094410 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:36.094419 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:36.094423 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:36.098005 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:36.294386 1086621 request.go:629] Waited for 195.567508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:55:36.294452 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:55:36.294457 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:36.294465 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:36.294471 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:36.298034 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:36.298650 1086621 pod_ready.go:92] pod "kube-scheduler-ha-377576" in "kube-system" namespace has status "Ready":"True"
	I0327 23:55:36.298676 1086621 pod_ready.go:81] duration metric: took 399.022593ms for pod "kube-scheduler-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:36.298691 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:36.494776 1086621 request.go:629] Waited for 195.998292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-377576-m02
	I0327 23:55:36.494870 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-377576-m02
	I0327 23:55:36.494876 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:36.494884 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:36.494890 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:36.500470 1086621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:55:36.693969 1086621 request.go:629] Waited for 192.303867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:36.694052 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:36.694061 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:36.694072 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:36.694077 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:36.701098 1086621 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0327 23:55:36.701748 1086621 pod_ready.go:92] pod "kube-scheduler-ha-377576-m02" in "kube-system" namespace has status "Ready":"True"
	I0327 23:55:36.701779 1086621 pod_ready.go:81] duration metric: took 403.071107ms for pod "kube-scheduler-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:36.701799 1086621 pod_ready.go:38] duration metric: took 6.40314322s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0327 23:55:36.701827 1086621 api_server.go:52] waiting for apiserver process to appear ...
	I0327 23:55:36.701907 1086621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 23:55:36.718672 1086621 api_server.go:72] duration metric: took 12.733392053s to wait for apiserver process to appear ...
	I0327 23:55:36.718705 1086621 api_server.go:88] waiting for apiserver healthz status ...
	I0327 23:55:36.718730 1086621 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I0327 23:55:36.723277 1086621 api_server.go:279] https://192.168.39.47:8443/healthz returned 200:
	ok
	I0327 23:55:36.723362 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/version
	I0327 23:55:36.723378 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:36.723389 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:36.723397 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:36.724525 1086621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0327 23:55:36.724636 1086621 api_server.go:141] control plane version: v1.29.3
	I0327 23:55:36.724654 1086621 api_server.go:131] duration metric: took 5.942511ms to wait for apiserver health ...
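The health check above treats the apiserver as ready once GET /healthz returns 200 with the body "ok", then reads the control-plane version from GET /version. A bare-bones sketch of that probe; TLS and client-certificate setup are omitted (InsecureSkipVerify is only to keep the sketch short, whereas the real client trusts the cluster CA and presents a client cert).

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	base := "https://192.168.39.47:8443"

	resp, err := client.Get(base + "/healthz")
	if err != nil {
		fmt.Println("healthz request failed:", err)
		return
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok

	resp, err = client.Get(base + "/version")
	if err != nil {
		fmt.Println("version request failed:", err)
		return
	}
	defer resp.Body.Close()
	var v struct {
		GitVersion string `json:"gitVersion"`
	}
	_ = json.NewDecoder(resp.Body).Decode(&v)
	fmt.Println("control plane version:", v.GitVersion) // e.g. v1.29.3
}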
	I0327 23:55:36.724663 1086621 system_pods.go:43] waiting for kube-system pods to appear ...
	I0327 23:55:36.894000 1086621 request.go:629] Waited for 169.256759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods
	I0327 23:55:36.894083 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods
	I0327 23:55:36.894088 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:36.894096 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:36.894100 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:36.899406 1086621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:55:36.904633 1086621 system_pods.go:59] 17 kube-system pods found
	I0327 23:55:36.904666 1086621 system_pods.go:61] "coredns-76f75df574-47npx" [968d63e4-f44a-4e52-b6c0-04e0ed1a068e] Running
	I0327 23:55:36.904671 1086621 system_pods.go:61] "coredns-76f75df574-msv9s" [7c549358-2f35-4345-aa7a-8bbbcfc4ef01] Running
	I0327 23:55:36.904675 1086621 system_pods.go:61] "etcd-ha-377576" [885cacaa-1b61-4f8b-90b5-3f7dbc9df4ad] Running
	I0327 23:55:36.904678 1086621 system_pods.go:61] "etcd-ha-377576-m02" [c3fa0266-db99-4bf1-a3b4-2f050d69e2ff] Running
	I0327 23:55:36.904682 1086621 system_pods.go:61] "kindnet-5zmtk" [4e75cdc5-22da-47f2-9833-b2f4eaa9caac] Running
	I0327 23:55:36.904691 1086621 system_pods.go:61] "kindnet-6wmmc" [ef36a453-2352-47f7-8a75-72abc4004e82] Running
	I0327 23:55:36.904694 1086621 system_pods.go:61] "kube-apiserver-ha-377576" [a1a979ea-0199-4e24-af63-c79b32a66c0e] Running
	I0327 23:55:36.904699 1086621 system_pods.go:61] "kube-apiserver-ha-377576-m02" [516bd332-2602-4380-aac0-3fd71f0834cb] Running
	I0327 23:55:36.904702 1086621 system_pods.go:61] "kube-controller-manager-ha-377576" [f72d4847-2902-4e1f-8852-bdcc020a6099] Running
	I0327 23:55:36.904707 1086621 system_pods.go:61] "kube-controller-manager-ha-377576-m02" [a3e945b8-d18c-434c-b8a7-70510fbce333] Running
	I0327 23:55:36.904711 1086621 system_pods.go:61] "kube-proxy-4t77p" [27eff0c9-9b45-4530-aba9-1a5e0ca60802] Running
	I0327 23:55:36.904714 1086621 system_pods.go:61] "kube-proxy-k9dcr" [07c785f3-3b08-4f43-b957-5f4092f757ea] Running
	I0327 23:55:36.904721 1086621 system_pods.go:61] "kube-scheduler-ha-377576" [6b97a544-a0e8-4c35-b93c-197f200da53b] Running
	I0327 23:55:36.904725 1086621 system_pods.go:61] "kube-scheduler-ha-377576-m02" [91c25780-d677-4394-9624-31dfaec279c3] Running
	I0327 23:55:36.904731 1086621 system_pods.go:61] "kube-vip-ha-377576" [2d4dd5f7-c798-4a52-97f5-4bc068603373] Running
	I0327 23:55:36.904734 1086621 system_pods.go:61] "kube-vip-ha-377576-m02" [dde68b43-553a-4d1b-ad7f-5284653080e4] Running
	I0327 23:55:36.904738 1086621 system_pods.go:61] "storage-provisioner" [9000645c-8323-43af-bd87-011d1574493c] Running
	I0327 23:55:36.904745 1086621 system_pods.go:74] duration metric: took 180.073661ms to wait for pod list to return data ...
	I0327 23:55:36.904757 1086621 default_sa.go:34] waiting for default service account to be created ...
	I0327 23:55:37.094201 1086621 request.go:629] Waited for 189.350418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/default/serviceaccounts
	I0327 23:55:37.094280 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/default/serviceaccounts
	I0327 23:55:37.094287 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:37.094295 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:37.094300 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:37.098334 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:55:37.098559 1086621 default_sa.go:45] found service account: "default"
	I0327 23:55:37.098577 1086621 default_sa.go:55] duration metric: took 193.811552ms for default service account to be created ...
	I0327 23:55:37.098587 1086621 system_pods.go:116] waiting for k8s-apps to be running ...
	I0327 23:55:37.294816 1086621 request.go:629] Waited for 196.134816ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods
	I0327 23:55:37.294900 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods
	I0327 23:55:37.294909 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:37.294921 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:37.294928 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:37.301717 1086621 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0327 23:55:37.306164 1086621 system_pods.go:86] 17 kube-system pods found
	I0327 23:55:37.306188 1086621 system_pods.go:89] "coredns-76f75df574-47npx" [968d63e4-f44a-4e52-b6c0-04e0ed1a068e] Running
	I0327 23:55:37.306193 1086621 system_pods.go:89] "coredns-76f75df574-msv9s" [7c549358-2f35-4345-aa7a-8bbbcfc4ef01] Running
	I0327 23:55:37.306198 1086621 system_pods.go:89] "etcd-ha-377576" [885cacaa-1b61-4f8b-90b5-3f7dbc9df4ad] Running
	I0327 23:55:37.306202 1086621 system_pods.go:89] "etcd-ha-377576-m02" [c3fa0266-db99-4bf1-a3b4-2f050d69e2ff] Running
	I0327 23:55:37.306206 1086621 system_pods.go:89] "kindnet-5zmtk" [4e75cdc5-22da-47f2-9833-b2f4eaa9caac] Running
	I0327 23:55:37.306209 1086621 system_pods.go:89] "kindnet-6wmmc" [ef36a453-2352-47f7-8a75-72abc4004e82] Running
	I0327 23:55:37.306213 1086621 system_pods.go:89] "kube-apiserver-ha-377576" [a1a979ea-0199-4e24-af63-c79b32a66c0e] Running
	I0327 23:55:37.306218 1086621 system_pods.go:89] "kube-apiserver-ha-377576-m02" [516bd332-2602-4380-aac0-3fd71f0834cb] Running
	I0327 23:55:37.306224 1086621 system_pods.go:89] "kube-controller-manager-ha-377576" [f72d4847-2902-4e1f-8852-bdcc020a6099] Running
	I0327 23:55:37.306252 1086621 system_pods.go:89] "kube-controller-manager-ha-377576-m02" [a3e945b8-d18c-434c-b8a7-70510fbce333] Running
	I0327 23:55:37.306263 1086621 system_pods.go:89] "kube-proxy-4t77p" [27eff0c9-9b45-4530-aba9-1a5e0ca60802] Running
	I0327 23:55:37.306269 1086621 system_pods.go:89] "kube-proxy-k9dcr" [07c785f3-3b08-4f43-b957-5f4092f757ea] Running
	I0327 23:55:37.306275 1086621 system_pods.go:89] "kube-scheduler-ha-377576" [6b97a544-a0e8-4c35-b93c-197f200da53b] Running
	I0327 23:55:37.306279 1086621 system_pods.go:89] "kube-scheduler-ha-377576-m02" [91c25780-d677-4394-9624-31dfaec279c3] Running
	I0327 23:55:37.306284 1086621 system_pods.go:89] "kube-vip-ha-377576" [2d4dd5f7-c798-4a52-97f5-4bc068603373] Running
	I0327 23:55:37.306287 1086621 system_pods.go:89] "kube-vip-ha-377576-m02" [dde68b43-553a-4d1b-ad7f-5284653080e4] Running
	I0327 23:55:37.306291 1086621 system_pods.go:89] "storage-provisioner" [9000645c-8323-43af-bd87-011d1574493c] Running
	I0327 23:55:37.306302 1086621 system_pods.go:126] duration metric: took 207.709153ms to wait for k8s-apps to be running ...
	I0327 23:55:37.306311 1086621 system_svc.go:44] waiting for kubelet service to be running ....
	I0327 23:55:37.306373 1086621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 23:55:37.322496 1086621 system_svc.go:56] duration metric: took 16.172159ms WaitForService to wait for kubelet
	I0327 23:55:37.322528 1086621 kubeadm.go:576] duration metric: took 13.337255798s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 23:55:37.322554 1086621 node_conditions.go:102] verifying NodePressure condition ...
	I0327 23:55:37.493952 1086621 request.go:629] Waited for 171.283703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes
	I0327 23:55:37.494023 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes
	I0327 23:55:37.494030 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:37.494045 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:37.494050 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:37.497664 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:37.498677 1086621 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0327 23:55:37.498704 1086621 node_conditions.go:123] node cpu capacity is 2
	I0327 23:55:37.498719 1086621 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0327 23:55:37.498723 1086621 node_conditions.go:123] node cpu capacity is 2
	I0327 23:55:37.498729 1086621 node_conditions.go:105] duration metric: took 176.168713ms to run NodePressure ...
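The NodePressure step above simply lists the nodes and reads each node's reported capacity (the cpu and ephemeral-storage figures printed in the log). A sketch of that read, decoding only those fields; the placeholder http.Get call stands in for the authenticated, TLS-configured client the real code uses.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Capacity map[string]string `json:"capacity"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	resp, err := http.Get("https://192.168.39.47:8443/api/v1/nodes") // placeholder; real client uses cluster certs
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	var nl nodeList
	if err := json.NewDecoder(resp.Body).Decode(&nl); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, n := range nl.Items {
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Metadata.Name, n.Status.Capacity["cpu"], n.Status.Capacity["ephemeral-storage"])
	}
}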
	I0327 23:55:37.498743 1086621 start.go:240] waiting for startup goroutines ...
	I0327 23:55:37.498779 1086621 start.go:254] writing updated cluster config ...
	I0327 23:55:37.501217 1086621 out.go:177] 
	I0327 23:55:37.502852 1086621 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 23:55:37.502986 1086621 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/config.json ...
	I0327 23:55:37.504915 1086621 out.go:177] * Starting "ha-377576-m03" control-plane node in "ha-377576" cluster
	I0327 23:55:37.506153 1086621 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0327 23:55:37.506174 1086621 cache.go:56] Caching tarball of preloaded images
	I0327 23:55:37.506317 1086621 preload.go:173] Found /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0327 23:55:37.506331 1086621 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0327 23:55:37.506437 1086621 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/config.json ...
	I0327 23:55:37.506637 1086621 start.go:360] acquireMachinesLock for ha-377576-m03: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 23:55:37.506687 1086621 start.go:364] duration metric: took 26.886µs to acquireMachinesLock for "ha-377576-m03"
	I0327 23:55:37.506713 1086621 start.go:93] Provisioning new machine with config: &{Name:ha-377576 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-377576 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0327 23:55:37.506843 1086621 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0327 23:55:37.508415 1086621 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 23:55:37.508527 1086621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:55:37.508575 1086621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:55:37.523640 1086621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41441
	I0327 23:55:37.524175 1086621 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:55:37.524653 1086621 main.go:141] libmachine: Using API Version  1
	I0327 23:55:37.524676 1086621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:55:37.524988 1086621 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:55:37.525204 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetMachineName
	I0327 23:55:37.525353 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .DriverName
	I0327 23:55:37.525516 1086621 start.go:159] libmachine.API.Create for "ha-377576" (driver="kvm2")
	I0327 23:55:37.525548 1086621 client.go:168] LocalClient.Create starting
	I0327 23:55:37.525588 1086621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem
	I0327 23:55:37.525627 1086621 main.go:141] libmachine: Decoding PEM data...
	I0327 23:55:37.525653 1086621 main.go:141] libmachine: Parsing certificate...
	I0327 23:55:37.525745 1086621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem
	I0327 23:55:37.525773 1086621 main.go:141] libmachine: Decoding PEM data...
	I0327 23:55:37.525788 1086621 main.go:141] libmachine: Parsing certificate...
	I0327 23:55:37.525818 1086621 main.go:141] libmachine: Running pre-create checks...
	I0327 23:55:37.525830 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .PreCreateCheck
	I0327 23:55:37.525984 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetConfigRaw
	I0327 23:55:37.526423 1086621 main.go:141] libmachine: Creating machine...
	I0327 23:55:37.526442 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .Create
	I0327 23:55:37.526566 1086621 main.go:141] libmachine: (ha-377576-m03) Creating KVM machine...
	I0327 23:55:37.527825 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found existing default KVM network
	I0327 23:55:37.527940 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found existing private KVM network mk-ha-377576
	I0327 23:55:37.528044 1086621 main.go:141] libmachine: (ha-377576-m03) Setting up store path in /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03 ...
	I0327 23:55:37.528064 1086621 main.go:141] libmachine: (ha-377576-m03) Building disk image from file:///home/jenkins/minikube-integration/18485-1069254/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso
	I0327 23:55:37.528132 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:37.528034 1087512 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18485-1069254/.minikube
	I0327 23:55:37.528270 1086621 main.go:141] libmachine: (ha-377576-m03) Downloading /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18485-1069254/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0327 23:55:37.781950 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:37.781824 1087512 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/id_rsa...
	I0327 23:55:37.902971 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:37.902840 1087512 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/ha-377576-m03.rawdisk...
	I0327 23:55:37.903017 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Writing magic tar header
	I0327 23:55:37.903031 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Writing SSH key tar header
	I0327 23:55:37.903049 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:37.902965 1087512 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03 ...
	I0327 23:55:37.903064 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03
	I0327 23:55:37.903137 1086621 main.go:141] libmachine: (ha-377576-m03) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03 (perms=drwx------)
	I0327 23:55:37.903180 1086621 main.go:141] libmachine: (ha-377576-m03) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254/.minikube/machines (perms=drwxr-xr-x)
	I0327 23:55:37.903198 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines
	I0327 23:55:37.903213 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254/.minikube
	I0327 23:55:37.903223 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254
	I0327 23:55:37.903240 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0327 23:55:37.903249 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Checking permissions on dir: /home/jenkins
	I0327 23:55:37.903263 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Checking permissions on dir: /home
	I0327 23:55:37.903276 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Skipping /home - not owner
	I0327 23:55:37.903285 1086621 main.go:141] libmachine: (ha-377576-m03) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254/.minikube (perms=drwxr-xr-x)
	I0327 23:55:37.903297 1086621 main.go:141] libmachine: (ha-377576-m03) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254 (perms=drwxrwxr-x)
	I0327 23:55:37.903306 1086621 main.go:141] libmachine: (ha-377576-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0327 23:55:37.903317 1086621 main.go:141] libmachine: (ha-377576-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0327 23:55:37.903326 1086621 main.go:141] libmachine: (ha-377576-m03) Creating domain...
	I0327 23:55:37.904291 1086621 main.go:141] libmachine: (ha-377576-m03) define libvirt domain using xml: 
	I0327 23:55:37.904316 1086621 main.go:141] libmachine: (ha-377576-m03) <domain type='kvm'>
	I0327 23:55:37.904326 1086621 main.go:141] libmachine: (ha-377576-m03)   <name>ha-377576-m03</name>
	I0327 23:55:37.904334 1086621 main.go:141] libmachine: (ha-377576-m03)   <memory unit='MiB'>2200</memory>
	I0327 23:55:37.904343 1086621 main.go:141] libmachine: (ha-377576-m03)   <vcpu>2</vcpu>
	I0327 23:55:37.904350 1086621 main.go:141] libmachine: (ha-377576-m03)   <features>
	I0327 23:55:37.904363 1086621 main.go:141] libmachine: (ha-377576-m03)     <acpi/>
	I0327 23:55:37.904371 1086621 main.go:141] libmachine: (ha-377576-m03)     <apic/>
	I0327 23:55:37.904380 1086621 main.go:141] libmachine: (ha-377576-m03)     <pae/>
	I0327 23:55:37.904386 1086621 main.go:141] libmachine: (ha-377576-m03)     
	I0327 23:55:37.904394 1086621 main.go:141] libmachine: (ha-377576-m03)   </features>
	I0327 23:55:37.904401 1086621 main.go:141] libmachine: (ha-377576-m03)   <cpu mode='host-passthrough'>
	I0327 23:55:37.904413 1086621 main.go:141] libmachine: (ha-377576-m03)   
	I0327 23:55:37.904420 1086621 main.go:141] libmachine: (ha-377576-m03)   </cpu>
	I0327 23:55:37.904428 1086621 main.go:141] libmachine: (ha-377576-m03)   <os>
	I0327 23:55:37.904441 1086621 main.go:141] libmachine: (ha-377576-m03)     <type>hvm</type>
	I0327 23:55:37.904452 1086621 main.go:141] libmachine: (ha-377576-m03)     <boot dev='cdrom'/>
	I0327 23:55:37.904457 1086621 main.go:141] libmachine: (ha-377576-m03)     <boot dev='hd'/>
	I0327 23:55:37.904463 1086621 main.go:141] libmachine: (ha-377576-m03)     <bootmenu enable='no'/>
	I0327 23:55:37.904468 1086621 main.go:141] libmachine: (ha-377576-m03)   </os>
	I0327 23:55:37.904473 1086621 main.go:141] libmachine: (ha-377576-m03)   <devices>
	I0327 23:55:37.904478 1086621 main.go:141] libmachine: (ha-377576-m03)     <disk type='file' device='cdrom'>
	I0327 23:55:37.904488 1086621 main.go:141] libmachine: (ha-377576-m03)       <source file='/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/boot2docker.iso'/>
	I0327 23:55:37.904494 1086621 main.go:141] libmachine: (ha-377576-m03)       <target dev='hdc' bus='scsi'/>
	I0327 23:55:37.904499 1086621 main.go:141] libmachine: (ha-377576-m03)       <readonly/>
	I0327 23:55:37.904503 1086621 main.go:141] libmachine: (ha-377576-m03)     </disk>
	I0327 23:55:37.904510 1086621 main.go:141] libmachine: (ha-377576-m03)     <disk type='file' device='disk'>
	I0327 23:55:37.904520 1086621 main.go:141] libmachine: (ha-377576-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0327 23:55:37.904529 1086621 main.go:141] libmachine: (ha-377576-m03)       <source file='/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/ha-377576-m03.rawdisk'/>
	I0327 23:55:37.904534 1086621 main.go:141] libmachine: (ha-377576-m03)       <target dev='hda' bus='virtio'/>
	I0327 23:55:37.904539 1086621 main.go:141] libmachine: (ha-377576-m03)     </disk>
	I0327 23:55:37.904550 1086621 main.go:141] libmachine: (ha-377576-m03)     <interface type='network'>
	I0327 23:55:37.904559 1086621 main.go:141] libmachine: (ha-377576-m03)       <source network='mk-ha-377576'/>
	I0327 23:55:37.904563 1086621 main.go:141] libmachine: (ha-377576-m03)       <model type='virtio'/>
	I0327 23:55:37.904568 1086621 main.go:141] libmachine: (ha-377576-m03)     </interface>
	I0327 23:55:37.904576 1086621 main.go:141] libmachine: (ha-377576-m03)     <interface type='network'>
	I0327 23:55:37.904581 1086621 main.go:141] libmachine: (ha-377576-m03)       <source network='default'/>
	I0327 23:55:37.904586 1086621 main.go:141] libmachine: (ha-377576-m03)       <model type='virtio'/>
	I0327 23:55:37.904592 1086621 main.go:141] libmachine: (ha-377576-m03)     </interface>
	I0327 23:55:37.904604 1086621 main.go:141] libmachine: (ha-377576-m03)     <serial type='pty'>
	I0327 23:55:37.904638 1086621 main.go:141] libmachine: (ha-377576-m03)       <target port='0'/>
	I0327 23:55:37.904667 1086621 main.go:141] libmachine: (ha-377576-m03)     </serial>
	I0327 23:55:37.904692 1086621 main.go:141] libmachine: (ha-377576-m03)     <console type='pty'>
	I0327 23:55:37.904714 1086621 main.go:141] libmachine: (ha-377576-m03)       <target type='serial' port='0'/>
	I0327 23:55:37.904728 1086621 main.go:141] libmachine: (ha-377576-m03)     </console>
	I0327 23:55:37.904741 1086621 main.go:141] libmachine: (ha-377576-m03)     <rng model='virtio'>
	I0327 23:55:37.904753 1086621 main.go:141] libmachine: (ha-377576-m03)       <backend model='random'>/dev/random</backend>
	I0327 23:55:37.904759 1086621 main.go:141] libmachine: (ha-377576-m03)     </rng>
	I0327 23:55:37.904765 1086621 main.go:141] libmachine: (ha-377576-m03)     
	I0327 23:55:37.904776 1086621 main.go:141] libmachine: (ha-377576-m03)     
	I0327 23:55:37.904788 1086621 main.go:141] libmachine: (ha-377576-m03)   </devices>
	I0327 23:55:37.904800 1086621 main.go:141] libmachine: (ha-377576-m03) </domain>
	I0327 23:55:37.904815 1086621 main.go:141] libmachine: (ha-377576-m03) 
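
For readability, the libvirt domain definition emitted line by line in the log above reassembles to the following XML. The content is taken verbatim from the log; nothing has been added or changed:

<domain type='kvm'>
  <name>ha-377576-m03</name>
  <memory unit='MiB'>2200</memory>
  <vcpu>2</vcpu>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <cpu mode='host-passthrough'>
  </cpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
    <bootmenu enable='no'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/boot2docker.iso'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads' />
      <source file='/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/ha-377576-m03.rawdisk'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='mk-ha-377576'/>
      <model type='virtio'/>
    </interface>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <rng model='virtio'>
      <backend model='random'>/dev/random</backend>
    </rng>
  </devices>
</domain>
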
	I0327 23:55:37.912683 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:fd:17:0d in network default
	I0327 23:55:37.913367 1086621 main.go:141] libmachine: (ha-377576-m03) Ensuring networks are active...
	I0327 23:55:37.913395 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:55:37.914179 1086621 main.go:141] libmachine: (ha-377576-m03) Ensuring network default is active
	I0327 23:55:37.914598 1086621 main.go:141] libmachine: (ha-377576-m03) Ensuring network mk-ha-377576 is active
	I0327 23:55:37.915024 1086621 main.go:141] libmachine: (ha-377576-m03) Getting domain xml...
	I0327 23:55:37.915693 1086621 main.go:141] libmachine: (ha-377576-m03) Creating domain...
	I0327 23:55:39.175839 1086621 main.go:141] libmachine: (ha-377576-m03) Waiting to get IP...
	I0327 23:55:39.176610 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:55:39.176972 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find current IP address of domain ha-377576-m03 in network mk-ha-377576
	I0327 23:55:39.177023 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:39.176972 1087512 retry.go:31] will retry after 213.405089ms: waiting for machine to come up
	I0327 23:55:39.392470 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:55:39.392959 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find current IP address of domain ha-377576-m03 in network mk-ha-377576
	I0327 23:55:39.392990 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:39.392901 1087512 retry.go:31] will retry after 348.371793ms: waiting for machine to come up
	I0327 23:55:39.742502 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:55:39.742929 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find current IP address of domain ha-377576-m03 in network mk-ha-377576
	I0327 23:55:39.742959 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:39.742881 1087512 retry.go:31] will retry after 367.169553ms: waiting for machine to come up
	I0327 23:55:40.111395 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:55:40.111861 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find current IP address of domain ha-377576-m03 in network mk-ha-377576
	I0327 23:55:40.111894 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:40.111820 1087512 retry.go:31] will retry after 591.714034ms: waiting for machine to come up
	I0327 23:55:40.705655 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:55:40.706080 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find current IP address of domain ha-377576-m03 in network mk-ha-377576
	I0327 23:55:40.706114 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:40.706016 1087512 retry.go:31] will retry after 697.427889ms: waiting for machine to come up
	I0327 23:55:41.404887 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:55:41.405382 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find current IP address of domain ha-377576-m03 in network mk-ha-377576
	I0327 23:55:41.405411 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:41.405339 1087512 retry.go:31] will retry after 639.33076ms: waiting for machine to come up
	I0327 23:55:42.045878 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:55:42.046307 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find current IP address of domain ha-377576-m03 in network mk-ha-377576
	I0327 23:55:42.046339 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:42.046258 1087512 retry.go:31] will retry after 958.955128ms: waiting for machine to come up
	I0327 23:55:43.008657 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:55:43.009179 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find current IP address of domain ha-377576-m03 in network mk-ha-377576
	I0327 23:55:43.009215 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:43.009116 1087512 retry.go:31] will retry after 1.019044797s: waiting for machine to come up
	I0327 23:55:44.029473 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:55:44.030014 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find current IP address of domain ha-377576-m03 in network mk-ha-377576
	I0327 23:55:44.030056 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:44.029969 1087512 retry.go:31] will retry after 1.285580774s: waiting for machine to come up
	I0327 23:55:45.317500 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:55:45.317917 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find current IP address of domain ha-377576-m03 in network mk-ha-377576
	I0327 23:55:45.317946 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:45.317865 1087512 retry.go:31] will retry after 1.460536362s: waiting for machine to come up
	I0327 23:55:46.780529 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:55:46.781026 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find current IP address of domain ha-377576-m03 in network mk-ha-377576
	I0327 23:55:46.781062 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:46.780958 1087512 retry.go:31] will retry after 1.920245901s: waiting for machine to come up
	I0327 23:55:48.703319 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:55:48.703729 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find current IP address of domain ha-377576-m03 in network mk-ha-377576
	I0327 23:55:48.703764 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:48.703692 1087512 retry.go:31] will retry after 2.714118256s: waiting for machine to come up
	I0327 23:55:51.419327 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:55:51.419720 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find current IP address of domain ha-377576-m03 in network mk-ha-377576
	I0327 23:55:51.419814 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:51.419681 1087512 retry.go:31] will retry after 3.81300902s: waiting for machine to come up
	I0327 23:55:55.235976 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:55:55.236562 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find current IP address of domain ha-377576-m03 in network mk-ha-377576
	I0327 23:55:55.236606 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:55.236497 1087512 retry.go:31] will retry after 5.681513625s: waiting for machine to come up
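
The wait loop above (the "retry.go:31] will retry after ..." lines, growing from roughly 213 ms to 5.7 s) is a randomized, increasing backoff while the new domain waits for a DHCP lease. The following is a minimal Go sketch of that pattern only; the function names and intervals here are illustrative and are not minikube's actual retry package API:

// Sketch: poll for the machine's IP with jittered, growing backoff.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline expires,
// sleeping for a randomized, growing interval between attempts.
func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	start := time.Now()
	backoff := 200 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		// Add jitter and grow the interval, roughly matching the
		// 213ms -> 5.7s progression seen in the log above.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff = backoff * 3 / 2
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 5 {
			return "", errors.New("no DHCP lease yet") // simulate lease not ready
		}
		return "192.168.39.101", nil
	}, 2*time.Minute)
	fmt.Println(ip, err)
}
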
	I0327 23:56:00.921564 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:00.922124 1086621 main.go:141] libmachine: (ha-377576-m03) Found IP for machine: 192.168.39.101
	I0327 23:56:00.922150 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has current primary IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:00.922157 1086621 main.go:141] libmachine: (ha-377576-m03) Reserving static IP address...
	I0327 23:56:00.922600 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find host DHCP lease matching {name: "ha-377576-m03", mac: "52:54:00:f5:c1:99", ip: "192.168.39.101"} in network mk-ha-377576
	I0327 23:56:01.005124 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Getting to WaitForSSH function...
	I0327 23:56:01.005161 1086621 main.go:141] libmachine: (ha-377576-m03) Reserved static IP address: 192.168.39.101
	I0327 23:56:01.005175 1086621 main.go:141] libmachine: (ha-377576-m03) Waiting for SSH to be available...
	I0327 23:56:01.008093 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:01.008481 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576
	I0327 23:56:01.008507 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find defined IP address of network mk-ha-377576 interface with MAC address 52:54:00:f5:c1:99
	I0327 23:56:01.008732 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Using SSH client type: external
	I0327 23:56:01.008781 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/id_rsa (-rw-------)
	I0327 23:56:01.008884 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0327 23:56:01.008912 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | About to run SSH command:
	I0327 23:56:01.008933 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | exit 0
	I0327 23:56:01.013657 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | SSH cmd err, output: exit status 255: 
	I0327 23:56:01.013687 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0327 23:56:01.013696 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | command : exit 0
	I0327 23:56:01.013705 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | err     : exit status 255
	I0327 23:56:01.013716 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | output  : 
	I0327 23:56:04.014436 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Getting to WaitForSSH function...
	I0327 23:56:04.017146 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.017559 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:04.017590 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.017784 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Using SSH client type: external
	I0327 23:56:04.017816 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/id_rsa (-rw-------)
	I0327 23:56:04.017852 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.101 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0327 23:56:04.017866 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | About to run SSH command:
	I0327 23:56:04.017883 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | exit 0
	I0327 23:56:04.146450 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | SSH cmd err, output: <nil>: 
	I0327 23:56:04.146807 1086621 main.go:141] libmachine: (ha-377576-m03) KVM machine creation complete!
	I0327 23:56:04.147186 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetConfigRaw
	I0327 23:56:04.147800 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .DriverName
	I0327 23:56:04.148014 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .DriverName
	I0327 23:56:04.148192 1086621 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0327 23:56:04.148208 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetState
	I0327 23:56:04.149562 1086621 main.go:141] libmachine: Detecting operating system of created instance...
	I0327 23:56:04.149578 1086621 main.go:141] libmachine: Waiting for SSH to be available...
	I0327 23:56:04.149584 1086621 main.go:141] libmachine: Getting to WaitForSSH function...
	I0327 23:56:04.149590 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	I0327 23:56:04.151903 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.152268 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:04.152294 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.152424 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHPort
	I0327 23:56:04.152647 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:04.152804 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:04.152957 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHUsername
	I0327 23:56:04.153129 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:56:04.153428 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0327 23:56:04.153447 1086621 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0327 23:56:04.270314 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 23:56:04.270341 1086621 main.go:141] libmachine: Detecting the provisioner...
	I0327 23:56:04.270349 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	I0327 23:56:04.273191 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.273642 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:04.273654 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.273881 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHPort
	I0327 23:56:04.274129 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:04.274359 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:04.274558 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHUsername
	I0327 23:56:04.274773 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:56:04.274982 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0327 23:56:04.274996 1086621 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0327 23:56:04.391643 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0327 23:56:04.391731 1086621 main.go:141] libmachine: found compatible host: buildroot
	I0327 23:56:04.391742 1086621 main.go:141] libmachine: Provisioning with buildroot...
	I0327 23:56:04.391755 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetMachineName
	I0327 23:56:04.392045 1086621 buildroot.go:166] provisioning hostname "ha-377576-m03"
	I0327 23:56:04.392085 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetMachineName
	I0327 23:56:04.392332 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	I0327 23:56:04.395471 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.395879 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:04.395899 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.396170 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHPort
	I0327 23:56:04.396388 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:04.396560 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:04.396725 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHUsername
	I0327 23:56:04.396923 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:56:04.397099 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0327 23:56:04.397112 1086621 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-377576-m03 && echo "ha-377576-m03" | sudo tee /etc/hostname
	I0327 23:56:04.526624 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-377576-m03
	
	I0327 23:56:04.526666 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	I0327 23:56:04.529734 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.530188 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:04.530223 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.530423 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHPort
	I0327 23:56:04.530657 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:04.530839 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:04.530983 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHUsername
	I0327 23:56:04.531143 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:56:04.531312 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0327 23:56:04.531329 1086621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-377576-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-377576-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-377576-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0327 23:56:04.656477 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 23:56:04.656524 1086621 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0327 23:56:04.656548 1086621 buildroot.go:174] setting up certificates
	I0327 23:56:04.656560 1086621 provision.go:84] configureAuth start
	I0327 23:56:04.656574 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetMachineName
	I0327 23:56:04.656952 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetIP
	I0327 23:56:04.659851 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.660404 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:04.660435 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.660622 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	I0327 23:56:04.663429 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.663922 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:04.663955 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.664149 1086621 provision.go:143] copyHostCerts
	I0327 23:56:04.664195 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0327 23:56:04.664244 1086621 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0327 23:56:04.664257 1086621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0327 23:56:04.664337 1086621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0327 23:56:04.664439 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0327 23:56:04.664466 1086621 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0327 23:56:04.664474 1086621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0327 23:56:04.664517 1086621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0327 23:56:04.664584 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0327 23:56:04.664612 1086621 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0327 23:56:04.664619 1086621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0327 23:56:04.664665 1086621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0327 23:56:04.664759 1086621 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.ha-377576-m03 san=[127.0.0.1 192.168.39.101 ha-377576-m03 localhost minikube]
	I0327 23:56:04.763355 1086621 provision.go:177] copyRemoteCerts
	I0327 23:56:04.763432 1086621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0327 23:56:04.763471 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	I0327 23:56:04.766276 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.766663 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:04.766696 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.766868 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHPort
	I0327 23:56:04.767136 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:04.767338 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHUsername
	I0327 23:56:04.767517 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/id_rsa Username:docker}
	I0327 23:56:04.857439 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0327 23:56:04.857522 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0327 23:56:04.883431 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0327 23:56:04.883549 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0327 23:56:04.911355 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0327 23:56:04.911443 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0327 23:56:04.940002 1086621 provision.go:87] duration metric: took 283.428319ms to configureAuth
	I0327 23:56:04.940031 1086621 buildroot.go:189] setting minikube options for container-runtime
	I0327 23:56:04.940251 1086621 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 23:56:04.940334 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	I0327 23:56:04.943213 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.943612 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:04.943646 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.943831 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHPort
	I0327 23:56:04.944044 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:04.944224 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:04.944375 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHUsername
	I0327 23:56:04.944525 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:56:04.944709 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0327 23:56:04.944735 1086621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0327 23:56:05.233217 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0327 23:56:05.233261 1086621 main.go:141] libmachine: Checking connection to Docker...
	I0327 23:56:05.233273 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetURL
	I0327 23:56:05.234691 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Using libvirt version 6000000
	I0327 23:56:05.237542 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:05.237920 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:05.237957 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:05.238141 1086621 main.go:141] libmachine: Docker is up and running!
	I0327 23:56:05.238162 1086621 main.go:141] libmachine: Reticulating splines...
	I0327 23:56:05.238171 1086621 client.go:171] duration metric: took 27.712611142s to LocalClient.Create
	I0327 23:56:05.238203 1086621 start.go:167] duration metric: took 27.712688435s to libmachine.API.Create "ha-377576"
	I0327 23:56:05.238216 1086621 start.go:293] postStartSetup for "ha-377576-m03" (driver="kvm2")
	I0327 23:56:05.238244 1086621 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0327 23:56:05.238270 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .DriverName
	I0327 23:56:05.238562 1086621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0327 23:56:05.238589 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	I0327 23:56:05.241038 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:05.241541 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:05.241570 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:05.241715 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHPort
	I0327 23:56:05.241945 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:05.242142 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHUsername
	I0327 23:56:05.242283 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/id_rsa Username:docker}
	I0327 23:56:05.330275 1086621 ssh_runner.go:195] Run: cat /etc/os-release
	I0327 23:56:05.335255 1086621 info.go:137] Remote host: Buildroot 2023.02.9
	I0327 23:56:05.335292 1086621 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0327 23:56:05.335360 1086621 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0327 23:56:05.335454 1086621 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0327 23:56:05.335470 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> /etc/ssl/certs/10765222.pem
	I0327 23:56:05.335573 1086621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0327 23:56:05.346463 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0327 23:56:05.373007 1086621 start.go:296] duration metric: took 134.775912ms for postStartSetup
	I0327 23:56:05.373075 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetConfigRaw
	I0327 23:56:05.373682 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetIP
	I0327 23:56:05.376460 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:05.376885 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:05.376927 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:05.377243 1086621 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/config.json ...
	I0327 23:56:05.377492 1086621 start.go:128] duration metric: took 27.870631426s to createHost
	I0327 23:56:05.377557 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	I0327 23:56:05.379971 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:05.380233 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:05.380262 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:05.380486 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHPort
	I0327 23:56:05.380689 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:05.380881 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:05.381022 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHUsername
	I0327 23:56:05.381191 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:56:05.381400 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0327 23:56:05.381420 1086621 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0327 23:56:05.499346 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711583765.475969753
	
	I0327 23:56:05.499376 1086621 fix.go:216] guest clock: 1711583765.475969753
	I0327 23:56:05.499385 1086621 fix.go:229] Guest: 2024-03-27 23:56:05.475969753 +0000 UTC Remote: 2024-03-27 23:56:05.377506121 +0000 UTC m=+229.366978974 (delta=98.463632ms)
	I0327 23:56:05.499403 1086621 fix.go:200] guest clock delta is within tolerance: 98.463632ms
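
The "guest clock" lines above compare the VM's clock, read over SSH with `date +%s.%N` (the log formatter prints the verbs as %!s(MISSING)), against the host's clock and accept the machine when the drift is small. A minimal Go sketch of that check, using the two timestamps from the log; the 1 s tolerance here is an assumption for illustration, not minikube's actual threshold:

// Sketch: parse the guest's "seconds.nanoseconds" clock and compare to the host.
package main

import (
	"fmt"
	"strconv"
	"time"
)

// parseGuestClock turns the "seconds.nanoseconds" string returned by the VM
// into a time.Time (float parsing loses sub-microsecond precision, which is
// fine for a drift check).
func parseGuestClock(out string) (time.Time, error) {
	f, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(f)
	nsec := int64((f - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = 1 * time.Second // assumed threshold, not minikube's actual value

	guest, err := parseGuestClock("1711583765.475969753") // guest clock value from the log above
	if err != nil {
		panic(err)
	}
	host := time.Date(2024, 3, 27, 23, 56, 5, 377506121, time.UTC) // host-side timestamp from the log
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
}
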
	I0327 23:56:05.499408 1086621 start.go:83] releasing machines lock for "ha-377576-m03", held for 27.99270788s
	I0327 23:56:05.499430 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .DriverName
	I0327 23:56:05.499716 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetIP
	I0327 23:56:05.502554 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:05.502975 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:05.502999 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:05.505355 1086621 out.go:177] * Found network options:
	I0327 23:56:05.506658 1086621 out.go:177]   - NO_PROXY=192.168.39.47,192.168.39.117
	W0327 23:56:05.507868 1086621 proxy.go:119] fail to check proxy env: Error ip not in block
	W0327 23:56:05.507887 1086621 proxy.go:119] fail to check proxy env: Error ip not in block
	I0327 23:56:05.507901 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .DriverName
	I0327 23:56:05.508396 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .DriverName
	I0327 23:56:05.508587 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .DriverName
	I0327 23:56:05.508704 1086621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0327 23:56:05.508749 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	W0327 23:56:05.508814 1086621 proxy.go:119] fail to check proxy env: Error ip not in block
	W0327 23:56:05.508850 1086621 proxy.go:119] fail to check proxy env: Error ip not in block
	I0327 23:56:05.508923 1086621 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0327 23:56:05.508946 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	I0327 23:56:05.511547 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:05.511662 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:05.511959 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:05.511985 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:05.512016 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:05.512032 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:05.512304 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHPort
	I0327 23:56:05.512317 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHPort
	I0327 23:56:05.512518 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:05.512579 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:05.512681 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHUsername
	I0327 23:56:05.512779 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHUsername
	I0327 23:56:05.512878 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/id_rsa Username:docker}
	I0327 23:56:05.512928 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/id_rsa Username:docker}
	I0327 23:56:05.764029 1086621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0327 23:56:05.770387 1086621 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0327 23:56:05.770458 1086621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0327 23:56:05.787525 1086621 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
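
The find/mv step above renames every bridge and podman CNI config under /etc/cni/net.d to *.mk_disabled so that only the CNI minikube configures later stays active. A rough Go equivalent of that rename pass (illustrative only, not minikube's implementation):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNIConfigs renames bridge/podman CNI configs in dir by
    // appending ".mk_disabled", mirroring the `find ... -exec mv` in the log.
    func disableBridgeCNIConfigs(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
                continue
            }
            src := filepath.Join(dir, name)
            if err := os.Rename(src, src+".mk_disabled"); err != nil {
                return disabled, err
            }
            disabled = append(disabled, src)
        }
        return disabled, nil
    }

    func main() {
        disabled, err := disableBridgeCNIConfigs("/etc/cni/net.d")
        fmt.Println(disabled, err)
    }
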
	I0327 23:56:05.787556 1086621 start.go:494] detecting cgroup driver to use...
	I0327 23:56:05.787625 1086621 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0327 23:56:05.804936 1086621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 23:56:05.820067 1086621 docker.go:217] disabling cri-docker service (if available) ...
	I0327 23:56:05.820146 1086621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0327 23:56:05.835624 1086621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0327 23:56:05.850885 1086621 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0327 23:56:05.979530 1086621 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0327 23:56:06.163328 1086621 docker.go:233] disabling docker service ...
	I0327 23:56:06.163417 1086621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0327 23:56:06.181733 1086621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0327 23:56:06.196697 1086621 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0327 23:56:06.323799 1086621 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0327 23:56:06.452459 1086621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0327 23:56:06.466660 1086621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 23:56:06.486969 1086621 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0327 23:56:06.487057 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:56:06.498247 1086621 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0327 23:56:06.498338 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:56:06.509119 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:56:06.520341 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:56:06.531966 1086621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0327 23:56:06.543892 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:56:06.555435 1086621 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:56:06.575004 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:56:06.586736 1086621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0327 23:56:06.597234 1086621 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0327 23:56:06.597306 1086621 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0327 23:56:06.612465 1086621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
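
When the `sysctl net.bridge.bridge-nf-call-iptables` probe fails because /proc/sys/net/bridge does not exist yet, minikube falls back to loading br_netfilter and then enables IPv4 forwarding, as the two commands above show. A sketch of those two steps in Go, shelling out for modprobe and writing /proc/sys directly (assumes it runs as root on the node):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func enableBridgeNetfilterAndForwarding() error {
        // Load the kernel module that creates /proc/sys/net/bridge/*.
        if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
            log.Printf("modprobe br_netfilter: %v: %s", err, out)
            return err
        }
        // Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
        return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
    }

    func main() {
        if err := enableBridgeNetfilterAndForwarding(); err != nil {
            log.Fatal(err)
        }
    }
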
	I0327 23:56:06.625164 1086621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:56:06.756049 1086621 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0327 23:56:06.903539 1086621 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0327 23:56:06.903631 1086621 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0327 23:56:06.908884 1086621 start.go:562] Will wait 60s for crictl version
	I0327 23:56:06.908961 1086621 ssh_runner.go:195] Run: which crictl
	I0327 23:56:06.912999 1086621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0327 23:56:06.955867 1086621 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0327 23:56:06.955975 1086621 ssh_runner.go:195] Run: crio --version
	I0327 23:56:06.986524 1086621 ssh_runner.go:195] Run: crio --version
	I0327 23:56:07.018085 1086621 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0327 23:56:07.019735 1086621 out.go:177]   - env NO_PROXY=192.168.39.47
	I0327 23:56:07.021076 1086621 out.go:177]   - env NO_PROXY=192.168.39.47,192.168.39.117
	I0327 23:56:07.022196 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetIP
	I0327 23:56:07.025082 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:07.025528 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:07.025558 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:07.025799 1086621 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0327 23:56:07.030288 1086621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
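
The grep/echo rewrite above makes host.minikube.internal resolve to the gateway address, dropping any stale entry first so the step is idempotent. A plain Go rendering of the same rewrite (hosts path, IP, and hostname taken from the command; the suffix match is simplified):

    package main

    import (
        "log"
        "os"
        "strings"
    )

    // ensureHostsEntry rewrites hostsPath so it contains exactly one line
    // mapping host to ip, dropping any previous line for the same host.
    func ensureHostsEntry(hostsPath, ip, host string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // drop the stale entry
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
            log.Fatal(err)
        }
    }
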
	I0327 23:56:07.045396 1086621 mustload.go:65] Loading cluster: ha-377576
	I0327 23:56:07.045641 1086621 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 23:56:07.045894 1086621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:56:07.045933 1086621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:56:07.062119 1086621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44963
	I0327 23:56:07.062684 1086621 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:56:07.063307 1086621 main.go:141] libmachine: Using API Version  1
	I0327 23:56:07.063328 1086621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:56:07.063689 1086621 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:56:07.063913 1086621 main.go:141] libmachine: (ha-377576) Calling .GetState
	I0327 23:56:07.065492 1086621 host.go:66] Checking if "ha-377576" exists ...
	I0327 23:56:07.065774 1086621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:56:07.065813 1086621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:56:07.081401 1086621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40127
	I0327 23:56:07.081869 1086621 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:56:07.082398 1086621 main.go:141] libmachine: Using API Version  1
	I0327 23:56:07.082422 1086621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:56:07.082766 1086621 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:56:07.082970 1086621 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0327 23:56:07.083177 1086621 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576 for IP: 192.168.39.101
	I0327 23:56:07.083197 1086621 certs.go:194] generating shared ca certs ...
	I0327 23:56:07.083217 1086621 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:56:07.083349 1086621 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0327 23:56:07.083385 1086621 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0327 23:56:07.083395 1086621 certs.go:256] generating profile certs ...
	I0327 23:56:07.083464 1086621 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/client.key
	I0327 23:56:07.083490 1086621 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.14ab1afe
	I0327 23:56:07.083506 1086621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.14ab1afe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.47 192.168.39.117 192.168.39.101 192.168.39.254]
	I0327 23:56:07.233689 1086621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.14ab1afe ...
	I0327 23:56:07.233725 1086621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.14ab1afe: {Name:mke646c03fbf55548f1277ba55ee1c517a259751 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:56:07.233948 1086621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.14ab1afe ...
	I0327 23:56:07.233968 1086621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.14ab1afe: {Name:mkbe768d663de231129cf0d33824155d9f1fcace Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:56:07.234070 1086621 certs.go:381] copying /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.14ab1afe -> /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt
	I0327 23:56:07.234215 1086621 certs.go:385] copying /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.14ab1afe -> /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key
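
The apiserver certificate is regenerated here with an IP SAN list that now includes the new node (192.168.39.101) alongside the existing control-plane IPs and the shared VIP (192.168.39.254), so a client reaching any control-plane address sees a valid certificate. A short crypto/x509 sketch that merely builds a self-signed certificate with that SAN list; the real cert is signed by the cluster CA, and the key type, validity, and file handling here are illustrative assumptions:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SAN list from the log: service IPs, localhost, node IPs and the HA VIP.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("192.168.39.47"), net.ParseIP("192.168.39.117"),
                net.ParseIP("192.168.39.101"), net.ParseIP("192.168.39.254"),
            },
        }
        // Self-signed for brevity; minikube issues this cert from its cluster CA instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
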
	I0327 23:56:07.234387 1086621 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.key
	I0327 23:56:07.234407 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0327 23:56:07.234419 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0327 23:56:07.234432 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0327 23:56:07.234445 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0327 23:56:07.234460 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0327 23:56:07.234473 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0327 23:56:07.234485 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0327 23:56:07.234498 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0327 23:56:07.234545 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0327 23:56:07.234575 1086621 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0327 23:56:07.234584 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0327 23:56:07.234605 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0327 23:56:07.234629 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0327 23:56:07.234651 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0327 23:56:07.234692 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0327 23:56:07.234718 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> /usr/share/ca-certificates/10765222.pem
	I0327 23:56:07.234732 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:56:07.234745 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem -> /usr/share/ca-certificates/1076522.pem
	I0327 23:56:07.234780 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:56:07.238242 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:56:07.238664 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:56:07.238695 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:56:07.238893 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:56:07.239133 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:56:07.239313 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:56:07.239475 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0327 23:56:07.310644 1086621 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0327 23:56:07.316568 1086621 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0327 23:56:07.334530 1086621 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0327 23:56:07.341938 1086621 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0327 23:56:07.357980 1086621 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0327 23:56:07.366300 1086621 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0327 23:56:07.380609 1086621 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0327 23:56:07.386934 1086621 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0327 23:56:07.399464 1086621 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0327 23:56:07.404668 1086621 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0327 23:56:07.417488 1086621 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0327 23:56:07.421948 1086621 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0327 23:56:07.433016 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0327 23:56:07.458895 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0327 23:56:07.484924 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0327 23:56:07.511942 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0327 23:56:07.538914 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0327 23:56:07.565987 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0327 23:56:07.595806 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0327 23:56:07.621826 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0327 23:56:07.648539 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0327 23:56:07.674897 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0327 23:56:07.700547 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0327 23:56:07.728082 1086621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0327 23:56:07.746342 1086621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0327 23:56:07.765427 1086621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0327 23:56:07.786146 1086621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0327 23:56:07.805152 1086621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0327 23:56:07.823638 1086621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0327 23:56:07.842581 1086621 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0327 23:56:07.861267 1086621 ssh_runner.go:195] Run: openssl version
	I0327 23:56:07.867702 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0327 23:56:07.879425 1086621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0327 23:56:07.884319 1086621 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0327 23:56:07.884381 1086621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0327 23:56:07.890545 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0327 23:56:07.903427 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0327 23:56:07.915107 1086621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:56:07.921670 1086621 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:56:07.921740 1086621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:56:07.928119 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0327 23:56:07.940843 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0327 23:56:07.952114 1086621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0327 23:56:07.957534 1086621 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0327 23:56:07.957630 1086621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0327 23:56:07.964211 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0327 23:56:07.976813 1086621 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0327 23:56:07.981522 1086621 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0327 23:56:07.981586 1086621 kubeadm.go:928] updating node {m03 192.168.39.101 8443 v1.29.3 crio true true} ...
	I0327 23:56:07.981675 1086621 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-377576-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-377576 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0327 23:56:07.981701 1086621 kube-vip.go:111] generating kube-vip config ...
	I0327 23:56:07.981734 1086621 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0327 23:56:07.998559 1086621 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0327 23:56:07.998658 1086621 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
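
Because kube-vip v0.7.1 is in use, control-plane load balancing is auto-enabled in the generated static-pod manifest above (the lb_enable/lb_port env entries), and the VIP itself is the `address` value 192.168.39.254. A small sketch that parses such a manifest and lists the env settings with gopkg.in/yaml.v3; purely illustrative, and the embedded manifest below is a trimmed copy, not how minikube renders its template:

    package main

    import (
        "fmt"
        "log"

        "gopkg.in/yaml.v3"
    )

    // Trimmed-down copy of the manifest above; only the fields we inspect.
    const manifest = `
    spec:
      containers:
      - name: kube-vip
        image: ghcr.io/kube-vip/kube-vip:v0.7.1
        env:
        - name: address
          value: 192.168.39.254
        - name: lb_enable
          value: "true"
        - name: lb_port
          value: "8443"
    `

    type pod struct {
        Spec struct {
            Containers []struct {
                Name  string `yaml:"name"`
                Image string `yaml:"image"`
                Env   []struct {
                    Name  string `yaml:"name"`
                    Value string `yaml:"value"`
                } `yaml:"env"`
            } `yaml:"containers"`
        } `yaml:"spec"`
    }

    func main() {
        var p pod
        if err := yaml.Unmarshal([]byte(manifest), &p); err != nil {
            log.Fatal(err)
        }
        for _, c := range p.Spec.Containers {
            fmt.Println("container:", c.Name, c.Image)
            for _, e := range c.Env {
                fmt.Printf("  %s=%s\n", e.Name, e.Value)
            }
        }
    }
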
	I0327 23:56:07.998727 1086621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0327 23:56:08.010506 1086621 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0327 23:56:08.010577 1086621 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0327 23:56:08.022366 1086621 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0327 23:56:08.022394 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/linux/amd64/v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0327 23:56:08.022399 1086621 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256
	I0327 23:56:08.022405 1086621 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256
	I0327 23:56:08.022422 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/linux/amd64/v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0327 23:56:08.022451 1086621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 23:56:08.022468 1086621 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0327 23:56:08.022490 1086621 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0327 23:56:08.038196 1086621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0327 23:56:08.038222 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/linux/amd64/v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0327 23:56:08.038254 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/linux/amd64/v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0327 23:56:08.038278 1086621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0327 23:56:08.038310 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/linux/amd64/v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0327 23:56:08.038319 1086621 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0327 23:56:08.068862 1086621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0327 23:56:08.068906 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/linux/amd64/v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
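
Each of the kubectl/kubeadm/kubelet binaries above is fetched from dl.k8s.io with a `?checksum=file:...sha256` suffix, meaning the download is verified against the published SHA-256 before it is copied into /var/lib/minikube/binaries. A hedged sketch of that verify-then-install step using only the standard library (URLs from the log; the checksum query-string syntax in the log belongs to a downloader library and is not reproduced here):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "log"
        "net/http"
        "os"
        "strings"
    )

    func fetch(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
        }
        return io.ReadAll(resp.Body)
    }

    // downloadWithSHA256 fetches url, verifies it against the hex digest published
    // at shaURL (first whitespace-separated field), and writes it to dest.
    func downloadWithSHA256(url, shaURL, dest string) error {
        want, err := fetch(shaURL)
        if err != nil {
            return err
        }
        wantHex := strings.Fields(string(want))[0]

        body, err := fetch(url)
        if err != nil {
            return err
        }
        sum := sha256.Sum256(body)
        if got := hex.EncodeToString(sum[:]); got != wantHex {
            return fmt.Errorf("checksum mismatch: got %s want %s", got, wantHex)
        }
        return os.WriteFile(dest, body, 0o755)
    }

    func main() {
        const base = "https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm"
        if err := downloadWithSHA256(base, base+".sha256", "/var/lib/minikube/binaries/v1.29.3/kubeadm"); err != nil {
            log.Fatal(err)
        }
    }
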
	I0327 23:56:09.084569 1086621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0327 23:56:09.095546 1086621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0327 23:56:09.113313 1086621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0327 23:56:09.130871 1086621 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0327 23:56:09.148175 1086621 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0327 23:56:09.152365 1086621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 23:56:09.165319 1086621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:56:09.303285 1086621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 23:56:09.321947 1086621 host.go:66] Checking if "ha-377576" exists ...
	I0327 23:56:09.322376 1086621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:56:09.322428 1086621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:56:09.337982 1086621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36531
	I0327 23:56:09.338573 1086621 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:56:09.339164 1086621 main.go:141] libmachine: Using API Version  1
	I0327 23:56:09.339191 1086621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:56:09.339526 1086621 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:56:09.339731 1086621 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0327 23:56:09.339909 1086621 start.go:316] joinCluster: &{Name:ha-377576 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cluster
Name:ha-377576 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 23:56:09.340107 1086621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0327 23:56:09.340138 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:56:09.343370 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:56:09.343988 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:56:09.344019 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:56:09.344167 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:56:09.344343 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:56:09.344535 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:56:09.344696 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0327 23:56:09.522187 1086621 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0327 23:56:09.522260 1086621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token no2fhc.v651hn034bq9oi06 --discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-377576-m03 --control-plane --apiserver-advertise-address=192.168.39.101 --apiserver-bind-port=8443"
	I0327 23:56:35.655702 1086621 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token no2fhc.v651hn034bq9oi06 --discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-377576-m03 --control-plane --apiserver-advertise-address=192.168.39.101 --apiserver-bind-port=8443": (26.133413587s)
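
The join itself is a single kubeadm invocation run over SSH on the new machine, with PATH pointed at the staged binaries and the advertise address set to the node's own IP; here it completes in about 26 seconds. Running the same command locally from Go might look like the sketch below (the token and CA-cert hash are placeholders, not values to reuse):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // Placeholders: a real token/hash comes from `kubeadm token create --print-join-command`.
        args := []string{
            "join", "control-plane.minikube.internal:8443",
            "--token", "<token>",
            "--discovery-token-ca-cert-hash", "sha256:<hash>",
            "--ignore-preflight-errors=all",
            "--cri-socket", "unix:///var/run/crio/crio.sock",
            "--node-name=ha-377576-m03",
            "--control-plane",
            "--apiserver-advertise-address=192.168.39.101",
            "--apiserver-bind-port=8443",
        }
        cmd := exec.Command("/var/lib/minikube/binaries/v1.29.3/kubeadm", args...)
        cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.29.3:"+os.Getenv("PATH"))
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }
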
	I0327 23:56:35.655756 1086621 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0327 23:56:36.165787 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-377576-m03 minikube.k8s.io/updated_at=2024_03_27T23_56_36_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=ha-377576 minikube.k8s.io/primary=false
	I0327 23:56:36.332599 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-377576-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0327 23:56:36.442969 1086621 start.go:318] duration metric: took 27.103052401s to joinCluster
	I0327 23:56:36.443060 1086621 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0327 23:56:36.444602 1086621 out.go:177] * Verifying Kubernetes components...
	I0327 23:56:36.443567 1086621 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 23:56:36.446447 1086621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:56:36.654784 1086621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 23:56:36.681979 1086621 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0327 23:56:36.682394 1086621 kapi.go:59] client config for ha-377576: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/client.crt", KeyFile:"/home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/client.key", CAFile:"/home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]strin
g(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c58000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0327 23:56:36.682490 1086621 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.47:8443
	I0327 23:56:36.682806 1086621 node_ready.go:35] waiting up to 6m0s for node "ha-377576-m03" to be "Ready" ...
	I0327 23:56:36.682902 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:36.682915 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:36.682926 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:36.682932 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:36.686557 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:37.183707 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:37.183729 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:37.183737 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:37.183740 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:37.189182 1086621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:56:37.684008 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:37.684034 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:37.684045 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:37.684052 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:37.688293 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:38.183310 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:38.183341 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:38.183353 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:38.183363 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:38.187056 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:38.683957 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:38.683996 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:38.684008 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:38.684017 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:38.688792 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:38.690433 1086621 node_ready.go:53] node "ha-377576-m03" has status "Ready":"False"
	I0327 23:56:39.183582 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:39.183615 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:39.183628 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:39.183634 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:39.188139 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:39.683378 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:39.683405 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:39.683413 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:39.683416 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:39.688006 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:40.183174 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:40.183209 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:40.183221 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:40.183226 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:40.186909 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:40.684015 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:40.684046 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:40.684058 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:40.684063 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:40.694808 1086621 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0327 23:56:40.695512 1086621 node_ready.go:53] node "ha-377576-m03" has status "Ready":"False"
	I0327 23:56:41.183500 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:41.183525 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:41.183532 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:41.183537 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:41.187765 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:41.683582 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:41.683620 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:41.683630 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:41.683635 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:41.687754 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:42.183372 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:42.183403 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:42.183416 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:42.183420 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:42.187955 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:42.683349 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:42.683374 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:42.683383 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:42.683387 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:42.687396 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:42.688590 1086621 node_ready.go:49] node "ha-377576-m03" has status "Ready":"True"
	I0327 23:56:42.688611 1086621 node_ready.go:38] duration metric: took 6.005786492s for node "ha-377576-m03" to be "Ready" ...
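
The readiness wait above polls GET /api/v1/nodes/ha-377576-m03 roughly every 500 ms until the NodeReady condition turns True, which here takes about 6 seconds. A client-go sketch of that kind of wait; the kubeconfig path is an assumption, since minikube builds its rest.Config in-process as shown later in the log:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeIsReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path is an assumption
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        for {
            n, err := cs.CoreV1().Nodes().Get(ctx, "ha-377576-m03", metav1.GetOptions{})
            if err == nil && nodeIsReady(n) {
                fmt.Println("node is Ready")
                return
            }
            select {
            case <-ctx.Done():
                log.Fatal("timed out waiting for node to become Ready")
            case <-time.After(500 * time.Millisecond):
            }
        }
    }
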
	I0327 23:56:42.688621 1086621 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0327 23:56:42.688679 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods
	I0327 23:56:42.688688 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:42.688695 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:42.688702 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:42.698024 1086621 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0327 23:56:42.705114 1086621 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-47npx" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:42.705197 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-47npx
	I0327 23:56:42.705206 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:42.705213 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:42.705218 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:42.708131 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:56:42.709135 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:56:42.709153 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:42.709162 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:42.709168 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:42.712094 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:56:42.712871 1086621 pod_ready.go:92] pod "coredns-76f75df574-47npx" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:42.712889 1086621 pod_ready.go:81] duration metric: took 7.750876ms for pod "coredns-76f75df574-47npx" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:42.712898 1086621 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-msv9s" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:42.712950 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-msv9s
	I0327 23:56:42.712958 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:42.712965 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:42.712969 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:42.715709 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:56:42.716446 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:56:42.716465 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:42.716473 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:42.716478 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:42.719444 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:56:42.720077 1086621 pod_ready.go:92] pod "coredns-76f75df574-msv9s" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:42.720100 1086621 pod_ready.go:81] duration metric: took 7.195082ms for pod "coredns-76f75df574-msv9s" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:42.720113 1086621 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:42.720181 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576
	I0327 23:56:42.720193 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:42.720202 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:42.720208 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:42.723109 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:56:42.723873 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:56:42.723889 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:42.723898 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:42.723905 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:42.726829 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:56:42.727423 1086621 pod_ready.go:92] pod "etcd-ha-377576" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:42.727443 1086621 pod_ready.go:81] duration metric: took 7.323127ms for pod "etcd-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:42.727453 1086621 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:42.727510 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m02
	I0327 23:56:42.727522 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:42.727531 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:42.727536 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:42.730683 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:42.731252 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:56:42.731265 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:42.731274 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:42.731282 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:42.734162 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:56:42.734803 1086621 pod_ready.go:92] pod "etcd-ha-377576-m02" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:42.734821 1086621 pod_ready.go:81] duration metric: took 7.362639ms for pod "etcd-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:42.734832 1086621 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-377576-m03" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:42.884173 1086621 request.go:629] Waited for 149.266045ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:42.884264 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:42.884269 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:42.884277 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:42.884283 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:42.887842 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:43.083824 1086621 request.go:629] Waited for 195.361102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:43.083927 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:43.083936 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:43.083950 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:43.083958 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:43.087289 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:43.284015 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:43.284043 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:43.284055 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:43.284060 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:43.288108 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:43.484181 1086621 request.go:629] Waited for 195.302432ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:43.484244 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:43.484251 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:43.484261 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:43.484270 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:43.488262 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:43.735353 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:43.735377 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:43.735387 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:43.735393 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:43.739458 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:43.883694 1086621 request.go:629] Waited for 143.301815ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:43.883772 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:43.883789 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:43.883801 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:43.883812 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:43.887912 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:44.235735 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:44.235761 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:44.235770 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:44.235775 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:44.240631 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:44.284049 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:44.284076 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:44.284085 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:44.284089 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:44.287503 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:44.735290 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:44.735319 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:44.735328 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:44.735335 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:44.741307 1086621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:56:44.742133 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:44.742149 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:44.742156 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:44.742160 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:44.745268 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:44.745776 1086621 pod_ready.go:102] pod "etcd-ha-377576-m03" in "kube-system" namespace has status "Ready":"False"
	I0327 23:56:45.235186 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:45.235212 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:45.235220 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:45.235227 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:45.238958 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:45.239781 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:45.239799 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:45.239810 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:45.239814 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:45.242801 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:56:45.735193 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:45.735222 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:45.735230 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:45.735234 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:45.739378 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:45.740482 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:45.740499 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:45.740508 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:45.740512 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:45.743836 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:46.235737 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:46.235768 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:46.235781 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:46.235787 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:46.240205 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:46.241588 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:46.241604 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:46.241611 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:46.241617 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:46.244827 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:46.735709 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:46.735745 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:46.735755 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:46.735763 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:46.739633 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:46.740501 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:46.740521 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:46.740531 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:46.740536 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:46.744441 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:47.235859 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:47.235886 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:47.235894 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:47.235898 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:47.239551 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:47.240375 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:47.240394 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:47.240403 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:47.240409 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:47.245574 1086621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:56:47.246083 1086621 pod_ready.go:102] pod "etcd-ha-377576-m03" in "kube-system" namespace has status "Ready":"False"
	I0327 23:56:47.735884 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:47.735911 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:47.735920 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:47.735923 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:47.740832 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:47.741463 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:47.741479 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:47.741487 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:47.741492 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:47.744811 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:48.235433 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:48.235462 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:48.235473 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:48.235479 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:48.239406 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:48.240442 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:48.240459 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:48.240466 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:48.240471 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:48.244012 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:48.736034 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:48.736064 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:48.736076 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:48.736083 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:48.740226 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:48.741114 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:48.741134 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:48.741141 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:48.741147 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:48.744670 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:49.235562 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:49.235591 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:49.235603 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:49.235607 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:49.239989 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:49.240783 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:49.240804 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:49.240815 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:49.240823 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:49.244325 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:49.735747 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:49.735776 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:49.735787 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:49.735791 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:49.739978 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:49.741049 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:49.741066 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:49.741073 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:49.741076 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:49.743872 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:56:49.744665 1086621 pod_ready.go:102] pod "etcd-ha-377576-m03" in "kube-system" namespace has status "Ready":"False"
	I0327 23:56:50.235994 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:50.236021 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:50.236029 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:50.236034 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:50.240362 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:50.241161 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:50.241178 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:50.241185 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:50.241191 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:50.245427 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:50.246393 1086621 pod_ready.go:92] pod "etcd-ha-377576-m03" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:50.246419 1086621 pod_ready.go:81] duration metric: took 7.511577614s for pod "etcd-ha-377576-m03" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:50.246446 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:50.246532 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-377576
	I0327 23:56:50.246541 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:50.246550 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:50.246554 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:50.250397 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:50.251162 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:56:50.251178 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:50.251186 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:50.251192 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:50.255238 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:50.255873 1086621 pod_ready.go:92] pod "kube-apiserver-ha-377576" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:50.255897 1086621 pod_ready.go:81] duration metric: took 9.436535ms for pod "kube-apiserver-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:50.255911 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:50.255993 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-377576-m02
	I0327 23:56:50.256009 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:50.256021 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:50.256030 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:50.259572 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:50.260136 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:56:50.260151 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:50.260161 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:50.260165 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:50.264120 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:50.264638 1086621 pod_ready.go:92] pod "kube-apiserver-ha-377576-m02" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:50.264660 1086621 pod_ready.go:81] duration metric: took 8.741632ms for pod "kube-apiserver-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:50.264673 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-377576-m03" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:50.264742 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-377576-m03
	I0327 23:56:50.264751 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:50.264759 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:50.264766 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:50.270019 1086621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:56:50.283571 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:50.283594 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:50.283605 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:50.283610 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:50.288806 1086621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:56:50.289465 1086621 pod_ready.go:92] pod "kube-apiserver-ha-377576-m03" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:50.289487 1086621 pod_ready.go:81] duration metric: took 24.804888ms for pod "kube-apiserver-ha-377576-m03" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:50.289503 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:50.483990 1086621 request.go:629] Waited for 194.372281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-377576
	I0327 23:56:50.484093 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-377576
	I0327 23:56:50.484106 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:50.484115 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:50.484125 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:50.488203 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:50.683373 1086621 request.go:629] Waited for 194.304643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:56:50.683448 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:56:50.683455 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:50.683465 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:50.683473 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:50.687353 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:50.688064 1086621 pod_ready.go:92] pod "kube-controller-manager-ha-377576" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:50.688098 1086621 pod_ready.go:81] duration metric: took 398.584298ms for pod "kube-controller-manager-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:50.688115 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:50.884180 1086621 request.go:629] Waited for 195.982065ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-377576-m02
	I0327 23:56:50.884255 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-377576-m02
	I0327 23:56:50.884261 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:50.884269 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:50.884272 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:50.887827 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:51.083494 1086621 request.go:629] Waited for 194.90192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:56:51.083988 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:56:51.084003 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:51.084186 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:51.084198 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:51.088830 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:51.089375 1086621 pod_ready.go:92] pod "kube-controller-manager-ha-377576-m02" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:51.089398 1086621 pod_ready.go:81] duration metric: took 401.273088ms for pod "kube-controller-manager-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:51.089408 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-377576-m03" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:51.283667 1086621 request.go:629] Waited for 194.168427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-377576-m03
	I0327 23:56:51.283749 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-377576-m03
	I0327 23:56:51.283756 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:51.283765 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:51.283774 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:51.286638 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:56:51.483792 1086621 request.go:629] Waited for 196.379227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:51.483859 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:51.483864 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:51.483871 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:51.483874 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:51.487811 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:51.488590 1086621 pod_ready.go:92] pod "kube-controller-manager-ha-377576-m03" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:51.488611 1086621 pod_ready.go:81] duration metric: took 399.195466ms for pod "kube-controller-manager-ha-377576-m03" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:51.488622 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4t77p" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:51.683627 1086621 request.go:629] Waited for 194.930641ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4t77p
	I0327 23:56:51.683690 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4t77p
	I0327 23:56:51.683695 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:51.683703 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:51.683708 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:51.687572 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:51.884242 1086621 request.go:629] Waited for 195.42626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:56:51.884322 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:56:51.884330 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:51.884341 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:51.884346 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:51.888227 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:51.888882 1086621 pod_ready.go:92] pod "kube-proxy-4t77p" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:51.888901 1086621 pod_ready.go:81] duration metric: took 400.273136ms for pod "kube-proxy-4t77p" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:51.888911 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5plfq" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:52.084424 1086621 request.go:629] Waited for 195.429144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5plfq
	I0327 23:56:52.084497 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5plfq
	I0327 23:56:52.084505 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:52.084515 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:52.084525 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:52.088288 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:52.284352 1086621 request.go:629] Waited for 195.327151ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:52.284437 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:52.284445 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:52.284456 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:52.284463 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:52.288568 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:52.289385 1086621 pod_ready.go:92] pod "kube-proxy-5plfq" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:52.289410 1086621 pod_ready.go:81] duration metric: took 400.492143ms for pod "kube-proxy-5plfq" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:52.289424 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k9dcr" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:52.483441 1086621 request.go:629] Waited for 193.93715ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k9dcr
	I0327 23:56:52.483505 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k9dcr
	I0327 23:56:52.483510 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:52.483518 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:52.483523 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:52.487367 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:52.684256 1086621 request.go:629] Waited for 196.267273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:56:52.684340 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:56:52.684348 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:52.684360 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:52.684370 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:52.694392 1086621 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0327 23:56:52.694936 1086621 pod_ready.go:92] pod "kube-proxy-k9dcr" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:52.694955 1086621 pod_ready.go:81] duration metric: took 405.5237ms for pod "kube-proxy-k9dcr" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:52.694964 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:52.884113 1086621 request.go:629] Waited for 189.030906ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-377576
	I0327 23:56:52.884204 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-377576
	I0327 23:56:52.884216 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:52.884232 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:52.884242 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:52.888368 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:53.083460 1086621 request.go:629] Waited for 194.309059ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:56:53.083544 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:56:53.083554 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:53.083564 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:53.083590 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:53.088058 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:53.088783 1086621 pod_ready.go:92] pod "kube-scheduler-ha-377576" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:53.088810 1086621 pod_ready.go:81] duration metric: took 393.835456ms for pod "kube-scheduler-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:53.088823 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:53.284265 1086621 request.go:629] Waited for 195.327926ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-377576-m02
	I0327 23:56:53.284398 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-377576-m02
	I0327 23:56:53.284410 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:53.284418 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:53.284422 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:53.288213 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:53.483411 1086621 request.go:629] Waited for 194.300711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:56:53.483498 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:56:53.483508 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:53.483518 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:53.483524 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:53.487515 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:53.488080 1086621 pod_ready.go:92] pod "kube-scheduler-ha-377576-m02" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:53.488101 1086621 pod_ready.go:81] duration metric: took 399.271123ms for pod "kube-scheduler-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:53.488111 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-377576-m03" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:53.684190 1086621 request.go:629] Waited for 195.974352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-377576-m03
	I0327 23:56:53.684277 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-377576-m03
	I0327 23:56:53.684286 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:53.684299 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:53.684313 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:53.690015 1086621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:56:53.884143 1086621 request.go:629] Waited for 193.097337ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:53.884219 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:53.884228 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:53.884241 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:53.884251 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:53.888422 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:53.889239 1086621 pod_ready.go:92] pod "kube-scheduler-ha-377576-m03" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:53.889264 1086621 pod_ready.go:81] duration metric: took 401.14261ms for pod "kube-scheduler-ha-377576-m03" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:53.889275 1086621 pod_ready.go:38] duration metric: took 11.200644288s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0327 23:56:53.889292 1086621 api_server.go:52] waiting for apiserver process to appear ...
	I0327 23:56:53.889346 1086621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 23:56:53.907707 1086621 api_server.go:72] duration metric: took 17.464605805s to wait for apiserver process to appear ...
	I0327 23:56:53.907737 1086621 api_server.go:88] waiting for apiserver healthz status ...
	I0327 23:56:53.907801 1086621 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I0327 23:56:53.914314 1086621 api_server.go:279] https://192.168.39.47:8443/healthz returned 200:
	ok
	I0327 23:56:53.914425 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/version
	I0327 23:56:53.914436 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:53.914446 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:53.914452 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:53.915710 1086621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0327 23:56:53.915790 1086621 api_server.go:141] control plane version: v1.29.3
	I0327 23:56:53.915808 1086621 api_server.go:131] duration metric: took 8.063524ms to wait for apiserver health ...
	I0327 23:56:53.915819 1086621 system_pods.go:43] waiting for kube-system pods to appear ...
	I0327 23:56:54.084201 1086621 request.go:629] Waited for 168.294038ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods
	I0327 23:56:54.084292 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods
	I0327 23:56:54.084304 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:54.084316 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:54.084337 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:54.092921 1086621 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0327 23:56:54.099815 1086621 system_pods.go:59] 24 kube-system pods found
	I0327 23:56:54.099850 1086621 system_pods.go:61] "coredns-76f75df574-47npx" [968d63e4-f44a-4e52-b6c0-04e0ed1a068e] Running
	I0327 23:56:54.099856 1086621 system_pods.go:61] "coredns-76f75df574-msv9s" [7c549358-2f35-4345-aa7a-8bbbcfc4ef01] Running
	I0327 23:56:54.099862 1086621 system_pods.go:61] "etcd-ha-377576" [885cacaa-1b61-4f8b-90b5-3f7dbc9df4ad] Running
	I0327 23:56:54.099868 1086621 system_pods.go:61] "etcd-ha-377576-m02" [c3fa0266-db99-4bf1-a3b4-2f050d69e2ff] Running
	I0327 23:56:54.099873 1086621 system_pods.go:61] "etcd-ha-377576-m03" [57afa52e-1e76-4e4d-8398-ef919c6e4905] Running
	I0327 23:56:54.099878 1086621 system_pods.go:61] "kindnet-5zmtk" [4e75cdc5-22da-47f2-9833-b2f4eaa9caac] Running
	I0327 23:56:54.099886 1086621 system_pods.go:61] "kindnet-6wmmc" [ef36a453-2352-47f7-8a75-72abc4004e82] Running
	I0327 23:56:54.099894 1086621 system_pods.go:61] "kindnet-n8fpn" [223f6537-8296-4147-b72e-da25c00ce693] Running
	I0327 23:56:54.099900 1086621 system_pods.go:61] "kube-apiserver-ha-377576" [a1a979ea-0199-4e24-af63-c79b32a66c0e] Running
	I0327 23:56:54.099909 1086621 system_pods.go:61] "kube-apiserver-ha-377576-m02" [516bd332-2602-4380-aac0-3fd71f0834cb] Running
	I0327 23:56:54.099914 1086621 system_pods.go:61] "kube-apiserver-ha-377576-m03" [a0cf529d-7e29-4df8-9d57-7fa331f256aa] Running
	I0327 23:56:54.099921 1086621 system_pods.go:61] "kube-controller-manager-ha-377576" [f72d4847-2902-4e1f-8852-bdcc020a6099] Running
	I0327 23:56:54.099930 1086621 system_pods.go:61] "kube-controller-manager-ha-377576-m02" [a3e945b8-d18c-434c-b8a7-70510fbce333] Running
	I0327 23:56:54.099935 1086621 system_pods.go:61] "kube-controller-manager-ha-377576-m03" [3d21c9a3-5ed2-4d74-8979-05be2cd7957c] Running
	I0327 23:56:54.099941 1086621 system_pods.go:61] "kube-proxy-4t77p" [27eff0c9-9b45-4530-aba9-1a5e0ca60802] Running
	I0327 23:56:54.099949 1086621 system_pods.go:61] "kube-proxy-5plfq" [7598b740-38ad-4c94-a1e2-0420818e60d1] Running
	I0327 23:56:54.099955 1086621 system_pods.go:61] "kube-proxy-k9dcr" [07c785f3-3b08-4f43-b957-5f4092f757ea] Running
	I0327 23:56:54.099964 1086621 system_pods.go:61] "kube-scheduler-ha-377576" [6b97a544-a0e8-4c35-b93c-197f200da53b] Running
	I0327 23:56:54.099973 1086621 system_pods.go:61] "kube-scheduler-ha-377576-m02" [91c25780-d677-4394-9624-31dfaec279c3] Running
	I0327 23:56:54.099979 1086621 system_pods.go:61] "kube-scheduler-ha-377576-m03" [dbbf81ca-9fea-410e-bbf2-c7e4eecb043d] Running
	I0327 23:56:54.099986 1086621 system_pods.go:61] "kube-vip-ha-377576" [2d4dd5f7-c798-4a52-97f5-4bc068603373] Running
	I0327 23:56:54.099992 1086621 system_pods.go:61] "kube-vip-ha-377576-m02" [dde68b43-553a-4d1b-ad7f-5284653080e4] Running
	I0327 23:56:54.099997 1086621 system_pods.go:61] "kube-vip-ha-377576-m03" [e03923bf-eed7-4645-8673-e81441d197dd] Running
	I0327 23:56:54.100003 1086621 system_pods.go:61] "storage-provisioner" [9000645c-8323-43af-bd87-011d1574493c] Running
	I0327 23:56:54.100015 1086621 system_pods.go:74] duration metric: took 184.185451ms to wait for pod list to return data ...
	I0327 23:56:54.100029 1086621 default_sa.go:34] waiting for default service account to be created ...
	I0327 23:56:54.283420 1086621 request.go:629] Waited for 183.266157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/default/serviceaccounts
	I0327 23:56:54.283515 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/default/serviceaccounts
	I0327 23:56:54.283523 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:54.283531 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:54.283536 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:54.287299 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:54.287441 1086621 default_sa.go:45] found service account: "default"
	I0327 23:56:54.287461 1086621 default_sa.go:55] duration metric: took 187.419615ms for default service account to be created ...
	I0327 23:56:54.287474 1086621 system_pods.go:116] waiting for k8s-apps to be running ...
	I0327 23:56:54.483401 1086621 request.go:629] Waited for 195.840614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods
	I0327 23:56:54.483479 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods
	I0327 23:56:54.483484 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:54.483493 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:54.483497 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:54.490936 1086621 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0327 23:56:54.498994 1086621 system_pods.go:86] 24 kube-system pods found
	I0327 23:56:54.499067 1086621 system_pods.go:89] "coredns-76f75df574-47npx" [968d63e4-f44a-4e52-b6c0-04e0ed1a068e] Running
	I0327 23:56:54.499081 1086621 system_pods.go:89] "coredns-76f75df574-msv9s" [7c549358-2f35-4345-aa7a-8bbbcfc4ef01] Running
	I0327 23:56:54.499092 1086621 system_pods.go:89] "etcd-ha-377576" [885cacaa-1b61-4f8b-90b5-3f7dbc9df4ad] Running
	I0327 23:56:54.499098 1086621 system_pods.go:89] "etcd-ha-377576-m02" [c3fa0266-db99-4bf1-a3b4-2f050d69e2ff] Running
	I0327 23:56:54.499108 1086621 system_pods.go:89] "etcd-ha-377576-m03" [57afa52e-1e76-4e4d-8398-ef919c6e4905] Running
	I0327 23:56:54.499116 1086621 system_pods.go:89] "kindnet-5zmtk" [4e75cdc5-22da-47f2-9833-b2f4eaa9caac] Running
	I0327 23:56:54.499121 1086621 system_pods.go:89] "kindnet-6wmmc" [ef36a453-2352-47f7-8a75-72abc4004e82] Running
	I0327 23:56:54.499131 1086621 system_pods.go:89] "kindnet-n8fpn" [223f6537-8296-4147-b72e-da25c00ce693] Running
	I0327 23:56:54.499140 1086621 system_pods.go:89] "kube-apiserver-ha-377576" [a1a979ea-0199-4e24-af63-c79b32a66c0e] Running
	I0327 23:56:54.499148 1086621 system_pods.go:89] "kube-apiserver-ha-377576-m02" [516bd332-2602-4380-aac0-3fd71f0834cb] Running
	I0327 23:56:54.499152 1086621 system_pods.go:89] "kube-apiserver-ha-377576-m03" [a0cf529d-7e29-4df8-9d57-7fa331f256aa] Running
	I0327 23:56:54.499159 1086621 system_pods.go:89] "kube-controller-manager-ha-377576" [f72d4847-2902-4e1f-8852-bdcc020a6099] Running
	I0327 23:56:54.499163 1086621 system_pods.go:89] "kube-controller-manager-ha-377576-m02" [a3e945b8-d18c-434c-b8a7-70510fbce333] Running
	I0327 23:56:54.499175 1086621 system_pods.go:89] "kube-controller-manager-ha-377576-m03" [3d21c9a3-5ed2-4d74-8979-05be2cd7957c] Running
	I0327 23:56:54.499186 1086621 system_pods.go:89] "kube-proxy-4t77p" [27eff0c9-9b45-4530-aba9-1a5e0ca60802] Running
	I0327 23:56:54.499199 1086621 system_pods.go:89] "kube-proxy-5plfq" [7598b740-38ad-4c94-a1e2-0420818e60d1] Running
	I0327 23:56:54.499208 1086621 system_pods.go:89] "kube-proxy-k9dcr" [07c785f3-3b08-4f43-b957-5f4092f757ea] Running
	I0327 23:56:54.499218 1086621 system_pods.go:89] "kube-scheduler-ha-377576" [6b97a544-a0e8-4c35-b93c-197f200da53b] Running
	I0327 23:56:54.499227 1086621 system_pods.go:89] "kube-scheduler-ha-377576-m02" [91c25780-d677-4394-9624-31dfaec279c3] Running
	I0327 23:56:54.499234 1086621 system_pods.go:89] "kube-scheduler-ha-377576-m03" [dbbf81ca-9fea-410e-bbf2-c7e4eecb043d] Running
	I0327 23:56:54.499238 1086621 system_pods.go:89] "kube-vip-ha-377576" [2d4dd5f7-c798-4a52-97f5-4bc068603373] Running
	I0327 23:56:54.499244 1086621 system_pods.go:89] "kube-vip-ha-377576-m02" [dde68b43-553a-4d1b-ad7f-5284653080e4] Running
	I0327 23:56:54.499248 1086621 system_pods.go:89] "kube-vip-ha-377576-m03" [e03923bf-eed7-4645-8673-e81441d197dd] Running
	I0327 23:56:54.499254 1086621 system_pods.go:89] "storage-provisioner" [9000645c-8323-43af-bd87-011d1574493c] Running
	I0327 23:56:54.499261 1086621 system_pods.go:126] duration metric: took 211.778744ms to wait for k8s-apps to be running ...
	I0327 23:56:54.499271 1086621 system_svc.go:44] waiting for kubelet service to be running ....
	I0327 23:56:54.499332 1086621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 23:56:54.516360 1086621 system_svc.go:56] duration metric: took 17.077136ms WaitForService to wait for kubelet
	I0327 23:56:54.516404 1086621 kubeadm.go:576] duration metric: took 18.073307914s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 23:56:54.516437 1086621 node_conditions.go:102] verifying NodePressure condition ...
	I0327 23:56:54.683906 1086621 request.go:629] Waited for 167.365703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes
	I0327 23:56:54.683980 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes
	I0327 23:56:54.683985 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:54.683993 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:54.683997 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:54.691589 1086621 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0327 23:56:54.692913 1086621 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0327 23:56:54.692939 1086621 node_conditions.go:123] node cpu capacity is 2
	I0327 23:56:54.692953 1086621 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0327 23:56:54.692958 1086621 node_conditions.go:123] node cpu capacity is 2
	I0327 23:56:54.692964 1086621 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0327 23:56:54.692975 1086621 node_conditions.go:123] node cpu capacity is 2
	I0327 23:56:54.692988 1086621 node_conditions.go:105] duration metric: took 176.544348ms to run NodePressure ...
	I0327 23:56:54.693004 1086621 start.go:240] waiting for startup goroutines ...
	I0327 23:56:54.693033 1086621 start.go:254] writing updated cluster config ...
	I0327 23:56:54.693376 1086621 ssh_runner.go:195] Run: rm -f paused
	I0327 23:56:54.755056 1086621 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0327 23:56:54.757341 1086621 out.go:177] * Done! kubectl is now configured to use "ha-377576" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 28 00:00:25 ha-377576 crio[682]: time="2024-03-28 00:00:25.252476011Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711584025252440334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dbdadc24-445a-4ea2-a815-8140e8850a1a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:00:25 ha-377576 crio[682]: time="2024-03-28 00:00:25.253299397Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fc37eff7-518e-4cad-883d-ed2dfd6a3be4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:00:25 ha-377576 crio[682]: time="2024-03-28 00:00:25.253354754Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fc37eff7-518e-4cad-883d-ed2dfd6a3be4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:00:25 ha-377576 crio[682]: time="2024-03-28 00:00:25.253655080Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc41f34db32bf53a65c6af8f9f9eae933fb349e5555060707ec7131ab3a7b835,PodSandboxId:d8bf33d99bda1de3168ad1efb04ca03a74a918d2ca74b585a7fcdb3a108d229e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711583818896676098,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-78c89,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3272474d-5490-4c7c-9dfe-ded8488ec32f,},Annotations:map[string]string{io.kubernetes.container.hash: f4bc3217,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5198968b76968a091c33806ec424495c03395675b20e7eb35e330d16217211,PodSandboxId:78b0408435c31eb2c100690d28760dd80d92d11a1204f91418812c69660e79a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711583597995379318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-47npx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968d63e4-f44a-4e52-b6c0-04e0ed1a068e,},Annotations:map[string]string{io.kubernetes.container.hash: d58e1b37,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed9a38e9f6cd98054d1e672c652e36e6262ceb6eb2a3e14911a5178d222135e7,PodSandboxId:906a95ca7b9309653761922034b8a2dfca231a15f3b955bb1c166ebd569149c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711583597982373668,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-msv9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 7c549358-2f35-4345-aa7a-8bbbcfc4ef01,},Annotations:map[string]string{io.kubernetes.container.hash: 76b7b3af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381348b1458cea236fc315e0a9a42d269c69969b162efaa25de894ac4284ba88,PodSandboxId:bfc67c80fc55899fa456134d4af2ac6fa90fd9b5f87f5a582d5200171283b55c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1711583597583161047,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9000645c-8323-43af-bd87-011d1574493c,},Annotations:map[string]string{io.kubernetes.container.hash: 43103468,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:196de4c982b9c19fd66bac5f3fa839489745805e12542f317a366989b520706b,PodSandboxId:cea371a7b82b947e5fb342214a35fe253cd152ca1e8ed5c6cc068b4d719ce55e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711583
595748903770,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zmtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e75cdc5-22da-47f2-9833-b2f4eaa9caac,},Annotations:map[string]string{io.kubernetes.container.hash: 6486d0c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a226f01452a72cf6e9a608450715a12f6663ccf2697e8259f809e966809978ce,PodSandboxId:3f1239e30a953ae16c917e002c235d64ba2b2b2f9165e939c99bda7578eae785,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711583595702313227,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t77p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27eff0c9-9b45-4530-aba9-1a5e0ca60802,},Annotations:map[string]string{io.kubernetes.container.hash: d0985273,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f28af42c6db4a0efe0547d4442478084f4054bb5d4d47038a8a7f727ec1044df,PodSandboxId:893f7358a6722bb051ca8e38bb9af692a62e5ff985eb3863a033431715b43128,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711583579138103960,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b447b16a29b2ea987c7683714523f85a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22d460b8d6582d93d5633e1e1af46683647a5632f6b9153f61a6c374dca4f34c,PodSandboxId:97aabc5fbaef976f88ad3764ab080be7aeab5127cc15d872fe7107cf3126e072,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711583575918296538,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d18f050adf42c0d971a9903270a7b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f113e7564c47f0e2aa57dbb0702acbf86b1d75d00e9210d72d606a1b0505e5b,PodSandboxId:ac57491c8945575ea35326a6b572c93ff05b65a9f7c2a1f53e465d0b97a5fe09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711583575841481160,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6490cbc40210ad634becf13ac3a1705,},Annotations:map[string]string{io.kubernetes.container.hash: bc2c0f48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0128cd878ebdae2c6e217413183a103417714ffe00d55c1c92d1353e38238fa,PodSandboxId:bbb9d168e952fe414d641f2ec6a7e1e63a04205d4e2aae1c6ad9a2756c6443ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711583575849937975,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name
: etcd-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab33d6840338638cbdcd9ebe5fdd4d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7359b186,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afbf14c176818ba796eacd41f1600b5f09075c6a2b4f1edcad903530221f76ff,PodSandboxId:b75106f2dccc70e077dfb09e12c05d0942a02fa2c4894a1bd68ec76ca144eed7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711583575809836577,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-377576,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9eaf884653411ba1f22eb4cdbdfa748,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fc37eff7-518e-4cad-883d-ed2dfd6a3be4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:00:25 ha-377576 crio[682]: time="2024-03-28 00:00:25.301865379Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=febff7fb-1dcb-4e53-b0ec-d5dd2db67333 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:00:25 ha-377576 crio[682]: time="2024-03-28 00:00:25.301970940Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=febff7fb-1dcb-4e53-b0ec-d5dd2db67333 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:00:25 ha-377576 crio[682]: time="2024-03-28 00:00:25.303443995Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d696f0d7-651a-4211-a936-9d3a77b0c14d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:00:25 ha-377576 crio[682]: time="2024-03-28 00:00:25.304147797Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711584025304120328,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d696f0d7-651a-4211-a936-9d3a77b0c14d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:00:25 ha-377576 crio[682]: time="2024-03-28 00:00:25.305441658Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5f95ca74-8329-41de-9b31-ed443ba12e88 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:00:25 ha-377576 crio[682]: time="2024-03-28 00:00:25.305564710Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5f95ca74-8329-41de-9b31-ed443ba12e88 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:00:25 ha-377576 crio[682]: time="2024-03-28 00:00:25.305830454Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc41f34db32bf53a65c6af8f9f9eae933fb349e5555060707ec7131ab3a7b835,PodSandboxId:d8bf33d99bda1de3168ad1efb04ca03a74a918d2ca74b585a7fcdb3a108d229e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711583818896676098,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-78c89,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3272474d-5490-4c7c-9dfe-ded8488ec32f,},Annotations:map[string]string{io.kubernetes.container.hash: f4bc3217,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5198968b76968a091c33806ec424495c03395675b20e7eb35e330d16217211,PodSandboxId:78b0408435c31eb2c100690d28760dd80d92d11a1204f91418812c69660e79a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711583597995379318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-47npx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968d63e4-f44a-4e52-b6c0-04e0ed1a068e,},Annotations:map[string]string{io.kubernetes.container.hash: d58e1b37,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed9a38e9f6cd98054d1e672c652e36e6262ceb6eb2a3e14911a5178d222135e7,PodSandboxId:906a95ca7b9309653761922034b8a2dfca231a15f3b955bb1c166ebd569149c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711583597982373668,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-msv9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 7c549358-2f35-4345-aa7a-8bbbcfc4ef01,},Annotations:map[string]string{io.kubernetes.container.hash: 76b7b3af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381348b1458cea236fc315e0a9a42d269c69969b162efaa25de894ac4284ba88,PodSandboxId:bfc67c80fc55899fa456134d4af2ac6fa90fd9b5f87f5a582d5200171283b55c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1711583597583161047,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9000645c-8323-43af-bd87-011d1574493c,},Annotations:map[string]string{io.kubernetes.container.hash: 43103468,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:196de4c982b9c19fd66bac5f3fa839489745805e12542f317a366989b520706b,PodSandboxId:cea371a7b82b947e5fb342214a35fe253cd152ca1e8ed5c6cc068b4d719ce55e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711583
595748903770,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zmtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e75cdc5-22da-47f2-9833-b2f4eaa9caac,},Annotations:map[string]string{io.kubernetes.container.hash: 6486d0c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a226f01452a72cf6e9a608450715a12f6663ccf2697e8259f809e966809978ce,PodSandboxId:3f1239e30a953ae16c917e002c235d64ba2b2b2f9165e939c99bda7578eae785,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711583595702313227,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t77p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27eff0c9-9b45-4530-aba9-1a5e0ca60802,},Annotations:map[string]string{io.kubernetes.container.hash: d0985273,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f28af42c6db4a0efe0547d4442478084f4054bb5d4d47038a8a7f727ec1044df,PodSandboxId:893f7358a6722bb051ca8e38bb9af692a62e5ff985eb3863a033431715b43128,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711583579138103960,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b447b16a29b2ea987c7683714523f85a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22d460b8d6582d93d5633e1e1af46683647a5632f6b9153f61a6c374dca4f34c,PodSandboxId:97aabc5fbaef976f88ad3764ab080be7aeab5127cc15d872fe7107cf3126e072,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711583575918296538,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d18f050adf42c0d971a9903270a7b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f113e7564c47f0e2aa57dbb0702acbf86b1d75d00e9210d72d606a1b0505e5b,PodSandboxId:ac57491c8945575ea35326a6b572c93ff05b65a9f7c2a1f53e465d0b97a5fe09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711583575841481160,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6490cbc40210ad634becf13ac3a1705,},Annotations:map[string]string{io.kubernetes.container.hash: bc2c0f48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0128cd878ebdae2c6e217413183a103417714ffe00d55c1c92d1353e38238fa,PodSandboxId:bbb9d168e952fe414d641f2ec6a7e1e63a04205d4e2aae1c6ad9a2756c6443ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711583575849937975,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name
: etcd-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab33d6840338638cbdcd9ebe5fdd4d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7359b186,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afbf14c176818ba796eacd41f1600b5f09075c6a2b4f1edcad903530221f76ff,PodSandboxId:b75106f2dccc70e077dfb09e12c05d0942a02fa2c4894a1bd68ec76ca144eed7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711583575809836577,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-377576,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9eaf884653411ba1f22eb4cdbdfa748,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5f95ca74-8329-41de-9b31-ed443ba12e88 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:00:25 ha-377576 crio[682]: time="2024-03-28 00:00:25.350876637Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b83803ac-807b-4caa-bd66-fc3e377458e6 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:00:25 ha-377576 crio[682]: time="2024-03-28 00:00:25.350963537Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b83803ac-807b-4caa-bd66-fc3e377458e6 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:00:25 ha-377576 crio[682]: time="2024-03-28 00:00:25.360069968Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8235362e-0b9e-4225-84d4-ccfac296891e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:00:25 ha-377576 crio[682]: time="2024-03-28 00:00:25.360650223Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711584025360623094,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8235362e-0b9e-4225-84d4-ccfac296891e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:00:25 ha-377576 crio[682]: time="2024-03-28 00:00:25.361395896Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=564177bb-f73d-45b4-a79e-83a38f45431e name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:00:25 ha-377576 crio[682]: time="2024-03-28 00:00:25.361573382Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=564177bb-f73d-45b4-a79e-83a38f45431e name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:00:25 ha-377576 crio[682]: time="2024-03-28 00:00:25.361841107Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc41f34db32bf53a65c6af8f9f9eae933fb349e5555060707ec7131ab3a7b835,PodSandboxId:d8bf33d99bda1de3168ad1efb04ca03a74a918d2ca74b585a7fcdb3a108d229e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711583818896676098,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-78c89,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3272474d-5490-4c7c-9dfe-ded8488ec32f,},Annotations:map[string]string{io.kubernetes.container.hash: f4bc3217,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5198968b76968a091c33806ec424495c03395675b20e7eb35e330d16217211,PodSandboxId:78b0408435c31eb2c100690d28760dd80d92d11a1204f91418812c69660e79a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711583597995379318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-47npx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968d63e4-f44a-4e52-b6c0-04e0ed1a068e,},Annotations:map[string]string{io.kubernetes.container.hash: d58e1b37,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed9a38e9f6cd98054d1e672c652e36e6262ceb6eb2a3e14911a5178d222135e7,PodSandboxId:906a95ca7b9309653761922034b8a2dfca231a15f3b955bb1c166ebd569149c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711583597982373668,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-msv9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 7c549358-2f35-4345-aa7a-8bbbcfc4ef01,},Annotations:map[string]string{io.kubernetes.container.hash: 76b7b3af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381348b1458cea236fc315e0a9a42d269c69969b162efaa25de894ac4284ba88,PodSandboxId:bfc67c80fc55899fa456134d4af2ac6fa90fd9b5f87f5a582d5200171283b55c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1711583597583161047,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9000645c-8323-43af-bd87-011d1574493c,},Annotations:map[string]string{io.kubernetes.container.hash: 43103468,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:196de4c982b9c19fd66bac5f3fa839489745805e12542f317a366989b520706b,PodSandboxId:cea371a7b82b947e5fb342214a35fe253cd152ca1e8ed5c6cc068b4d719ce55e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711583
595748903770,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zmtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e75cdc5-22da-47f2-9833-b2f4eaa9caac,},Annotations:map[string]string{io.kubernetes.container.hash: 6486d0c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a226f01452a72cf6e9a608450715a12f6663ccf2697e8259f809e966809978ce,PodSandboxId:3f1239e30a953ae16c917e002c235d64ba2b2b2f9165e939c99bda7578eae785,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711583595702313227,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t77p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27eff0c9-9b45-4530-aba9-1a5e0ca60802,},Annotations:map[string]string{io.kubernetes.container.hash: d0985273,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f28af42c6db4a0efe0547d4442478084f4054bb5d4d47038a8a7f727ec1044df,PodSandboxId:893f7358a6722bb051ca8e38bb9af692a62e5ff985eb3863a033431715b43128,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711583579138103960,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b447b16a29b2ea987c7683714523f85a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22d460b8d6582d93d5633e1e1af46683647a5632f6b9153f61a6c374dca4f34c,PodSandboxId:97aabc5fbaef976f88ad3764ab080be7aeab5127cc15d872fe7107cf3126e072,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711583575918296538,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d18f050adf42c0d971a9903270a7b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f113e7564c47f0e2aa57dbb0702acbf86b1d75d00e9210d72d606a1b0505e5b,PodSandboxId:ac57491c8945575ea35326a6b572c93ff05b65a9f7c2a1f53e465d0b97a5fe09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711583575841481160,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6490cbc40210ad634becf13ac3a1705,},Annotations:map[string]string{io.kubernetes.container.hash: bc2c0f48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0128cd878ebdae2c6e217413183a103417714ffe00d55c1c92d1353e38238fa,PodSandboxId:bbb9d168e952fe414d641f2ec6a7e1e63a04205d4e2aae1c6ad9a2756c6443ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711583575849937975,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name
: etcd-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab33d6840338638cbdcd9ebe5fdd4d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7359b186,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afbf14c176818ba796eacd41f1600b5f09075c6a2b4f1edcad903530221f76ff,PodSandboxId:b75106f2dccc70e077dfb09e12c05d0942a02fa2c4894a1bd68ec76ca144eed7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711583575809836577,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-377576,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9eaf884653411ba1f22eb4cdbdfa748,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=564177bb-f73d-45b4-a79e-83a38f45431e name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:00:25 ha-377576 crio[682]: time="2024-03-28 00:00:25.407584712Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9b98ad46-8e1d-4e6f-ad28-b133c1fdd106 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:00:25 ha-377576 crio[682]: time="2024-03-28 00:00:25.407667078Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9b98ad46-8e1d-4e6f-ad28-b133c1fdd106 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:00:25 ha-377576 crio[682]: time="2024-03-28 00:00:25.409166854Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5e683dd5-0d37-4230-97a2-4c8db04d493b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:00:25 ha-377576 crio[682]: time="2024-03-28 00:00:25.409678241Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711584025409649823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5e683dd5-0d37-4230-97a2-4c8db04d493b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:00:25 ha-377576 crio[682]: time="2024-03-28 00:00:25.410910431Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d66b161e-c6ce-4a94-9996-a53d799b7150 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:00:25 ha-377576 crio[682]: time="2024-03-28 00:00:25.410963684Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d66b161e-c6ce-4a94-9996-a53d799b7150 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:00:25 ha-377576 crio[682]: time="2024-03-28 00:00:25.411230666Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc41f34db32bf53a65c6af8f9f9eae933fb349e5555060707ec7131ab3a7b835,PodSandboxId:d8bf33d99bda1de3168ad1efb04ca03a74a918d2ca74b585a7fcdb3a108d229e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711583818896676098,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-78c89,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3272474d-5490-4c7c-9dfe-ded8488ec32f,},Annotations:map[string]string{io.kubernetes.container.hash: f4bc3217,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5198968b76968a091c33806ec424495c03395675b20e7eb35e330d16217211,PodSandboxId:78b0408435c31eb2c100690d28760dd80d92d11a1204f91418812c69660e79a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711583597995379318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-47npx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968d63e4-f44a-4e52-b6c0-04e0ed1a068e,},Annotations:map[string]string{io.kubernetes.container.hash: d58e1b37,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed9a38e9f6cd98054d1e672c652e36e6262ceb6eb2a3e14911a5178d222135e7,PodSandboxId:906a95ca7b9309653761922034b8a2dfca231a15f3b955bb1c166ebd569149c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711583597982373668,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-msv9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 7c549358-2f35-4345-aa7a-8bbbcfc4ef01,},Annotations:map[string]string{io.kubernetes.container.hash: 76b7b3af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381348b1458cea236fc315e0a9a42d269c69969b162efaa25de894ac4284ba88,PodSandboxId:bfc67c80fc55899fa456134d4af2ac6fa90fd9b5f87f5a582d5200171283b55c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1711583597583161047,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9000645c-8323-43af-bd87-011d1574493c,},Annotations:map[string]string{io.kubernetes.container.hash: 43103468,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:196de4c982b9c19fd66bac5f3fa839489745805e12542f317a366989b520706b,PodSandboxId:cea371a7b82b947e5fb342214a35fe253cd152ca1e8ed5c6cc068b4d719ce55e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711583
595748903770,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zmtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e75cdc5-22da-47f2-9833-b2f4eaa9caac,},Annotations:map[string]string{io.kubernetes.container.hash: 6486d0c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a226f01452a72cf6e9a608450715a12f6663ccf2697e8259f809e966809978ce,PodSandboxId:3f1239e30a953ae16c917e002c235d64ba2b2b2f9165e939c99bda7578eae785,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711583595702313227,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t77p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27eff0c9-9b45-4530-aba9-1a5e0ca60802,},Annotations:map[string]string{io.kubernetes.container.hash: d0985273,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f28af42c6db4a0efe0547d4442478084f4054bb5d4d47038a8a7f727ec1044df,PodSandboxId:893f7358a6722bb051ca8e38bb9af692a62e5ff985eb3863a033431715b43128,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711583579138103960,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b447b16a29b2ea987c7683714523f85a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22d460b8d6582d93d5633e1e1af46683647a5632f6b9153f61a6c374dca4f34c,PodSandboxId:97aabc5fbaef976f88ad3764ab080be7aeab5127cc15d872fe7107cf3126e072,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711583575918296538,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d18f050adf42c0d971a9903270a7b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f113e7564c47f0e2aa57dbb0702acbf86b1d75d00e9210d72d606a1b0505e5b,PodSandboxId:ac57491c8945575ea35326a6b572c93ff05b65a9f7c2a1f53e465d0b97a5fe09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711583575841481160,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6490cbc40210ad634becf13ac3a1705,},Annotations:map[string]string{io.kubernetes.container.hash: bc2c0f48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0128cd878ebdae2c6e217413183a103417714ffe00d55c1c92d1353e38238fa,PodSandboxId:bbb9d168e952fe414d641f2ec6a7e1e63a04205d4e2aae1c6ad9a2756c6443ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711583575849937975,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name
: etcd-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab33d6840338638cbdcd9ebe5fdd4d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7359b186,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afbf14c176818ba796eacd41f1600b5f09075c6a2b4f1edcad903530221f76ff,PodSandboxId:b75106f2dccc70e077dfb09e12c05d0942a02fa2c4894a1bd68ec76ca144eed7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711583575809836577,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-377576,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9eaf884653411ba1f22eb4cdbdfa748,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d66b161e-c6ce-4a94-9996-a53d799b7150 name=/runtime.v1.RuntimeService/ListContainers
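Note: the repeated ListContainers / Version / ImageFsInfo entries above are routine debug-level polling of CRI-O's gRPC API and are not errors in themselves; the same three calls can be replayed by hand to spot-check the runtime. A minimal sketch, assuming crictl is available inside the minikube VM and using the profile name (ha-377576) and CRI socket path (unix:///var/run/crio/crio.sock) reported elsewhere in this log:

	# query the runtime version, the full container list, and image filesystem usage over the CRI socket
	minikube -p ha-377576 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	minikube -p ha-377576 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	minikube -p ha-377576 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo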
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fc41f34db32bf       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   d8bf33d99bda1       busybox-7fdf7869d9-78c89
	1d5198968b769       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   78b0408435c31       coredns-76f75df574-47npx
	ed9a38e9f6cd9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   906a95ca7b930       coredns-76f75df574-msv9s
	381348b1458ce       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   bfc67c80fc558       storage-provisioner
	196de4c982b9c       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago       Running             kindnet-cni               0                   cea371a7b82b9       kindnet-5zmtk
	a226f01452a72       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      7 minutes ago       Running             kube-proxy                0                   3f1239e30a953       kube-proxy-4t77p
	f28af42c6db4a       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     7 minutes ago       Running             kube-vip                  0                   893f7358a6722       kube-vip-ha-377576
	22d460b8d6582       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      7 minutes ago       Running             kube-controller-manager   0                   97aabc5fbaef9       kube-controller-manager-ha-377576
	a0128cd878ebd       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   bbb9d168e952f       etcd-ha-377576
	5f113e7564c47       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      7 minutes ago       Running             kube-apiserver            0                   ac57491c89455       kube-apiserver-ha-377576
	afbf14c176818       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      7 minutes ago       Running             kube-scheduler            0                   b75106f2dccc7       kube-scheduler-ha-377576
	
	
	==> coredns [1d5198968b76968a091c33806ec424495c03395675b20e7eb35e330d16217211] <==
	[INFO] 10.244.0.4:39660 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002146785s
	[INFO] 10.244.0.4:50403 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000179658s
	[INFO] 10.244.0.4:56935 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000188852s
	[INFO] 10.244.0.4:48453 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000160917s
	[INFO] 10.244.0.4:36560 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067457s
	[INFO] 10.244.2.2:60611 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00032109s
	[INFO] 10.244.2.2:33575 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000137606s
	[INFO] 10.244.2.2:52980 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106821s
	[INFO] 10.244.2.2:50141 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114136s
	[INFO] 10.244.1.2:48883 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154613s
	[INFO] 10.244.1.2:60634 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000118063s
	[INFO] 10.244.1.2:39068 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170354s
	[INFO] 10.244.0.4:42784 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130962s
	[INFO] 10.244.0.4:58150 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087285s
	[INFO] 10.244.0.4:44129 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081095s
	[INFO] 10.244.0.4:44169 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047878s
	[INFO] 10.244.2.2:38674 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113751s
	[INFO] 10.244.1.2:52689 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000279728s
	[INFO] 10.244.0.4:54702 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138182s
	[INFO] 10.244.0.4:33994 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000143246s
	[INFO] 10.244.0.4:59928 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000149415s
	[INFO] 10.244.0.4:48254 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000119791s
	[INFO] 10.244.2.2:38914 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113463s
	[INFO] 10.244.2.2:45000 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084412s
	[INFO] 10.244.2.2:45899 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000082622s
	
	
	==> coredns [ed9a38e9f6cd98054d1e672c652e36e6262ceb6eb2a3e14911a5178d222135e7] <==
	[INFO] 10.244.1.2:48521 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.010184678s
	[INFO] 10.244.0.4:54036 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157882s
	[INFO] 10.244.0.4:33757 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000062762s
	[INFO] 10.244.1.2:42842 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000182746s
	[INFO] 10.244.1.2:38978 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003075134s
	[INFO] 10.244.1.2:39882 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000227134s
	[INFO] 10.244.1.2:36591 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000230513s
	[INFO] 10.244.1.2:39147 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002683128s
	[INFO] 10.244.1.2:57485 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000145666s
	[INFO] 10.244.1.2:50733 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000171259s
	[INFO] 10.244.0.4:38643 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147285s
	[INFO] 10.244.0.4:54253 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00151748s
	[INFO] 10.244.0.4:55400 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105715s
	[INFO] 10.244.2.2:37662 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00219357s
	[INFO] 10.244.2.2:39646 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000125023s
	[INFO] 10.244.2.2:33350 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001640561s
	[INFO] 10.244.2.2:40494 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076386s
	[INFO] 10.244.1.2:45207 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000150664s
	[INFO] 10.244.2.2:56881 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000230324s
	[INFO] 10.244.2.2:46450 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102951s
	[INFO] 10.244.2.2:49186 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107347s
	[INFO] 10.244.1.2:32923 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00033097s
	[INFO] 10.244.1.2:38607 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000207486s
	[INFO] 10.244.1.2:54186 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000187929s
	[INFO] 10.244.2.2:59559 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147121s
	
	
	==> describe nodes <==
	Name:               ha-377576
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-377576
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=ha-377576
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_27T23_53_03_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 23:53:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-377576
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 00:00:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 27 Mar 2024 23:57:08 +0000   Wed, 27 Mar 2024 23:53:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 27 Mar 2024 23:57:08 +0000   Wed, 27 Mar 2024 23:53:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 27 Mar 2024 23:57:08 +0000   Wed, 27 Mar 2024 23:53:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 27 Mar 2024 23:57:08 +0000   Wed, 27 Mar 2024 23:53:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.47
	  Hostname:    ha-377576
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 548afee7a42c42209042fc22e933a640
	  System UUID:                548afee7-a42c-4220-9042-fc22e933a640
	  Boot ID:                    446624d0-3e4c-494a-bf42-903d59e41c0c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace    Name                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------    ----                                  ------------  ----------  ---------------  -------------  ---
	  default      busybox-7fdf7869d9-78c89              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system  coredns-76f75df574-47npx              100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m10s
	  kube-system  coredns-76f75df574-msv9s              100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m10s
	  kube-system  etcd-ha-377576                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m23s
	  kube-system  kindnet-5zmtk                         100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m11s
	  kube-system  kube-apiserver-ha-377576              250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system  kube-controller-manager-ha-377576     200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system  kube-proxy-4t77p                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system  kube-scheduler-ha-377576              100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system  kube-vip-ha-377576                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system  storage-provisioner                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m9s   kube-proxy       
	  Normal  Starting                 7m23s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m23s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m23s  kubelet          Node ha-377576 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m23s  kubelet          Node ha-377576 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m23s  kubelet          Node ha-377576 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m11s  node-controller  Node ha-377576 event: Registered Node ha-377576 in Controller
	  Normal  NodeReady                7m8s   kubelet          Node ha-377576 status is now: NodeReady
	  Normal  RegisteredNode           4m47s  node-controller  Node ha-377576 event: Registered Node ha-377576 in Controller
	  Normal  RegisteredNode           3m35s  node-controller  Node ha-377576 event: Registered Node ha-377576 in Controller
	
	
	Name:               ha-377576-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-377576-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=ha-377576
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_27T23_55_23_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 23:55:20 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-377576-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 27 Mar 2024 23:58:03 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 27 Mar 2024 23:57:23 +0000   Wed, 27 Mar 2024 23:58:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 27 Mar 2024 23:57:23 +0000   Wed, 27 Mar 2024 23:58:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 27 Mar 2024 23:57:23 +0000   Wed, 27 Mar 2024 23:58:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 27 Mar 2024 23:57:23 +0000   Wed, 27 Mar 2024 23:58:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.117
	  Hostname:    ha-377576-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e8bdd7497a164e8f88f2bc1a3706be52
	  System UUID:                e8bdd749-7a16-4e8f-88f2-bc1a3706be52
	  Boot ID:                    9b021c57-de29-4df2-84eb-a4b0b13be45a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace    Name                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------    ----                                   ------------  ----------  ---------------  -------------  ---
	  default      busybox-7fdf7869d9-2dqtf               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system  etcd-ha-377576-m02                     100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m3s
	  kube-system  kindnet-6wmmc                          100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m5s
	  kube-system  kube-apiserver-ha-377576-m02           250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system  kube-controller-manager-ha-377576-m02  200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system  kube-proxy-k9dcr                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system  kube-scheduler-ha-377576-m02           100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system  kube-vip-ha-377576-m02                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m1s                 kube-proxy       
	  Normal  Starting                 5m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m5s (x8 over 5m5s)  kubelet          Node ha-377576-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m5s (x8 over 5m5s)  kubelet          Node ha-377576-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m5s (x7 over 5m5s)  kubelet          Node ha-377576-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m1s                 node-controller  Node ha-377576-m02 event: Registered Node ha-377576-m02 in Controller
	  Normal  RegisteredNode           4m47s                node-controller  Node ha-377576-m02 event: Registered Node ha-377576-m02 in Controller
	  Normal  RegisteredNode           3m35s                node-controller  Node ha-377576-m02 event: Registered Node ha-377576-m02 in Controller
	  Normal  NodeNotReady             101s                 node-controller  Node ha-377576-m02 status is now: NodeNotReady
	
	
	Name:               ha-377576-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-377576-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=ha-377576
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_27T23_56_36_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 23:56:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-377576-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 00:00:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 27 Mar 2024 23:57:02 +0000   Wed, 27 Mar 2024 23:56:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 27 Mar 2024 23:57:02 +0000   Wed, 27 Mar 2024 23:56:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 27 Mar 2024 23:57:02 +0000   Wed, 27 Mar 2024 23:56:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 27 Mar 2024 23:57:02 +0000   Wed, 27 Mar 2024 23:56:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    ha-377576-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 71074434be55477c85d1de1bbea96887
	  System UUID:                71074434-be55-477c-85d1-de1bbea96887
	  Boot ID:                    772f8d7c-e549-4957-ae7c-91dfd2921db0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace    Name                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------    ----                                   ------------  ----------  ---------------  -------------  ---
	  default      busybox-7fdf7869d9-jrh7n               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system  etcd-ha-377576-m03                     100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m53s
	  kube-system  kindnet-n8fpn                          100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m54s
	  kube-system  kube-apiserver-ha-377576-m03           250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system  kube-controller-manager-ha-377576-m03  200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system  kube-proxy-5plfq                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system  kube-scheduler-ha-377576-m03           100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system  kube-vip-ha-377576-m03                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m54s (x8 over 3m54s)  kubelet          Node ha-377576-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m54s (x8 over 3m54s)  kubelet          Node ha-377576-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m54s (x7 over 3m54s)  kubelet          Node ha-377576-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-377576-m03 event: Registered Node ha-377576-m03 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-377576-m03 event: Registered Node ha-377576-m03 in Controller
	  Normal  RegisteredNode           3m35s                  node-controller  Node ha-377576-m03 event: Registered Node ha-377576-m03 in Controller
	
	
	Name:               ha-377576-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-377576-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=ha-377576
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_27T23_57_34_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 23:57:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-377576-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 00:00:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 27 Mar 2024 23:58:04 +0000   Wed, 27 Mar 2024 23:57:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 27 Mar 2024 23:58:04 +0000   Wed, 27 Mar 2024 23:57:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 27 Mar 2024 23:58:04 +0000   Wed, 27 Mar 2024 23:57:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 27 Mar 2024 23:58:04 +0000   Wed, 27 Mar 2024 23:57:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.93
	  Hostname:    ha-377576-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9888e36a359a48f1aa6b97712e7f2662
	  System UUID:                9888e36a-359a-48f1-aa6b-97712e7f2662
	  Boot ID:                    952cc36b-038c-4c06-a7c6-406fd5b9d995
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace    Name              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------    ----              ------------  ----------  ---------------  -------------  ---
	  kube-system  kindnet-57xkj     100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m52s
	  kube-system  kube-proxy-nsmbj  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m52s (x3 over 2m52s)  kubelet          Node ha-377576-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m52s (x3 over 2m52s)  kubelet          Node ha-377576-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m52s (x3 over 2m52s)  kubelet          Node ha-377576-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m51s                  node-controller  Node ha-377576-m04 event: Registered Node ha-377576-m04 in Controller
	  Normal  RegisteredNode           2m50s                  node-controller  Node ha-377576-m04 event: Registered Node ha-377576-m04 in Controller
	  Normal  RegisteredNode           2m47s                  node-controller  Node ha-377576-m04 event: Registered Node ha-377576-m04 in Controller
	  Normal  NodeReady                2m43s                  kubelet          Node ha-377576-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Mar27 23:52] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052795] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040776] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.536087] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.734753] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.644341] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.445381] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.055911] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058244] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.192360] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.112715] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.267509] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.568474] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.064108] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.418967] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +1.239042] kauditd_printk_skb: 57 callbacks suppressed
	[Mar27 23:53] kauditd_printk_skb: 40 callbacks suppressed
	[  +0.989248] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	[ +13.165128] kauditd_printk_skb: 15 callbacks suppressed
	[Mar27 23:55] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [a0128cd878ebdae2c6e217413183a103417714ffe00d55c1c92d1353e38238fa] <==
	{"level":"warn","ts":"2024-03-28T00:00:25.624002Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:00:25.700413Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:00:25.713025Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:00:25.717798Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:00:25.724376Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:00:25.735238Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:00:25.744797Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:00:25.76878Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:00:25.778791Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:00:25.787803Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:00:25.809875Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:00:25.824889Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:00:25.825104Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:00:25.844838Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:00:25.850788Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:00:25.860435Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:00:25.871182Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:00:25.879014Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:00:25.886217Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:00:25.890765Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:00:25.894657Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:00:25.90095Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:00:25.908014Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:00:25.916036Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:00:25.924463Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:00:26 up 8 min,  0 users,  load average: 1.54, 0.78, 0.33
	Linux ha-377576 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [196de4c982b9c19fd66bac5f3fa839489745805e12542f317a366989b520706b] <==
	I0327 23:59:47.308417       1 main.go:250] Node ha-377576-m04 has CIDR [10.244.3.0/24] 
	I0327 23:59:57.315079       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0327 23:59:57.315280       1 main.go:227] handling current node
	I0327 23:59:57.315326       1 main.go:223] Handling node with IPs: map[192.168.39.117:{}]
	I0327 23:59:57.315347       1 main.go:250] Node ha-377576-m02 has CIDR [10.244.1.0/24] 
	I0327 23:59:57.315606       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0327 23:59:57.315655       1 main.go:250] Node ha-377576-m03 has CIDR [10.244.2.0/24] 
	I0327 23:59:57.315722       1 main.go:223] Handling node with IPs: map[192.168.39.93:{}]
	I0327 23:59:57.315740       1 main.go:250] Node ha-377576-m04 has CIDR [10.244.3.0/24] 
	I0328 00:00:07.322694       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0328 00:00:07.322798       1 main.go:227] handling current node
	I0328 00:00:07.322825       1 main.go:223] Handling node with IPs: map[192.168.39.117:{}]
	I0328 00:00:07.322929       1 main.go:250] Node ha-377576-m02 has CIDR [10.244.1.0/24] 
	I0328 00:00:07.323099       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0328 00:00:07.323133       1 main.go:250] Node ha-377576-m03 has CIDR [10.244.2.0/24] 
	I0328 00:00:07.323199       1 main.go:223] Handling node with IPs: map[192.168.39.93:{}]
	I0328 00:00:07.323217       1 main.go:250] Node ha-377576-m04 has CIDR [10.244.3.0/24] 
	I0328 00:00:17.337129       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0328 00:00:17.337187       1 main.go:227] handling current node
	I0328 00:00:17.337204       1 main.go:223] Handling node with IPs: map[192.168.39.117:{}]
	I0328 00:00:17.337213       1 main.go:250] Node ha-377576-m02 has CIDR [10.244.1.0/24] 
	I0328 00:00:17.337334       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0328 00:00:17.337363       1 main.go:250] Node ha-377576-m03 has CIDR [10.244.2.0/24] 
	I0328 00:00:17.337416       1 main.go:223] Handling node with IPs: map[192.168.39.93:{}]
	I0328 00:00:17.337421       1 main.go:250] Node ha-377576-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [5f113e7564c47f0e2aa57dbb0702acbf86b1d75d00e9210d72d606a1b0505e5b] <==
	I0327 23:52:58.602484       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0327 23:52:58.602589       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0327 23:52:58.602912       1 aggregator.go:165] initial CRD sync complete...
	I0327 23:52:58.602950       1 autoregister_controller.go:141] Starting autoregister controller
	I0327 23:52:58.602957       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0327 23:52:58.602962       1 cache.go:39] Caches are synced for autoregister controller
	I0327 23:52:58.656155       1 controller.go:624] quota admission added evaluator for: namespaces
	I0327 23:52:58.669066       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0327 23:52:58.685463       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0327 23:52:58.763771       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0327 23:52:59.502968       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0327 23:52:59.508615       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0327 23:52:59.508680       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0327 23:53:00.133191       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0327 23:53:00.184659       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0327 23:53:00.331947       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0327 23:53:00.346384       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.47]
	I0327 23:53:00.347287       1 controller.go:624] quota admission added evaluator for: endpoints
	I0327 23:53:00.351876       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0327 23:53:00.528981       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0327 23:53:02.496870       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0327 23:53:02.517479       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0327 23:53:02.530303       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0327 23:53:14.644261       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0327 23:53:15.098771       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [22d460b8d6582d93d5633e1e1af46683647a5632f6b9153f61a6c374dca4f34c] <==
	I0327 23:56:59.802638       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="139.102µs"
	E0327 23:57:33.565264       1 certificate_controller.go:146] Sync csr-c72s4 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-c72s4": the object has been modified; please apply your changes to the latest version and try again
	I0327 23:57:33.600483       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-377576-m04\" does not exist"
	I0327 23:57:33.652219       1 range_allocator.go:380] "Set node PodCIDR" node="ha-377576-m04" podCIDRs=["10.244.3.0/24"]
	I0327 23:57:33.660867       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-tzvbj"
	I0327 23:57:33.661119       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nsmbj"
	I0327 23:57:33.750910       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-t6nx6"
	I0327 23:57:33.772460       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-tzvbj"
	I0327 23:57:33.856750       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-prljv"
	I0327 23:57:33.871118       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-pn9dw"
	I0327 23:57:34.129097       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-377576-m04"
	I0327 23:57:34.129327       1 event.go:376] "Event occurred" object="ha-377576-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-377576-m04 event: Registered Node ha-377576-m04 in Controller"
	I0327 23:57:43.008283       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-377576-m04"
	I0327 23:58:44.158629       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-377576-m04"
	I0327 23:58:44.159072       1 event.go:376] "Event occurred" object="ha-377576-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-377576-m02 status is now: NodeNotReady"
	I0327 23:58:44.179770       1 event.go:376] "Event occurred" object="kube-system/etcd-ha-377576-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0327 23:58:44.203197       1 event.go:376] "Event occurred" object="kube-system/kube-vip-ha-377576-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0327 23:58:44.213596       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-2dqtf" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0327 23:58:44.230027       1 event.go:376] "Event occurred" object="kube-system/kindnet-6wmmc" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0327 23:58:44.256763       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-k9dcr" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0327 23:58:44.285708       1 event.go:376] "Event occurred" object="kube-system/kube-controller-manager-ha-377576-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0327 23:58:44.289987       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="75.746521ms"
	I0327 23:58:44.290123       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="45.009µs"
	I0327 23:58:44.307661       1 event.go:376] "Event occurred" object="kube-system/kube-apiserver-ha-377576-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0327 23:58:44.343032       1 event.go:376] "Event occurred" object="kube-system/kube-scheduler-ha-377576-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-proxy [a226f01452a72cf6e9a608450715a12f6663ccf2697e8259f809e966809978ce] <==
	I0327 23:53:15.959892       1 server_others.go:72] "Using iptables proxy"
	I0327 23:53:15.983320       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.47"]
	I0327 23:53:16.055266       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0327 23:53:16.055358       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0327 23:53:16.055446       1 server_others.go:168] "Using iptables Proxier"
	I0327 23:53:16.064618       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0327 23:53:16.065411       1 server.go:865] "Version info" version="v1.29.3"
	I0327 23:53:16.065456       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0327 23:53:16.072197       1 config.go:188] "Starting service config controller"
	I0327 23:53:16.072660       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0327 23:53:16.072718       1 config.go:97] "Starting endpoint slice config controller"
	I0327 23:53:16.072726       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0327 23:53:16.074737       1 config.go:315] "Starting node config controller"
	I0327 23:53:16.074765       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0327 23:53:16.172890       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0327 23:53:16.172897       1 shared_informer.go:318] Caches are synced for service config
	I0327 23:53:16.175683       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [afbf14c176818ba796eacd41f1600b5f09075c6a2b4f1edcad903530221f76ff] <==
	W0327 23:52:58.721691       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0327 23:52:58.721703       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0327 23:52:58.721890       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0327 23:52:58.721901       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0327 23:52:58.726852       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0327 23:52:58.726931       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0327 23:52:58.727134       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0327 23:52:58.727145       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0327 23:52:58.727180       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0327 23:52:58.727191       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0327 23:52:58.727312       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0327 23:52:58.728036       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0327 23:52:59.556968       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0327 23:52:59.557037       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0327 23:52:59.658438       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0327 23:52:59.658484       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0327 23:52:59.748727       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0327 23:52:59.748764       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0327 23:52:59.830540       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0327 23:52:59.830590       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0327 23:52:59.911289       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0327 23:52:59.911418       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0327 23:53:00.016951       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0327 23:53:00.017332       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0327 23:53:03.289815       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 27 23:56:02 ha-377576 kubelet[1383]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 27 23:56:02 ha-377576 kubelet[1383]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 27 23:56:02 ha-377576 kubelet[1383]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 27 23:56:55 ha-377576 kubelet[1383]: I0327 23:56:55.769873    1383 topology_manager.go:215] "Topology Admit Handler" podUID="3272474d-5490-4c7c-9dfe-ded8488ec32f" podNamespace="default" podName="busybox-7fdf7869d9-78c89"
	Mar 27 23:56:55 ha-377576 kubelet[1383]: I0327 23:56:55.788457    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vdp8\" (UniqueName: \"kubernetes.io/projected/3272474d-5490-4c7c-9dfe-ded8488ec32f-kube-api-access-8vdp8\") pod \"busybox-7fdf7869d9-78c89\" (UID: \"3272474d-5490-4c7c-9dfe-ded8488ec32f\") " pod="default/busybox-7fdf7869d9-78c89"
	Mar 27 23:57:02 ha-377576 kubelet[1383]: E0327 23:57:02.708996    1383 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 27 23:57:02 ha-377576 kubelet[1383]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 27 23:57:02 ha-377576 kubelet[1383]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 27 23:57:02 ha-377576 kubelet[1383]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 27 23:57:02 ha-377576 kubelet[1383]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 27 23:58:02 ha-377576 kubelet[1383]: E0327 23:58:02.709132    1383 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 27 23:58:02 ha-377576 kubelet[1383]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 27 23:58:02 ha-377576 kubelet[1383]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 27 23:58:02 ha-377576 kubelet[1383]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 27 23:58:02 ha-377576 kubelet[1383]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 27 23:59:02 ha-377576 kubelet[1383]: E0327 23:59:02.709132    1383 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 27 23:59:02 ha-377576 kubelet[1383]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 27 23:59:02 ha-377576 kubelet[1383]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 27 23:59:02 ha-377576 kubelet[1383]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 27 23:59:02 ha-377576 kubelet[1383]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 00:00:02 ha-377576 kubelet[1383]: E0328 00:00:02.709291    1383 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 00:00:02 ha-377576 kubelet[1383]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 00:00:02 ha-377576 kubelet[1383]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 00:00:02 ha-377576 kubelet[1383]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 00:00:02 ha-377576 kubelet[1383]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
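
The kubelet log above ends every minute with the same "Could not set up iptables canary" error: ip6tables cannot initialize the `nat` table and suggests a missing kernel module ("do you need to insmod?"). A minimal diagnostic sketch follows, assuming the `minikube` binary is on PATH and shelling into the profile named in this report; the module names and commands are standard (`ip6table_nat`, `lsmod`, `modprobe`, `ip6tables -t nat -L`) but this is an ad-hoc probe, not part of the test suite.

```go
// diag_ip6tables.go - ad-hoc probe for the recurring kubelet ip6tables canary error.
// Assumption: "minikube" is on PATH and the profile name matches the cluster above.
package main

import (
	"fmt"
	"os/exec"
)

// runInGuest executes a shell command inside the minikube guest VM over SSH.
func runInGuest(profile, cmd string) {
	out, err := exec.Command("minikube", "-p", profile, "ssh", "--", cmd).CombinedOutput()
	fmt.Printf("$ %s\n%s(err=%v)\n\n", cmd, out, err)
}

func main() {
	profile := "ha-377576" // taken from the logs above; adjust for other clusters

	// Is the IPv6 NAT table backed by a loaded kernel module?
	runInGuest(profile, "lsmod | grep -E 'ip6table_nat|ip6_tables' || echo 'modules not loaded'")

	// Try loading it, mirroring the "do you need to insmod?" hint in the kubelet log.
	runInGuest(profile, "sudo modprobe ip6table_nat || echo 'modprobe failed (module likely absent from guest kernel)'")

	// If the module is present, listing the nat table should now succeed.
	runInGuest(profile, "sudo ip6tables -t nat -L -n")
}
```

If `modprobe ip6table_nat` fails inside the guest, the canary error is expected and benign for an IPv4-only cluster; the kubelet simply retries each minute.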
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-377576 -n ha-377576
helpers_test.go:261: (dbg) Run:  kubectl --context ha-377576 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.36s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (56.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-377576 status -v=7 --alsologtostderr: exit status 3 (3.212245638s)

                                                
                                                
-- stdout --
	ha-377576
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-377576-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-377576-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-377576-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 00:00:30.666825 1091188 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:00:30.666984 1091188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:00:30.666993 1091188 out.go:304] Setting ErrFile to fd 2...
	I0328 00:00:30.666998 1091188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:00:30.667214 1091188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 00:00:30.667453 1091188 out.go:298] Setting JSON to false
	I0328 00:00:30.667485 1091188 mustload.go:65] Loading cluster: ha-377576
	I0328 00:00:30.667594 1091188 notify.go:220] Checking for updates...
	I0328 00:00:30.668015 1091188 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:00:30.668040 1091188 status.go:255] checking status of ha-377576 ...
	I0328 00:00:30.668562 1091188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:30.668631 1091188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:30.688245 1091188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I0328 00:00:30.688708 1091188 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:30.689285 1091188 main.go:141] libmachine: Using API Version  1
	I0328 00:00:30.689310 1091188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:30.689761 1091188 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:30.689981 1091188 main.go:141] libmachine: (ha-377576) Calling .GetState
	I0328 00:00:30.691679 1091188 status.go:330] ha-377576 host status = "Running" (err=<nil>)
	I0328 00:00:30.691696 1091188 host.go:66] Checking if "ha-377576" exists ...
	I0328 00:00:30.691986 1091188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:30.692026 1091188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:30.706998 1091188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46137
	I0328 00:00:30.707469 1091188 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:30.707974 1091188 main.go:141] libmachine: Using API Version  1
	I0328 00:00:30.708004 1091188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:30.708340 1091188 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:30.708540 1091188 main.go:141] libmachine: (ha-377576) Calling .GetIP
	I0328 00:00:30.711849 1091188 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:00:30.712351 1091188 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:00:30.712380 1091188 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:00:30.712567 1091188 host.go:66] Checking if "ha-377576" exists ...
	I0328 00:00:30.712855 1091188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:30.712892 1091188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:30.729506 1091188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34187
	I0328 00:00:30.730212 1091188 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:30.730883 1091188 main.go:141] libmachine: Using API Version  1
	I0328 00:00:30.730927 1091188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:30.731336 1091188 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:30.731560 1091188 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0328 00:00:30.731743 1091188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:00:30.731771 1091188 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:00:30.734474 1091188 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:00:30.734849 1091188 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:00:30.734884 1091188 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:00:30.735074 1091188 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0328 00:00:30.735263 1091188 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:00:30.735428 1091188 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0328 00:00:30.735597 1091188 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0328 00:00:30.815192 1091188 ssh_runner.go:195] Run: systemctl --version
	I0328 00:00:30.822206 1091188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:00:30.838578 1091188 kubeconfig.go:125] found "ha-377576" server: "https://192.168.39.254:8443"
	I0328 00:00:30.838619 1091188 api_server.go:166] Checking apiserver status ...
	I0328 00:00:30.838652 1091188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:00:30.854323 1091188 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup
	W0328 00:00:30.864122 1091188 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0328 00:00:30.864185 1091188 ssh_runner.go:195] Run: ls
	I0328 00:00:30.869768 1091188 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0328 00:00:30.874264 1091188 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0328 00:00:30.874290 1091188 status.go:422] ha-377576 apiserver status = Running (err=<nil>)
	I0328 00:00:30.874301 1091188 status.go:257] ha-377576 status: &{Name:ha-377576 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:00:30.874318 1091188 status.go:255] checking status of ha-377576-m02 ...
	I0328 00:00:30.874603 1091188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:30.874640 1091188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:30.890185 1091188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39991
	I0328 00:00:30.890829 1091188 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:30.891408 1091188 main.go:141] libmachine: Using API Version  1
	I0328 00:00:30.891437 1091188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:30.891908 1091188 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:30.892134 1091188 main.go:141] libmachine: (ha-377576-m02) Calling .GetState
	I0328 00:00:30.893955 1091188 status.go:330] ha-377576-m02 host status = "Running" (err=<nil>)
	I0328 00:00:30.893979 1091188 host.go:66] Checking if "ha-377576-m02" exists ...
	I0328 00:00:30.894409 1091188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:30.894464 1091188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:30.911372 1091188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35913
	I0328 00:00:30.911828 1091188 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:30.912344 1091188 main.go:141] libmachine: Using API Version  1
	I0328 00:00:30.912377 1091188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:30.912791 1091188 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:30.912962 1091188 main.go:141] libmachine: (ha-377576-m02) Calling .GetIP
	I0328 00:00:30.915845 1091188 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:00:30.916329 1091188 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0328 00:00:30.916362 1091188 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:00:30.916616 1091188 host.go:66] Checking if "ha-377576-m02" exists ...
	I0328 00:00:30.916911 1091188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:30.916960 1091188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:30.931853 1091188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45493
	I0328 00:00:30.932332 1091188 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:30.932854 1091188 main.go:141] libmachine: Using API Version  1
	I0328 00:00:30.932882 1091188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:30.933295 1091188 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:30.933510 1091188 main.go:141] libmachine: (ha-377576-m02) Calling .DriverName
	I0328 00:00:30.933724 1091188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:00:30.933748 1091188 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	I0328 00:00:30.936699 1091188 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:00:30.938358 1091188 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHPort
	I0328 00:00:30.938408 1091188 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0328 00:00:30.938453 1091188 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:00:30.939016 1091188 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0328 00:00:30.939325 1091188 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHUsername
	I0328 00:00:30.939502 1091188 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/id_rsa Username:docker}
	W0328 00:00:33.418649 1091188 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.117:22: connect: no route to host
	W0328 00:00:33.418758 1091188 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.117:22: connect: no route to host
	E0328 00:00:33.418775 1091188 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.117:22: connect: no route to host
	I0328 00:00:33.418785 1091188 status.go:257] ha-377576-m02 status: &{Name:ha-377576-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0328 00:00:33.418803 1091188 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.117:22: connect: no route to host
	I0328 00:00:33.418811 1091188 status.go:255] checking status of ha-377576-m03 ...
	I0328 00:00:33.419237 1091188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:33.419290 1091188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:33.435293 1091188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46695
	I0328 00:00:33.435812 1091188 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:33.436367 1091188 main.go:141] libmachine: Using API Version  1
	I0328 00:00:33.436401 1091188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:33.436804 1091188 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:33.437029 1091188 main.go:141] libmachine: (ha-377576-m03) Calling .GetState
	I0328 00:00:33.438835 1091188 status.go:330] ha-377576-m03 host status = "Running" (err=<nil>)
	I0328 00:00:33.438854 1091188 host.go:66] Checking if "ha-377576-m03" exists ...
	I0328 00:00:33.439143 1091188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:33.439183 1091188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:33.455658 1091188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46703
	I0328 00:00:33.456117 1091188 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:33.456542 1091188 main.go:141] libmachine: Using API Version  1
	I0328 00:00:33.456562 1091188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:33.456936 1091188 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:33.457153 1091188 main.go:141] libmachine: (ha-377576-m03) Calling .GetIP
	I0328 00:00:33.460006 1091188 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:00:33.460490 1091188 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0328 00:00:33.460520 1091188 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:00:33.460660 1091188 host.go:66] Checking if "ha-377576-m03" exists ...
	I0328 00:00:33.461102 1091188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:33.461158 1091188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:33.476705 1091188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36739
	I0328 00:00:33.477229 1091188 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:33.477693 1091188 main.go:141] libmachine: Using API Version  1
	I0328 00:00:33.477719 1091188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:33.478119 1091188 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:33.478368 1091188 main.go:141] libmachine: (ha-377576-m03) Calling .DriverName
	I0328 00:00:33.478579 1091188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:00:33.478604 1091188 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	I0328 00:00:33.481457 1091188 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:00:33.482137 1091188 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0328 00:00:33.482167 1091188 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:00:33.482335 1091188 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHPort
	I0328 00:00:33.482534 1091188 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0328 00:00:33.482738 1091188 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHUsername
	I0328 00:00:33.482899 1091188 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/id_rsa Username:docker}
	I0328 00:00:33.582067 1091188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:00:33.599443 1091188 kubeconfig.go:125] found "ha-377576" server: "https://192.168.39.254:8443"
	I0328 00:00:33.599477 1091188 api_server.go:166] Checking apiserver status ...
	I0328 00:00:33.599514 1091188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:00:33.614641 1091188 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup
	W0328 00:00:33.625285 1091188 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0328 00:00:33.625362 1091188 ssh_runner.go:195] Run: ls
	I0328 00:00:33.630619 1091188 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0328 00:00:33.638201 1091188 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0328 00:00:33.638255 1091188 status.go:422] ha-377576-m03 apiserver status = Running (err=<nil>)
	I0328 00:00:33.638269 1091188 status.go:257] ha-377576-m03 status: &{Name:ha-377576-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:00:33.638291 1091188 status.go:255] checking status of ha-377576-m04 ...
	I0328 00:00:33.638723 1091188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:33.638773 1091188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:33.655657 1091188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41071
	I0328 00:00:33.656301 1091188 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:33.656851 1091188 main.go:141] libmachine: Using API Version  1
	I0328 00:00:33.656881 1091188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:33.657282 1091188 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:33.657488 1091188 main.go:141] libmachine: (ha-377576-m04) Calling .GetState
	I0328 00:00:33.659183 1091188 status.go:330] ha-377576-m04 host status = "Running" (err=<nil>)
	I0328 00:00:33.659204 1091188 host.go:66] Checking if "ha-377576-m04" exists ...
	I0328 00:00:33.659491 1091188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:33.659528 1091188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:33.674875 1091188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38435
	I0328 00:00:33.675384 1091188 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:33.675955 1091188 main.go:141] libmachine: Using API Version  1
	I0328 00:00:33.675986 1091188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:33.676359 1091188 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:33.676568 1091188 main.go:141] libmachine: (ha-377576-m04) Calling .GetIP
	I0328 00:00:33.679960 1091188 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:00:33.680416 1091188 main.go:141] libmachine: (ha-377576-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:c2:81", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:57:19 +0000 UTC Type:0 Mac:52:54:00:6a:c2:81 Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:ha-377576-m04 Clientid:01:52:54:00:6a:c2:81}
	I0328 00:00:33.680451 1091188 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined IP address 192.168.39.93 and MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:00:33.680663 1091188 host.go:66] Checking if "ha-377576-m04" exists ...
	I0328 00:00:33.681007 1091188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:33.681051 1091188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:33.697241 1091188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40865
	I0328 00:00:33.697752 1091188 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:33.698271 1091188 main.go:141] libmachine: Using API Version  1
	I0328 00:00:33.698300 1091188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:33.698655 1091188 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:33.698845 1091188 main.go:141] libmachine: (ha-377576-m04) Calling .DriverName
	I0328 00:00:33.699036 1091188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:00:33.699059 1091188 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHHostname
	I0328 00:00:33.701844 1091188 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:00:33.702383 1091188 main.go:141] libmachine: (ha-377576-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:c2:81", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:57:19 +0000 UTC Type:0 Mac:52:54:00:6a:c2:81 Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:ha-377576-m04 Clientid:01:52:54:00:6a:c2:81}
	I0328 00:00:33.702422 1091188 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined IP address 192.168.39.93 and MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:00:33.702569 1091188 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHPort
	I0328 00:00:33.702785 1091188 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHKeyPath
	I0328 00:00:33.702960 1091188 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHUsername
	I0328 00:00:33.703091 1091188 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m04/id_rsa Username:docker}
	I0328 00:00:33.790563 1091188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:00:33.812687 1091188 status.go:257] ha-377576-m04 status: &{Name:ha-377576-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
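
The stderr trace above shows the per-node checks behind `minikube status`: dial the node's SSH port, run `df -h /var` and `sudo systemctl is-active --quiet service kubelet` over SSH, then probe the apiserver at the load-balanced endpoint's `/healthz`. For ha-377576-m02 the very first step fails with "no route to host", which is what yields `host: Error` / `kubelet: Nonexistent`. Below is a rough standalone approximation of those probes, not minikube's implementation; the node IP, profile name, and VIP are copied from this report and serve only as placeholders.

```go
// status_probe.go - rough approximation of the per-node checks traced above.
// Assumptions: "minikube" is on PATH, profile/node/VIP values come from this report.
package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"net/http"
	"os/exec"
	"time"
)

func main() {
	nodeIP := "192.168.39.117"                     // ha-377576-m02 in the trace above
	healthz := "https://192.168.39.254:8443/healthz" // load-balanced apiserver endpoint

	// 1. SSH reachability: the trace fails here with "no route to host",
	//    which is what turns the node's status into Host:Error.
	if conn, err := net.DialTimeout("tcp", nodeIP+":22", 5*time.Second); err != nil {
		fmt.Println("ssh dial failed:", err)
	} else {
		conn.Close()
		fmt.Println("ssh port reachable")
	}

	// 2. Kubelet check, run over SSH the same way the trace shows.
	out, err := exec.Command("minikube", "-p", "ha-377576", "ssh", "-n", "m02", "--",
		"sudo systemctl is-active --quiet service kubelet").CombinedOutput()
	fmt.Printf("kubelet check: %s err=%v\n", out, err)

	// 3. Apiserver health via the VIP (self-signed cert, so skip verification
	//    for this ad-hoc probe only).
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	if resp, err := client.Get(healthz); err == nil {
		fmt.Println("healthz:", resp.Status)
		resp.Body.Close()
	} else {
		fmt.Println("healthz error:", err)
	}
}
```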
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-377576 status -v=7 --alsologtostderr: exit status 3 (5.143147211s)

                                                
                                                
-- stdout --
	ha-377576
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-377576-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-377576-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-377576-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 00:00:35.218889 1091283 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:00:35.219027 1091283 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:00:35.219041 1091283 out.go:304] Setting ErrFile to fd 2...
	I0328 00:00:35.219046 1091283 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:00:35.219271 1091283 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 00:00:35.219460 1091283 out.go:298] Setting JSON to false
	I0328 00:00:35.219485 1091283 mustload.go:65] Loading cluster: ha-377576
	I0328 00:00:35.219551 1091283 notify.go:220] Checking for updates...
	I0328 00:00:35.219879 1091283 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:00:35.219893 1091283 status.go:255] checking status of ha-377576 ...
	I0328 00:00:35.220305 1091283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:35.220373 1091283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:35.239699 1091283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41895
	I0328 00:00:35.240197 1091283 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:35.240947 1091283 main.go:141] libmachine: Using API Version  1
	I0328 00:00:35.240982 1091283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:35.241580 1091283 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:35.241856 1091283 main.go:141] libmachine: (ha-377576) Calling .GetState
	I0328 00:00:35.243637 1091283 status.go:330] ha-377576 host status = "Running" (err=<nil>)
	I0328 00:00:35.243658 1091283 host.go:66] Checking if "ha-377576" exists ...
	I0328 00:00:35.243946 1091283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:35.243991 1091283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:35.261874 1091283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40725
	I0328 00:00:35.262618 1091283 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:35.263220 1091283 main.go:141] libmachine: Using API Version  1
	I0328 00:00:35.263253 1091283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:35.263607 1091283 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:35.263821 1091283 main.go:141] libmachine: (ha-377576) Calling .GetIP
	I0328 00:00:35.267032 1091283 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:00:35.267517 1091283 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:00:35.267546 1091283 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:00:35.267755 1091283 host.go:66] Checking if "ha-377576" exists ...
	I0328 00:00:35.268046 1091283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:35.268084 1091283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:35.284125 1091283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45015
	I0328 00:00:35.284651 1091283 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:35.285218 1091283 main.go:141] libmachine: Using API Version  1
	I0328 00:00:35.285246 1091283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:35.285676 1091283 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:35.285924 1091283 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0328 00:00:35.286192 1091283 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:00:35.286223 1091283 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:00:35.289083 1091283 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:00:35.289651 1091283 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:00:35.289684 1091283 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:00:35.289799 1091283 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0328 00:00:35.289996 1091283 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:00:35.290188 1091283 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0328 00:00:35.290376 1091283 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0328 00:00:35.379812 1091283 ssh_runner.go:195] Run: systemctl --version
	I0328 00:00:35.386796 1091283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:00:35.401962 1091283 kubeconfig.go:125] found "ha-377576" server: "https://192.168.39.254:8443"
	I0328 00:00:35.401996 1091283 api_server.go:166] Checking apiserver status ...
	I0328 00:00:35.402044 1091283 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:00:35.420076 1091283 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup
	W0328 00:00:35.431403 1091283 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0328 00:00:35.431458 1091283 ssh_runner.go:195] Run: ls
	I0328 00:00:35.435952 1091283 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0328 00:00:35.440469 1091283 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0328 00:00:35.440497 1091283 status.go:422] ha-377576 apiserver status = Running (err=<nil>)
	I0328 00:00:35.440508 1091283 status.go:257] ha-377576 status: &{Name:ha-377576 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:00:35.440525 1091283 status.go:255] checking status of ha-377576-m02 ...
	I0328 00:00:35.440879 1091283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:35.440934 1091283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:35.457321 1091283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44117
	I0328 00:00:35.457817 1091283 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:35.458330 1091283 main.go:141] libmachine: Using API Version  1
	I0328 00:00:35.458355 1091283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:35.458720 1091283 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:35.458896 1091283 main.go:141] libmachine: (ha-377576-m02) Calling .GetState
	I0328 00:00:35.460768 1091283 status.go:330] ha-377576-m02 host status = "Running" (err=<nil>)
	I0328 00:00:35.460804 1091283 host.go:66] Checking if "ha-377576-m02" exists ...
	I0328 00:00:35.461187 1091283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:35.461235 1091283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:35.477394 1091283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
	I0328 00:00:35.477900 1091283 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:35.478459 1091283 main.go:141] libmachine: Using API Version  1
	I0328 00:00:35.478488 1091283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:35.478891 1091283 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:35.479131 1091283 main.go:141] libmachine: (ha-377576-m02) Calling .GetIP
	I0328 00:00:35.482376 1091283 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:00:35.482779 1091283 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0328 00:00:35.482803 1091283 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:00:35.483110 1091283 host.go:66] Checking if "ha-377576-m02" exists ...
	I0328 00:00:35.483446 1091283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:35.483495 1091283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:35.501109 1091283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46649
	I0328 00:00:35.501579 1091283 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:35.502151 1091283 main.go:141] libmachine: Using API Version  1
	I0328 00:00:35.502183 1091283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:35.502534 1091283 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:35.502771 1091283 main.go:141] libmachine: (ha-377576-m02) Calling .DriverName
	I0328 00:00:35.503011 1091283 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:00:35.503036 1091283 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	I0328 00:00:35.506163 1091283 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:00:35.506642 1091283 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0328 00:00:35.506675 1091283 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:00:35.506861 1091283 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHPort
	I0328 00:00:35.507040 1091283 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0328 00:00:35.507235 1091283 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHUsername
	I0328 00:00:35.507389 1091283 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/id_rsa Username:docker}
	W0328 00:00:36.490520 1091283 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.117:22: connect: no route to host
	I0328 00:00:36.490606 1091283 retry.go:31] will retry after 351.508326ms: dial tcp 192.168.39.117:22: connect: no route to host
	W0328 00:00:39.914591 1091283 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.117:22: connect: no route to host
	W0328 00:00:39.914731 1091283 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.117:22: connect: no route to host
	E0328 00:00:39.914756 1091283 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.117:22: connect: no route to host
	I0328 00:00:39.914764 1091283 status.go:257] ha-377576-m02 status: &{Name:ha-377576-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0328 00:00:39.914787 1091283 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.117:22: connect: no route to host
	I0328 00:00:39.914794 1091283 status.go:255] checking status of ha-377576-m03 ...
	I0328 00:00:39.915120 1091283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:39.915185 1091283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:39.930740 1091283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33403
	I0328 00:00:39.931241 1091283 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:39.931777 1091283 main.go:141] libmachine: Using API Version  1
	I0328 00:00:39.931802 1091283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:39.932135 1091283 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:39.932365 1091283 main.go:141] libmachine: (ha-377576-m03) Calling .GetState
	I0328 00:00:39.934053 1091283 status.go:330] ha-377576-m03 host status = "Running" (err=<nil>)
	I0328 00:00:39.934083 1091283 host.go:66] Checking if "ha-377576-m03" exists ...
	I0328 00:00:39.934439 1091283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:39.934479 1091283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:39.951062 1091283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44379
	I0328 00:00:39.951548 1091283 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:39.952056 1091283 main.go:141] libmachine: Using API Version  1
	I0328 00:00:39.952078 1091283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:39.952450 1091283 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:39.952694 1091283 main.go:141] libmachine: (ha-377576-m03) Calling .GetIP
	I0328 00:00:39.955640 1091283 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:00:39.956207 1091283 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0328 00:00:39.956238 1091283 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:00:39.956389 1091283 host.go:66] Checking if "ha-377576-m03" exists ...
	I0328 00:00:39.956810 1091283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:39.956843 1091283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:39.972540 1091283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42021
	I0328 00:00:39.973071 1091283 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:39.973550 1091283 main.go:141] libmachine: Using API Version  1
	I0328 00:00:39.973573 1091283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:39.973898 1091283 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:39.974090 1091283 main.go:141] libmachine: (ha-377576-m03) Calling .DriverName
	I0328 00:00:39.974318 1091283 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:00:39.974340 1091283 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	I0328 00:00:39.977378 1091283 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:00:39.977821 1091283 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0328 00:00:39.977863 1091283 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:00:39.978012 1091283 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHPort
	I0328 00:00:39.978185 1091283 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0328 00:00:39.978394 1091283 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHUsername
	I0328 00:00:39.978564 1091283 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/id_rsa Username:docker}
	I0328 00:00:40.075438 1091283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:00:40.095888 1091283 kubeconfig.go:125] found "ha-377576" server: "https://192.168.39.254:8443"
	I0328 00:00:40.095919 1091283 api_server.go:166] Checking apiserver status ...
	I0328 00:00:40.095955 1091283 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:00:40.111294 1091283 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup
	W0328 00:00:40.122664 1091283 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0328 00:00:40.122724 1091283 ssh_runner.go:195] Run: ls
	I0328 00:00:40.127859 1091283 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0328 00:00:40.135747 1091283 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0328 00:00:40.135782 1091283 status.go:422] ha-377576-m03 apiserver status = Running (err=<nil>)
	I0328 00:00:40.135795 1091283 status.go:257] ha-377576-m03 status: &{Name:ha-377576-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:00:40.135818 1091283 status.go:255] checking status of ha-377576-m04 ...
	I0328 00:00:40.136156 1091283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:40.136209 1091283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:40.151765 1091283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46877
	I0328 00:00:40.152236 1091283 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:40.152751 1091283 main.go:141] libmachine: Using API Version  1
	I0328 00:00:40.152773 1091283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:40.153102 1091283 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:40.153349 1091283 main.go:141] libmachine: (ha-377576-m04) Calling .GetState
	I0328 00:00:40.154907 1091283 status.go:330] ha-377576-m04 host status = "Running" (err=<nil>)
	I0328 00:00:40.154928 1091283 host.go:66] Checking if "ha-377576-m04" exists ...
	I0328 00:00:40.155320 1091283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:40.155368 1091283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:40.171224 1091283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34349
	I0328 00:00:40.171722 1091283 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:40.172301 1091283 main.go:141] libmachine: Using API Version  1
	I0328 00:00:40.172328 1091283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:40.172701 1091283 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:40.172911 1091283 main.go:141] libmachine: (ha-377576-m04) Calling .GetIP
	I0328 00:00:40.176110 1091283 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:00:40.176530 1091283 main.go:141] libmachine: (ha-377576-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:c2:81", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:57:19 +0000 UTC Type:0 Mac:52:54:00:6a:c2:81 Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:ha-377576-m04 Clientid:01:52:54:00:6a:c2:81}
	I0328 00:00:40.176575 1091283 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined IP address 192.168.39.93 and MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:00:40.176696 1091283 host.go:66] Checking if "ha-377576-m04" exists ...
	I0328 00:00:40.177022 1091283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:40.177068 1091283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:40.193162 1091283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40323
	I0328 00:00:40.193578 1091283 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:40.194081 1091283 main.go:141] libmachine: Using API Version  1
	I0328 00:00:40.194107 1091283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:40.194510 1091283 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:40.194729 1091283 main.go:141] libmachine: (ha-377576-m04) Calling .DriverName
	I0328 00:00:40.194915 1091283 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:00:40.194940 1091283 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHHostname
	I0328 00:00:40.197675 1091283 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:00:40.198122 1091283 main.go:141] libmachine: (ha-377576-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:c2:81", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:57:19 +0000 UTC Type:0 Mac:52:54:00:6a:c2:81 Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:ha-377576-m04 Clientid:01:52:54:00:6a:c2:81}
	I0328 00:00:40.198150 1091283 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined IP address 192.168.39.93 and MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:00:40.198370 1091283 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHPort
	I0328 00:00:40.198555 1091283 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHKeyPath
	I0328 00:00:40.198703 1091283 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHUsername
	I0328 00:00:40.198828 1091283 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m04/id_rsa Username:docker}
	I0328 00:00:40.282850 1091283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:00:40.299651 1091283 status.go:257] ha-377576-m04 status: &{Name:ha-377576-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-377576 status -v=7 --alsologtostderr: exit status 3 (4.506080887s)

                                                
                                                
-- stdout --
	ha-377576
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-377576-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-377576-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-377576-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 00:00:42.337913 1091389 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:00:42.338048 1091389 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:00:42.338057 1091389 out.go:304] Setting ErrFile to fd 2...
	I0328 00:00:42.338062 1091389 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:00:42.338277 1091389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 00:00:42.338467 1091389 out.go:298] Setting JSON to false
	I0328 00:00:42.338502 1091389 mustload.go:65] Loading cluster: ha-377576
	I0328 00:00:42.338632 1091389 notify.go:220] Checking for updates...
	I0328 00:00:42.338954 1091389 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:00:42.338972 1091389 status.go:255] checking status of ha-377576 ...
	I0328 00:00:42.339476 1091389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:42.339554 1091389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:42.359049 1091389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43647
	I0328 00:00:42.359537 1091389 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:42.360153 1091389 main.go:141] libmachine: Using API Version  1
	I0328 00:00:42.360176 1091389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:42.360633 1091389 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:42.360874 1091389 main.go:141] libmachine: (ha-377576) Calling .GetState
	I0328 00:00:42.362595 1091389 status.go:330] ha-377576 host status = "Running" (err=<nil>)
	I0328 00:00:42.362627 1091389 host.go:66] Checking if "ha-377576" exists ...
	I0328 00:00:42.363080 1091389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:42.363138 1091389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:42.379010 1091389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33791
	I0328 00:00:42.379606 1091389 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:42.380242 1091389 main.go:141] libmachine: Using API Version  1
	I0328 00:00:42.380279 1091389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:42.380669 1091389 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:42.380864 1091389 main.go:141] libmachine: (ha-377576) Calling .GetIP
	I0328 00:00:42.383626 1091389 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:00:42.384121 1091389 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:00:42.384161 1091389 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:00:42.384293 1091389 host.go:66] Checking if "ha-377576" exists ...
	I0328 00:00:42.384731 1091389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:42.384787 1091389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:42.401295 1091389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34035
	I0328 00:00:42.401860 1091389 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:42.402429 1091389 main.go:141] libmachine: Using API Version  1
	I0328 00:00:42.402453 1091389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:42.402830 1091389 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:42.403029 1091389 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0328 00:00:42.403250 1091389 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:00:42.403289 1091389 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:00:42.406211 1091389 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:00:42.406740 1091389 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:00:42.406781 1091389 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:00:42.406964 1091389 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0328 00:00:42.407160 1091389 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:00:42.407314 1091389 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0328 00:00:42.407483 1091389 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0328 00:00:42.494475 1091389 ssh_runner.go:195] Run: systemctl --version
	I0328 00:00:42.501363 1091389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:00:42.516510 1091389 kubeconfig.go:125] found "ha-377576" server: "https://192.168.39.254:8443"
	I0328 00:00:42.516541 1091389 api_server.go:166] Checking apiserver status ...
	I0328 00:00:42.516576 1091389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:00:42.534747 1091389 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup
	W0328 00:00:42.545774 1091389 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0328 00:00:42.545829 1091389 ssh_runner.go:195] Run: ls
	I0328 00:00:42.553293 1091389 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0328 00:00:42.561514 1091389 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0328 00:00:42.561546 1091389 status.go:422] ha-377576 apiserver status = Running (err=<nil>)
	I0328 00:00:42.561556 1091389 status.go:257] ha-377576 status: &{Name:ha-377576 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:00:42.561581 1091389 status.go:255] checking status of ha-377576-m02 ...
	I0328 00:00:42.561930 1091389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:42.561966 1091389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:42.577996 1091389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40055
	I0328 00:00:42.578569 1091389 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:42.579087 1091389 main.go:141] libmachine: Using API Version  1
	I0328 00:00:42.579113 1091389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:42.579450 1091389 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:42.579665 1091389 main.go:141] libmachine: (ha-377576-m02) Calling .GetState
	I0328 00:00:42.581364 1091389 status.go:330] ha-377576-m02 host status = "Running" (err=<nil>)
	I0328 00:00:42.581390 1091389 host.go:66] Checking if "ha-377576-m02" exists ...
	I0328 00:00:42.581807 1091389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:42.581858 1091389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:42.596990 1091389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33805
	I0328 00:00:42.597464 1091389 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:42.597945 1091389 main.go:141] libmachine: Using API Version  1
	I0328 00:00:42.597967 1091389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:42.598295 1091389 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:42.598563 1091389 main.go:141] libmachine: (ha-377576-m02) Calling .GetIP
	I0328 00:00:42.601893 1091389 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:00:42.602416 1091389 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0328 00:00:42.602446 1091389 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:00:42.602614 1091389 host.go:66] Checking if "ha-377576-m02" exists ...
	I0328 00:00:42.602932 1091389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:42.602975 1091389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:42.619973 1091389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37311
	I0328 00:00:42.620429 1091389 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:42.621003 1091389 main.go:141] libmachine: Using API Version  1
	I0328 00:00:42.621027 1091389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:42.621364 1091389 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:42.621595 1091389 main.go:141] libmachine: (ha-377576-m02) Calling .DriverName
	I0328 00:00:42.621819 1091389 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:00:42.621843 1091389 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	I0328 00:00:42.625079 1091389 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:00:42.625525 1091389 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0328 00:00:42.625556 1091389 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:00:42.625737 1091389 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHPort
	I0328 00:00:42.625914 1091389 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0328 00:00:42.626069 1091389 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHUsername
	I0328 00:00:42.626223 1091389 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/id_rsa Username:docker}
	W0328 00:00:42.986453 1091389 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.117:22: connect: no route to host
	I0328 00:00:42.986510 1091389 retry.go:31] will retry after 359.314552ms: dial tcp 192.168.39.117:22: connect: no route to host
	W0328 00:00:46.410629 1091389 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.117:22: connect: no route to host
	W0328 00:00:46.410756 1091389 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.117:22: connect: no route to host
	E0328 00:00:46.410783 1091389 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.117:22: connect: no route to host
	I0328 00:00:46.410804 1091389 status.go:257] ha-377576-m02 status: &{Name:ha-377576-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0328 00:00:46.410846 1091389 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.117:22: connect: no route to host
	I0328 00:00:46.410892 1091389 status.go:255] checking status of ha-377576-m03 ...
	I0328 00:00:46.411237 1091389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:46.411297 1091389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:46.428476 1091389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39373
	I0328 00:00:46.428944 1091389 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:46.429419 1091389 main.go:141] libmachine: Using API Version  1
	I0328 00:00:46.429441 1091389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:46.429840 1091389 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:46.430093 1091389 main.go:141] libmachine: (ha-377576-m03) Calling .GetState
	I0328 00:00:46.431834 1091389 status.go:330] ha-377576-m03 host status = "Running" (err=<nil>)
	I0328 00:00:46.431857 1091389 host.go:66] Checking if "ha-377576-m03" exists ...
	I0328 00:00:46.432255 1091389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:46.432298 1091389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:46.447500 1091389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39005
	I0328 00:00:46.447937 1091389 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:46.448473 1091389 main.go:141] libmachine: Using API Version  1
	I0328 00:00:46.448512 1091389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:46.448885 1091389 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:46.449137 1091389 main.go:141] libmachine: (ha-377576-m03) Calling .GetIP
	I0328 00:00:46.451994 1091389 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:00:46.452448 1091389 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0328 00:00:46.452480 1091389 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:00:46.452621 1091389 host.go:66] Checking if "ha-377576-m03" exists ...
	I0328 00:00:46.452918 1091389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:46.452955 1091389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:46.468961 1091389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42035
	I0328 00:00:46.469463 1091389 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:46.470134 1091389 main.go:141] libmachine: Using API Version  1
	I0328 00:00:46.470166 1091389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:46.470536 1091389 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:46.470792 1091389 main.go:141] libmachine: (ha-377576-m03) Calling .DriverName
	I0328 00:00:46.470997 1091389 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:00:46.471025 1091389 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	I0328 00:00:46.474713 1091389 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:00:46.475277 1091389 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0328 00:00:46.475319 1091389 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:00:46.475489 1091389 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHPort
	I0328 00:00:46.475712 1091389 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0328 00:00:46.475946 1091389 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHUsername
	I0328 00:00:46.476215 1091389 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/id_rsa Username:docker}
	I0328 00:00:46.562495 1091389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:00:46.582417 1091389 kubeconfig.go:125] found "ha-377576" server: "https://192.168.39.254:8443"
	I0328 00:00:46.582454 1091389 api_server.go:166] Checking apiserver status ...
	I0328 00:00:46.582495 1091389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:00:46.596989 1091389 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup
	W0328 00:00:46.606872 1091389 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0328 00:00:46.606934 1091389 ssh_runner.go:195] Run: ls
	I0328 00:00:46.611925 1091389 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0328 00:00:46.616716 1091389 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0328 00:00:46.616740 1091389 status.go:422] ha-377576-m03 apiserver status = Running (err=<nil>)
	I0328 00:00:46.616748 1091389 status.go:257] ha-377576-m03 status: &{Name:ha-377576-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:00:46.616764 1091389 status.go:255] checking status of ha-377576-m04 ...
	I0328 00:00:46.617069 1091389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:46.617107 1091389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:46.633380 1091389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44881
	I0328 00:00:46.633859 1091389 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:46.634366 1091389 main.go:141] libmachine: Using API Version  1
	I0328 00:00:46.634395 1091389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:46.634717 1091389 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:46.634921 1091389 main.go:141] libmachine: (ha-377576-m04) Calling .GetState
	I0328 00:00:46.636339 1091389 status.go:330] ha-377576-m04 host status = "Running" (err=<nil>)
	I0328 00:00:46.636356 1091389 host.go:66] Checking if "ha-377576-m04" exists ...
	I0328 00:00:46.636616 1091389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:46.636650 1091389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:46.651962 1091389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43797
	I0328 00:00:46.652355 1091389 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:46.652838 1091389 main.go:141] libmachine: Using API Version  1
	I0328 00:00:46.652859 1091389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:46.653148 1091389 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:46.653311 1091389 main.go:141] libmachine: (ha-377576-m04) Calling .GetIP
	I0328 00:00:46.656000 1091389 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:00:46.656442 1091389 main.go:141] libmachine: (ha-377576-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:c2:81", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:57:19 +0000 UTC Type:0 Mac:52:54:00:6a:c2:81 Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:ha-377576-m04 Clientid:01:52:54:00:6a:c2:81}
	I0328 00:00:46.656487 1091389 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined IP address 192.168.39.93 and MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:00:46.656662 1091389 host.go:66] Checking if "ha-377576-m04" exists ...
	I0328 00:00:46.656977 1091389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:46.657013 1091389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:46.672814 1091389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42941
	I0328 00:00:46.673212 1091389 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:46.673693 1091389 main.go:141] libmachine: Using API Version  1
	I0328 00:00:46.673716 1091389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:46.674054 1091389 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:46.674279 1091389 main.go:141] libmachine: (ha-377576-m04) Calling .DriverName
	I0328 00:00:46.674455 1091389 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:00:46.674477 1091389 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHHostname
	I0328 00:00:46.677392 1091389 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:00:46.677888 1091389 main.go:141] libmachine: (ha-377576-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:c2:81", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:57:19 +0000 UTC Type:0 Mac:52:54:00:6a:c2:81 Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:ha-377576-m04 Clientid:01:52:54:00:6a:c2:81}
	I0328 00:00:46.677918 1091389 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined IP address 192.168.39.93 and MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:00:46.678052 1091389 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHPort
	I0328 00:00:46.678221 1091389 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHKeyPath
	I0328 00:00:46.678429 1091389 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHUsername
	I0328 00:00:46.678584 1091389 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m04/id_rsa Username:docker}
	I0328 00:00:46.762164 1091389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:00:46.777927 1091389 status.go:257] ha-377576-m04 status: &{Name:ha-377576-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-377576 status -v=7 --alsologtostderr: exit status 3 (3.793360754s)

-- stdout --
	ha-377576
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-377576-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-377576-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-377576-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0328 00:00:50.093638 1091492 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:00:50.094260 1091492 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:00:50.094279 1091492 out.go:304] Setting ErrFile to fd 2...
	I0328 00:00:50.094287 1091492 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:00:50.094768 1091492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 00:00:50.095066 1091492 out.go:298] Setting JSON to false
	I0328 00:00:50.095110 1091492 mustload.go:65] Loading cluster: ha-377576
	I0328 00:00:50.095334 1091492 notify.go:220] Checking for updates...
	I0328 00:00:50.096016 1091492 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:00:50.096042 1091492 status.go:255] checking status of ha-377576 ...
	I0328 00:00:50.096669 1091492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:50.096747 1091492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:50.118140 1091492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39165
	I0328 00:00:50.118787 1091492 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:50.119416 1091492 main.go:141] libmachine: Using API Version  1
	I0328 00:00:50.119438 1091492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:50.119902 1091492 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:50.120142 1091492 main.go:141] libmachine: (ha-377576) Calling .GetState
	I0328 00:00:50.121850 1091492 status.go:330] ha-377576 host status = "Running" (err=<nil>)
	I0328 00:00:50.121871 1091492 host.go:66] Checking if "ha-377576" exists ...
	I0328 00:00:50.122177 1091492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:50.122217 1091492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:50.140935 1091492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35643
	I0328 00:00:50.141421 1091492 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:50.141992 1091492 main.go:141] libmachine: Using API Version  1
	I0328 00:00:50.142010 1091492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:50.142388 1091492 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:50.142560 1091492 main.go:141] libmachine: (ha-377576) Calling .GetIP
	I0328 00:00:50.145975 1091492 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:00:50.146488 1091492 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:00:50.146519 1091492 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:00:50.146711 1091492 host.go:66] Checking if "ha-377576" exists ...
	I0328 00:00:50.147003 1091492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:50.147046 1091492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:50.162680 1091492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46485
	I0328 00:00:50.163101 1091492 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:50.163598 1091492 main.go:141] libmachine: Using API Version  1
	I0328 00:00:50.163623 1091492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:50.163976 1091492 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:50.164180 1091492 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0328 00:00:50.164379 1091492 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:00:50.164434 1091492 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:00:50.167534 1091492 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:00:50.168035 1091492 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:00:50.168067 1091492 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:00:50.168230 1091492 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0328 00:00:50.168364 1091492 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:00:50.168463 1091492 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0328 00:00:50.168653 1091492 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0328 00:00:50.248155 1091492 ssh_runner.go:195] Run: systemctl --version
	I0328 00:00:50.256903 1091492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:00:50.276660 1091492 kubeconfig.go:125] found "ha-377576" server: "https://192.168.39.254:8443"
	I0328 00:00:50.276694 1091492 api_server.go:166] Checking apiserver status ...
	I0328 00:00:50.276731 1091492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:00:50.292521 1091492 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup
	W0328 00:00:50.303544 1091492 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0328 00:00:50.303607 1091492 ssh_runner.go:195] Run: ls
	I0328 00:00:50.308274 1091492 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0328 00:00:50.315264 1091492 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0328 00:00:50.315289 1091492 status.go:422] ha-377576 apiserver status = Running (err=<nil>)
	I0328 00:00:50.315299 1091492 status.go:257] ha-377576 status: &{Name:ha-377576 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:00:50.315316 1091492 status.go:255] checking status of ha-377576-m02 ...
	I0328 00:00:50.315604 1091492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:50.315638 1091492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:50.332978 1091492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37237
	I0328 00:00:50.333462 1091492 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:50.333923 1091492 main.go:141] libmachine: Using API Version  1
	I0328 00:00:50.333946 1091492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:50.334306 1091492 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:50.334545 1091492 main.go:141] libmachine: (ha-377576-m02) Calling .GetState
	I0328 00:00:50.335963 1091492 status.go:330] ha-377576-m02 host status = "Running" (err=<nil>)
	I0328 00:00:50.335985 1091492 host.go:66] Checking if "ha-377576-m02" exists ...
	I0328 00:00:50.336392 1091492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:50.336449 1091492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:50.352243 1091492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32787
	I0328 00:00:50.352693 1091492 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:50.353125 1091492 main.go:141] libmachine: Using API Version  1
	I0328 00:00:50.353146 1091492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:50.353489 1091492 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:50.353665 1091492 main.go:141] libmachine: (ha-377576-m02) Calling .GetIP
	I0328 00:00:50.356455 1091492 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:00:50.356892 1091492 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0328 00:00:50.356933 1091492 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:00:50.357072 1091492 host.go:66] Checking if "ha-377576-m02" exists ...
	I0328 00:00:50.357390 1091492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:50.357434 1091492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:50.374819 1091492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33197
	I0328 00:00:50.375541 1091492 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:50.376216 1091492 main.go:141] libmachine: Using API Version  1
	I0328 00:00:50.376246 1091492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:50.376634 1091492 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:50.376866 1091492 main.go:141] libmachine: (ha-377576-m02) Calling .DriverName
	I0328 00:00:50.377123 1091492 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:00:50.377153 1091492 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	I0328 00:00:50.380387 1091492 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:00:50.380838 1091492 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0328 00:00:50.380869 1091492 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:00:50.381003 1091492 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHPort
	I0328 00:00:50.381155 1091492 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0328 00:00:50.381378 1091492 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHUsername
	I0328 00:00:50.381542 1091492 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/id_rsa Username:docker}
	W0328 00:00:53.450534 1091492 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.117:22: connect: no route to host
	W0328 00:00:53.450690 1091492 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.117:22: connect: no route to host
	E0328 00:00:53.450724 1091492 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.117:22: connect: no route to host
	I0328 00:00:53.450738 1091492 status.go:257] ha-377576-m02 status: &{Name:ha-377576-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0328 00:00:53.450765 1091492 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.117:22: connect: no route to host
	I0328 00:00:53.450779 1091492 status.go:255] checking status of ha-377576-m03 ...
	I0328 00:00:53.451230 1091492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:53.451275 1091492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:53.467319 1091492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37629
	I0328 00:00:53.467789 1091492 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:53.468292 1091492 main.go:141] libmachine: Using API Version  1
	I0328 00:00:53.468319 1091492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:53.468760 1091492 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:53.468986 1091492 main.go:141] libmachine: (ha-377576-m03) Calling .GetState
	I0328 00:00:53.470752 1091492 status.go:330] ha-377576-m03 host status = "Running" (err=<nil>)
	I0328 00:00:53.470776 1091492 host.go:66] Checking if "ha-377576-m03" exists ...
	I0328 00:00:53.471156 1091492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:53.471208 1091492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:53.487577 1091492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38585
	I0328 00:00:53.488071 1091492 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:53.488571 1091492 main.go:141] libmachine: Using API Version  1
	I0328 00:00:53.488592 1091492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:53.488931 1091492 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:53.489183 1091492 main.go:141] libmachine: (ha-377576-m03) Calling .GetIP
	I0328 00:00:53.492148 1091492 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:00:53.492680 1091492 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0328 00:00:53.492708 1091492 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:00:53.492895 1091492 host.go:66] Checking if "ha-377576-m03" exists ...
	I0328 00:00:53.493198 1091492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:53.493222 1091492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:53.509057 1091492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39489
	I0328 00:00:53.509509 1091492 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:53.510063 1091492 main.go:141] libmachine: Using API Version  1
	I0328 00:00:53.510088 1091492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:53.510480 1091492 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:53.510700 1091492 main.go:141] libmachine: (ha-377576-m03) Calling .DriverName
	I0328 00:00:53.510879 1091492 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:00:53.510897 1091492 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	I0328 00:00:53.513447 1091492 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:00:53.513848 1091492 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0328 00:00:53.513882 1091492 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:00:53.514041 1091492 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHPort
	I0328 00:00:53.514213 1091492 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0328 00:00:53.514384 1091492 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHUsername
	I0328 00:00:53.514530 1091492 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/id_rsa Username:docker}
	I0328 00:00:53.604622 1091492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:00:53.622030 1091492 kubeconfig.go:125] found "ha-377576" server: "https://192.168.39.254:8443"
	I0328 00:00:53.622065 1091492 api_server.go:166] Checking apiserver status ...
	I0328 00:00:53.622115 1091492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:00:53.640521 1091492 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup
	W0328 00:00:53.653179 1091492 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0328 00:00:53.653249 1091492 ssh_runner.go:195] Run: ls
	I0328 00:00:53.658382 1091492 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0328 00:00:53.663891 1091492 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0328 00:00:53.663922 1091492 status.go:422] ha-377576-m03 apiserver status = Running (err=<nil>)
	I0328 00:00:53.663932 1091492 status.go:257] ha-377576-m03 status: &{Name:ha-377576-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:00:53.663948 1091492 status.go:255] checking status of ha-377576-m04 ...
	I0328 00:00:53.664244 1091492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:53.664269 1091492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:53.680286 1091492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43149
	I0328 00:00:53.680788 1091492 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:53.681388 1091492 main.go:141] libmachine: Using API Version  1
	I0328 00:00:53.681422 1091492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:53.681740 1091492 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:53.681942 1091492 main.go:141] libmachine: (ha-377576-m04) Calling .GetState
	I0328 00:00:53.683336 1091492 status.go:330] ha-377576-m04 host status = "Running" (err=<nil>)
	I0328 00:00:53.683360 1091492 host.go:66] Checking if "ha-377576-m04" exists ...
	I0328 00:00:53.683638 1091492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:53.683663 1091492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:53.699880 1091492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34745
	I0328 00:00:53.700503 1091492 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:53.701122 1091492 main.go:141] libmachine: Using API Version  1
	I0328 00:00:53.701153 1091492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:53.701547 1091492 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:53.701752 1091492 main.go:141] libmachine: (ha-377576-m04) Calling .GetIP
	I0328 00:00:53.705362 1091492 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:00:53.705907 1091492 main.go:141] libmachine: (ha-377576-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:c2:81", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:57:19 +0000 UTC Type:0 Mac:52:54:00:6a:c2:81 Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:ha-377576-m04 Clientid:01:52:54:00:6a:c2:81}
	I0328 00:00:53.705940 1091492 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined IP address 192.168.39.93 and MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:00:53.706165 1091492 host.go:66] Checking if "ha-377576-m04" exists ...
	I0328 00:00:53.706550 1091492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:53.706605 1091492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:53.722111 1091492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45955
	I0328 00:00:53.722627 1091492 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:53.723111 1091492 main.go:141] libmachine: Using API Version  1
	I0328 00:00:53.723143 1091492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:53.723533 1091492 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:53.723735 1091492 main.go:141] libmachine: (ha-377576-m04) Calling .DriverName
	I0328 00:00:53.723917 1091492 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:00:53.723939 1091492 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHHostname
	I0328 00:00:53.726898 1091492 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:00:53.727269 1091492 main.go:141] libmachine: (ha-377576-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:c2:81", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:57:19 +0000 UTC Type:0 Mac:52:54:00:6a:c2:81 Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:ha-377576-m04 Clientid:01:52:54:00:6a:c2:81}
	I0328 00:00:53.727296 1091492 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined IP address 192.168.39.93 and MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:00:53.727636 1091492 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHPort
	I0328 00:00:53.727844 1091492 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHKeyPath
	I0328 00:00:53.728200 1091492 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHUsername
	I0328 00:00:53.728361 1091492 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m04/id_rsa Username:docker}
	I0328 00:00:53.809995 1091492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:00:53.824849 1091492 status.go:257] ha-377576-m04 status: &{Name:ha-377576-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-377576 status -v=7 --alsologtostderr: exit status 3 (3.782286008s)

-- stdout --
	ha-377576
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-377576-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-377576-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-377576-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0328 00:00:56.950317 1091599 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:00:56.950861 1091599 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:00:56.950883 1091599 out.go:304] Setting ErrFile to fd 2...
	I0328 00:00:56.950890 1091599 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:00:56.951366 1091599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 00:00:56.951962 1091599 out.go:298] Setting JSON to false
	I0328 00:00:56.952145 1091599 mustload.go:65] Loading cluster: ha-377576
	I0328 00:00:56.952171 1091599 notify.go:220] Checking for updates...
	I0328 00:00:56.952824 1091599 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:00:56.952848 1091599 status.go:255] checking status of ha-377576 ...
	I0328 00:00:56.953416 1091599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:56.953503 1091599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:56.970276 1091599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36903
	I0328 00:00:56.970796 1091599 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:56.971399 1091599 main.go:141] libmachine: Using API Version  1
	I0328 00:00:56.971423 1091599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:56.971878 1091599 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:56.972214 1091599 main.go:141] libmachine: (ha-377576) Calling .GetState
	I0328 00:00:56.974211 1091599 status.go:330] ha-377576 host status = "Running" (err=<nil>)
	I0328 00:00:56.974283 1091599 host.go:66] Checking if "ha-377576" exists ...
	I0328 00:00:56.974720 1091599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:56.974781 1091599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:56.990338 1091599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43541
	I0328 00:00:56.990858 1091599 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:56.991317 1091599 main.go:141] libmachine: Using API Version  1
	I0328 00:00:56.991338 1091599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:56.991828 1091599 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:56.992070 1091599 main.go:141] libmachine: (ha-377576) Calling .GetIP
	I0328 00:00:56.995396 1091599 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:00:56.995862 1091599 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:00:56.995897 1091599 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:00:56.996034 1091599 host.go:66] Checking if "ha-377576" exists ...
	I0328 00:00:56.996376 1091599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:56.996422 1091599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:57.011361 1091599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35853
	I0328 00:00:57.011828 1091599 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:57.012377 1091599 main.go:141] libmachine: Using API Version  1
	I0328 00:00:57.012404 1091599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:57.012749 1091599 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:57.012945 1091599 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0328 00:00:57.013187 1091599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:00:57.013230 1091599 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:00:57.015998 1091599 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:00:57.016521 1091599 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:00:57.016558 1091599 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:00:57.016650 1091599 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0328 00:00:57.016816 1091599 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:00:57.016950 1091599 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0328 00:00:57.017111 1091599 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0328 00:00:57.102743 1091599 ssh_runner.go:195] Run: systemctl --version
	I0328 00:00:57.109179 1091599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:00:57.125841 1091599 kubeconfig.go:125] found "ha-377576" server: "https://192.168.39.254:8443"
	I0328 00:00:57.125870 1091599 api_server.go:166] Checking apiserver status ...
	I0328 00:00:57.125904 1091599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:00:57.140640 1091599 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup
	W0328 00:00:57.153005 1091599 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0328 00:00:57.153063 1091599 ssh_runner.go:195] Run: ls
	I0328 00:00:57.158086 1091599 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0328 00:00:57.163977 1091599 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0328 00:00:57.164005 1091599 status.go:422] ha-377576 apiserver status = Running (err=<nil>)
	I0328 00:00:57.164028 1091599 status.go:257] ha-377576 status: &{Name:ha-377576 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:00:57.164054 1091599 status.go:255] checking status of ha-377576-m02 ...
	I0328 00:00:57.164388 1091599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:57.164436 1091599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:57.180860 1091599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42587
	I0328 00:00:57.181418 1091599 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:57.182005 1091599 main.go:141] libmachine: Using API Version  1
	I0328 00:00:57.182029 1091599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:57.182428 1091599 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:57.182659 1091599 main.go:141] libmachine: (ha-377576-m02) Calling .GetState
	I0328 00:00:57.184559 1091599 status.go:330] ha-377576-m02 host status = "Running" (err=<nil>)
	I0328 00:00:57.184578 1091599 host.go:66] Checking if "ha-377576-m02" exists ...
	I0328 00:00:57.184857 1091599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:57.184898 1091599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:57.201804 1091599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33155
	I0328 00:00:57.202279 1091599 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:57.202865 1091599 main.go:141] libmachine: Using API Version  1
	I0328 00:00:57.202889 1091599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:57.203211 1091599 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:57.203416 1091599 main.go:141] libmachine: (ha-377576-m02) Calling .GetIP
	I0328 00:00:57.206806 1091599 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:00:57.207289 1091599 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0328 00:00:57.207318 1091599 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:00:57.207613 1091599 host.go:66] Checking if "ha-377576-m02" exists ...
	I0328 00:00:57.207961 1091599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:00:57.208012 1091599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:00:57.223269 1091599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38801
	I0328 00:00:57.223721 1091599 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:00:57.224214 1091599 main.go:141] libmachine: Using API Version  1
	I0328 00:00:57.224236 1091599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:00:57.224625 1091599 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:00:57.224853 1091599 main.go:141] libmachine: (ha-377576-m02) Calling .DriverName
	I0328 00:00:57.225070 1091599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:00:57.225092 1091599 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	I0328 00:00:57.227969 1091599 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:00:57.228403 1091599 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0328 00:00:57.228438 1091599 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:00:57.228555 1091599 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHPort
	I0328 00:00:57.228731 1091599 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0328 00:00:57.228910 1091599 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHUsername
	I0328 00:00:57.229065 1091599 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/id_rsa Username:docker}
	W0328 00:01:00.298564 1091599 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.117:22: connect: no route to host
	W0328 00:01:00.298695 1091599 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.117:22: connect: no route to host
	E0328 00:01:00.298717 1091599 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.117:22: connect: no route to host
	I0328 00:01:00.298725 1091599 status.go:257] ha-377576-m02 status: &{Name:ha-377576-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0328 00:01:00.298743 1091599 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.117:22: connect: no route to host
	I0328 00:01:00.298751 1091599 status.go:255] checking status of ha-377576-m03 ...
	I0328 00:01:00.299068 1091599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:00.299122 1091599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:00.315387 1091599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43695
	I0328 00:01:00.315866 1091599 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:00.316405 1091599 main.go:141] libmachine: Using API Version  1
	I0328 00:01:00.316434 1091599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:00.316816 1091599 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:00.317047 1091599 main.go:141] libmachine: (ha-377576-m03) Calling .GetState
	I0328 00:01:00.318629 1091599 status.go:330] ha-377576-m03 host status = "Running" (err=<nil>)
	I0328 00:01:00.318649 1091599 host.go:66] Checking if "ha-377576-m03" exists ...
	I0328 00:01:00.318940 1091599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:00.318977 1091599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:00.334148 1091599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41137
	I0328 00:01:00.334612 1091599 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:00.335155 1091599 main.go:141] libmachine: Using API Version  1
	I0328 00:01:00.335193 1091599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:00.335521 1091599 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:00.335693 1091599 main.go:141] libmachine: (ha-377576-m03) Calling .GetIP
	I0328 00:01:00.338754 1091599 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:01:00.339249 1091599 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0328 00:01:00.339278 1091599 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:01:00.339418 1091599 host.go:66] Checking if "ha-377576-m03" exists ...
	I0328 00:01:00.339723 1091599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:00.339772 1091599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:00.356269 1091599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35855
	I0328 00:01:00.356778 1091599 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:00.357415 1091599 main.go:141] libmachine: Using API Version  1
	I0328 00:01:00.357446 1091599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:00.357829 1091599 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:00.358125 1091599 main.go:141] libmachine: (ha-377576-m03) Calling .DriverName
	I0328 00:01:00.358373 1091599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:01:00.358402 1091599 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	I0328 00:01:00.361555 1091599 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:01:00.362059 1091599 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0328 00:01:00.362082 1091599 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:01:00.362247 1091599 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHPort
	I0328 00:01:00.362460 1091599 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0328 00:01:00.362655 1091599 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHUsername
	I0328 00:01:00.362847 1091599 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/id_rsa Username:docker}
	I0328 00:01:00.451276 1091599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:01:00.471230 1091599 kubeconfig.go:125] found "ha-377576" server: "https://192.168.39.254:8443"
	I0328 00:01:00.471274 1091599 api_server.go:166] Checking apiserver status ...
	I0328 00:01:00.471326 1091599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:01:00.486688 1091599 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup
	W0328 00:01:00.498054 1091599 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0328 00:01:00.498131 1091599 ssh_runner.go:195] Run: ls
	I0328 00:01:00.503013 1091599 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0328 00:01:00.507449 1091599 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0328 00:01:00.507475 1091599 status.go:422] ha-377576-m03 apiserver status = Running (err=<nil>)
	I0328 00:01:00.507484 1091599 status.go:257] ha-377576-m03 status: &{Name:ha-377576-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:01:00.507501 1091599 status.go:255] checking status of ha-377576-m04 ...
	I0328 00:01:00.507780 1091599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:00.507815 1091599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:00.524051 1091599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35699
	I0328 00:01:00.524498 1091599 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:00.524986 1091599 main.go:141] libmachine: Using API Version  1
	I0328 00:01:00.525018 1091599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:00.525344 1091599 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:00.525536 1091599 main.go:141] libmachine: (ha-377576-m04) Calling .GetState
	I0328 00:01:00.527005 1091599 status.go:330] ha-377576-m04 host status = "Running" (err=<nil>)
	I0328 00:01:00.527030 1091599 host.go:66] Checking if "ha-377576-m04" exists ...
	I0328 00:01:00.527318 1091599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:00.527357 1091599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:00.542674 1091599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37035
	I0328 00:01:00.543154 1091599 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:00.543637 1091599 main.go:141] libmachine: Using API Version  1
	I0328 00:01:00.543661 1091599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:00.543952 1091599 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:00.544194 1091599 main.go:141] libmachine: (ha-377576-m04) Calling .GetIP
	I0328 00:01:00.547834 1091599 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:01:00.548289 1091599 main.go:141] libmachine: (ha-377576-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:c2:81", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:57:19 +0000 UTC Type:0 Mac:52:54:00:6a:c2:81 Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:ha-377576-m04 Clientid:01:52:54:00:6a:c2:81}
	I0328 00:01:00.548328 1091599 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined IP address 192.168.39.93 and MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:01:00.548410 1091599 host.go:66] Checking if "ha-377576-m04" exists ...
	I0328 00:01:00.548826 1091599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:00.548871 1091599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:00.564340 1091599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44715
	I0328 00:01:00.564796 1091599 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:00.565256 1091599 main.go:141] libmachine: Using API Version  1
	I0328 00:01:00.565276 1091599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:00.565631 1091599 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:00.565883 1091599 main.go:141] libmachine: (ha-377576-m04) Calling .DriverName
	I0328 00:01:00.566121 1091599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:01:00.566147 1091599 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHHostname
	I0328 00:01:00.569312 1091599 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:01:00.569838 1091599 main.go:141] libmachine: (ha-377576-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:c2:81", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:57:19 +0000 UTC Type:0 Mac:52:54:00:6a:c2:81 Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:ha-377576-m04 Clientid:01:52:54:00:6a:c2:81}
	I0328 00:01:00.569860 1091599 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined IP address 192.168.39.93 and MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:01:00.570017 1091599 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHPort
	I0328 00:01:00.570215 1091599 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHKeyPath
	I0328 00:01:00.570413 1091599 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHUsername
	I0328 00:01:00.570564 1091599 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m04/id_rsa Username:docker}
	I0328 00:01:00.655342 1091599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:01:00.671574 1091599 status.go:257] ha-377576-m04 status: &{Name:ha-377576-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
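The stderr trace above shows how each node is probed: the kvm2 plugin resolves the node's IP from its DHCP lease, an SSH session is opened with the per-machine key, and the check runs `df -h /var | awk 'NR==2{print $5}'` and `sudo systemctl is-active --quiet service kubelet`; when the dial to 192.168.39.117:22 fails with `no route to host`, ha-377576-m02 is reported as Host:Error / Kubelet:Nonexistent. Below is a minimal, self-contained sketch of that kind of per-node probe, assuming golang.org/x/crypto/ssh and placeholder host/key values; it is illustrative only, not minikube's actual status code.

package main

import (
	"fmt"
	"log"
	"os"
	"strings"

	"golang.org/x/crypto/ssh"
)

// probeNode opens an SSH session to a node and runs the same two checks the
// status log shows: root-volume usage and whether the kubelet unit is active.
func probeNode(addr, keyPath string) (diskUsed string, kubeletActive bool, err error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", false, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", false, err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
	}
	// This dial is what fails with "no route to host" once the VM is gone.
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", false, err
	}
	defer client.Close()

	// An SSH session runs exactly one command, so each check gets its own session.
	sess, err := client.NewSession()
	if err != nil {
		return "", false, err
	}
	out, err := sess.Output(`df -h /var | awk 'NR==2{print $5}'`)
	sess.Close()
	if err != nil {
		return "", false, err
	}
	diskUsed = strings.TrimSpace(string(out))

	sess2, err := client.NewSession()
	if err != nil {
		return diskUsed, false, err
	}
	defer sess2.Close()
	// systemctl exits 0 only when the unit is active; any other exit surfaces as an error here.
	kubeletActive = sess2.Run("sudo systemctl is-active --quiet service kubelet") == nil
	return diskUsed, kubeletActive, nil
}

func main() {
	// Placeholder address and key path; in the log these come from the libmachine DHCP lease lookup.
	used, active, err := probeNode("192.168.39.117:22", "/path/to/id_rsa")
	if err != nil {
		log.Fatalf("node unreachable: %v", err)
	}
	fmt.Printf("/var used: %s, kubelet active: %v\n", used, active)
}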
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-377576 status -v=7 --alsologtostderr: exit status 3 (3.769740964s)

                                                
                                                
-- stdout --
	ha-377576
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-377576-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-377576-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-377576-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 00:01:03.486956 1091694 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:01:03.487233 1091694 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:01:03.487243 1091694 out.go:304] Setting ErrFile to fd 2...
	I0328 00:01:03.487247 1091694 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:01:03.487452 1091694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 00:01:03.487632 1091694 out.go:298] Setting JSON to false
	I0328 00:01:03.487666 1091694 mustload.go:65] Loading cluster: ha-377576
	I0328 00:01:03.487734 1091694 notify.go:220] Checking for updates...
	I0328 00:01:03.488152 1091694 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:01:03.488182 1091694 status.go:255] checking status of ha-377576 ...
	I0328 00:01:03.488725 1091694 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:03.488814 1091694 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:03.508517 1091694 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46175
	I0328 00:01:03.509141 1091694 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:03.509823 1091694 main.go:141] libmachine: Using API Version  1
	I0328 00:01:03.509862 1091694 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:03.510318 1091694 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:03.510576 1091694 main.go:141] libmachine: (ha-377576) Calling .GetState
	I0328 00:01:03.512385 1091694 status.go:330] ha-377576 host status = "Running" (err=<nil>)
	I0328 00:01:03.512407 1091694 host.go:66] Checking if "ha-377576" exists ...
	I0328 00:01:03.512824 1091694 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:03.512880 1091694 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:03.528241 1091694 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35285
	I0328 00:01:03.528691 1091694 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:03.529206 1091694 main.go:141] libmachine: Using API Version  1
	I0328 00:01:03.529251 1091694 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:03.529664 1091694 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:03.529918 1091694 main.go:141] libmachine: (ha-377576) Calling .GetIP
	I0328 00:01:03.532868 1091694 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:01:03.533306 1091694 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:01:03.533344 1091694 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:01:03.533572 1091694 host.go:66] Checking if "ha-377576" exists ...
	I0328 00:01:03.534012 1091694 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:03.534074 1091694 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:03.550803 1091694 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34905
	I0328 00:01:03.551297 1091694 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:03.551846 1091694 main.go:141] libmachine: Using API Version  1
	I0328 00:01:03.551881 1091694 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:03.552288 1091694 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:03.552528 1091694 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0328 00:01:03.552766 1091694 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:01:03.552794 1091694 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:01:03.556083 1091694 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:01:03.556583 1091694 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:01:03.556618 1091694 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:01:03.556735 1091694 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0328 00:01:03.556907 1091694 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:01:03.557046 1091694 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0328 00:01:03.557190 1091694 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0328 00:01:03.634462 1091694 ssh_runner.go:195] Run: systemctl --version
	I0328 00:01:03.641240 1091694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:01:03.660226 1091694 kubeconfig.go:125] found "ha-377576" server: "https://192.168.39.254:8443"
	I0328 00:01:03.660260 1091694 api_server.go:166] Checking apiserver status ...
	I0328 00:01:03.660303 1091694 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:01:03.676336 1091694 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup
	W0328 00:01:03.691731 1091694 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0328 00:01:03.691799 1091694 ssh_runner.go:195] Run: ls
	I0328 00:01:03.697568 1091694 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0328 00:01:03.703466 1091694 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0328 00:01:03.703503 1091694 status.go:422] ha-377576 apiserver status = Running (err=<nil>)
	I0328 00:01:03.703517 1091694 status.go:257] ha-377576 status: &{Name:ha-377576 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:01:03.703536 1091694 status.go:255] checking status of ha-377576-m02 ...
	I0328 00:01:03.703989 1091694 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:03.704030 1091694 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:03.720042 1091694 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37259
	I0328 00:01:03.720522 1091694 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:03.721138 1091694 main.go:141] libmachine: Using API Version  1
	I0328 00:01:03.721162 1091694 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:03.721566 1091694 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:03.721792 1091694 main.go:141] libmachine: (ha-377576-m02) Calling .GetState
	I0328 00:01:03.723465 1091694 status.go:330] ha-377576-m02 host status = "Running" (err=<nil>)
	I0328 00:01:03.723491 1091694 host.go:66] Checking if "ha-377576-m02" exists ...
	I0328 00:01:03.723824 1091694 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:03.723875 1091694 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:03.741181 1091694 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42115
	I0328 00:01:03.741879 1091694 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:03.742562 1091694 main.go:141] libmachine: Using API Version  1
	I0328 00:01:03.742589 1091694 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:03.742986 1091694 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:03.743227 1091694 main.go:141] libmachine: (ha-377576-m02) Calling .GetIP
	I0328 00:01:03.746496 1091694 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:01:03.747241 1091694 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0328 00:01:03.747283 1091694 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:01:03.747367 1091694 host.go:66] Checking if "ha-377576-m02" exists ...
	I0328 00:01:03.747705 1091694 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:03.747761 1091694 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:03.763595 1091694 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41401
	I0328 00:01:03.764150 1091694 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:03.764630 1091694 main.go:141] libmachine: Using API Version  1
	I0328 00:01:03.764652 1091694 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:03.765052 1091694 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:03.765249 1091694 main.go:141] libmachine: (ha-377576-m02) Calling .DriverName
	I0328 00:01:03.765446 1091694 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:01:03.765477 1091694 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	I0328 00:01:03.768179 1091694 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:01:03.768721 1091694 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0328 00:01:03.768741 1091694 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:01:03.768872 1091694 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHPort
	I0328 00:01:03.769041 1091694 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0328 00:01:03.769177 1091694 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHUsername
	I0328 00:01:03.769345 1091694 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/id_rsa Username:docker}
	W0328 00:01:06.826545 1091694 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.117:22: connect: no route to host
	W0328 00:01:06.826686 1091694 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.117:22: connect: no route to host
	E0328 00:01:06.826723 1091694 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.117:22: connect: no route to host
	I0328 00:01:06.826737 1091694 status.go:257] ha-377576-m02 status: &{Name:ha-377576-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0328 00:01:06.826765 1091694 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.117:22: connect: no route to host
	I0328 00:01:06.826778 1091694 status.go:255] checking status of ha-377576-m03 ...
	I0328 00:01:06.827299 1091694 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:06.827369 1091694 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:06.843966 1091694 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46871
	I0328 00:01:06.844465 1091694 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:06.845034 1091694 main.go:141] libmachine: Using API Version  1
	I0328 00:01:06.845067 1091694 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:06.845538 1091694 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:06.845810 1091694 main.go:141] libmachine: (ha-377576-m03) Calling .GetState
	I0328 00:01:06.847704 1091694 status.go:330] ha-377576-m03 host status = "Running" (err=<nil>)
	I0328 00:01:06.847725 1091694 host.go:66] Checking if "ha-377576-m03" exists ...
	I0328 00:01:06.848003 1091694 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:06.848041 1091694 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:06.864771 1091694 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42435
	I0328 00:01:06.865229 1091694 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:06.865801 1091694 main.go:141] libmachine: Using API Version  1
	I0328 00:01:06.865827 1091694 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:06.866156 1091694 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:06.866419 1091694 main.go:141] libmachine: (ha-377576-m03) Calling .GetIP
	I0328 00:01:06.869553 1091694 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:01:06.870038 1091694 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0328 00:01:06.870066 1091694 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:01:06.870198 1091694 host.go:66] Checking if "ha-377576-m03" exists ...
	I0328 00:01:06.870521 1091694 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:06.870570 1091694 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:06.885859 1091694 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42413
	I0328 00:01:06.886323 1091694 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:06.886788 1091694 main.go:141] libmachine: Using API Version  1
	I0328 00:01:06.886811 1091694 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:06.887117 1091694 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:06.887296 1091694 main.go:141] libmachine: (ha-377576-m03) Calling .DriverName
	I0328 00:01:06.887493 1091694 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:01:06.887517 1091694 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	I0328 00:01:06.890832 1091694 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:01:06.891277 1091694 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0328 00:01:06.891300 1091694 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:01:06.891451 1091694 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHPort
	I0328 00:01:06.891631 1091694 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0328 00:01:06.891800 1091694 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHUsername
	I0328 00:01:06.891956 1091694 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/id_rsa Username:docker}
	I0328 00:01:06.979467 1091694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:01:06.996392 1091694 kubeconfig.go:125] found "ha-377576" server: "https://192.168.39.254:8443"
	I0328 00:01:06.996421 1091694 api_server.go:166] Checking apiserver status ...
	I0328 00:01:06.996454 1091694 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:01:07.011591 1091694 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup
	W0328 00:01:07.022078 1091694 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0328 00:01:07.022128 1091694 ssh_runner.go:195] Run: ls
	I0328 00:01:07.026428 1091694 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0328 00:01:07.030783 1091694 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0328 00:01:07.030805 1091694 status.go:422] ha-377576-m03 apiserver status = Running (err=<nil>)
	I0328 00:01:07.030814 1091694 status.go:257] ha-377576-m03 status: &{Name:ha-377576-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:01:07.030830 1091694 status.go:255] checking status of ha-377576-m04 ...
	I0328 00:01:07.031165 1091694 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:07.031211 1091694 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:07.046751 1091694 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38181
	I0328 00:01:07.047192 1091694 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:07.047653 1091694 main.go:141] libmachine: Using API Version  1
	I0328 00:01:07.047691 1091694 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:07.048080 1091694 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:07.048330 1091694 main.go:141] libmachine: (ha-377576-m04) Calling .GetState
	I0328 00:01:07.050071 1091694 status.go:330] ha-377576-m04 host status = "Running" (err=<nil>)
	I0328 00:01:07.050088 1091694 host.go:66] Checking if "ha-377576-m04" exists ...
	I0328 00:01:07.050392 1091694 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:07.050432 1091694 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:07.065387 1091694 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45841
	I0328 00:01:07.065848 1091694 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:07.066307 1091694 main.go:141] libmachine: Using API Version  1
	I0328 00:01:07.066333 1091694 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:07.066715 1091694 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:07.066976 1091694 main.go:141] libmachine: (ha-377576-m04) Calling .GetIP
	I0328 00:01:07.070323 1091694 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:01:07.070927 1091694 main.go:141] libmachine: (ha-377576-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:c2:81", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:57:19 +0000 UTC Type:0 Mac:52:54:00:6a:c2:81 Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:ha-377576-m04 Clientid:01:52:54:00:6a:c2:81}
	I0328 00:01:07.070966 1091694 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined IP address 192.168.39.93 and MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:01:07.071175 1091694 host.go:66] Checking if "ha-377576-m04" exists ...
	I0328 00:01:07.071512 1091694 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:07.071558 1091694 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:07.087325 1091694 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45111
	I0328 00:01:07.087777 1091694 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:07.088290 1091694 main.go:141] libmachine: Using API Version  1
	I0328 00:01:07.088310 1091694 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:07.088630 1091694 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:07.088854 1091694 main.go:141] libmachine: (ha-377576-m04) Calling .DriverName
	I0328 00:01:07.089041 1091694 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:01:07.089064 1091694 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHHostname
	I0328 00:01:07.091769 1091694 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:01:07.092257 1091694 main.go:141] libmachine: (ha-377576-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:c2:81", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:57:19 +0000 UTC Type:0 Mac:52:54:00:6a:c2:81 Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:ha-377576-m04 Clientid:01:52:54:00:6a:c2:81}
	I0328 00:01:07.092289 1091694 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined IP address 192.168.39.93 and MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:01:07.092404 1091694 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHPort
	I0328 00:01:07.092608 1091694 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHKeyPath
	I0328 00:01:07.092774 1091694 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHUsername
	I0328 00:01:07.092919 1091694 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m04/id_rsa Username:docker}
	I0328 00:01:07.173858 1091694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:01:07.189312 1091694 status.go:257] ha-377576-m04 status: &{Name:ha-377576-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
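Across the repeated status invocations the exit code tracks m02's state: exit 3 while the domain is still registered but refusing SSH (reported "host: Error"), then exit 7 once libvirt reports it shut off ("host: Stopped"). The sketch below shows the kind of polling loop a test could use to wait for that transition by parsing the per-node blocks of the stdout shown above; binary path and profile name are taken from this run, and the helper names are hypothetical rather than the actual ha_test.go code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForStopped polls "minikube status" until the named node's block reports
// "host: Stopped" or the deadline passes. The command exits non-zero whenever
// any node is not Running, so the output has to be inspected, not just the code.
func waitForStopped(binary, profile, node string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, _ := exec.Command(binary, "-p", profile, "status", "-v=7", "--alsologtostderr").CombinedOutput()
		if nodeReports(string(out), node, "host: Stopped") {
			return nil
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("%s did not reach host: Stopped within %s", node, timeout)
}

// nodeReports scans the block that starts at the node's name line and ends at
// the next blank line, looking for the wanted "key: value" line.
func nodeReports(out, node, want string) bool {
	inNode := false
	for _, line := range strings.Split(out, "\n") {
		line = strings.TrimSpace(line)
		if line == node {
			inNode = true
			continue
		}
		if inNode {
			if line == "" { // blank line closes this node's block
				return false
			}
			if line == want {
				return true
			}
		}
	}
	return false
}

func main() {
	err := waitForStopped("out/minikube-linux-amd64", "ha-377576", "ha-377576-m02", 2*time.Minute)
	fmt.Println("result:", err)
}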
E0328 00:01:14.355713 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-377576 status -v=7 --alsologtostderr: exit status 7 (687.582487ms)

                                                
                                                
-- stdout --
	ha-377576
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-377576-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-377576-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-377576-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 00:01:15.611124 1091821 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:01:15.611285 1091821 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:01:15.611295 1091821 out.go:304] Setting ErrFile to fd 2...
	I0328 00:01:15.611302 1091821 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:01:15.611514 1091821 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 00:01:15.611719 1091821 out.go:298] Setting JSON to false
	I0328 00:01:15.611757 1091821 mustload.go:65] Loading cluster: ha-377576
	I0328 00:01:15.611885 1091821 notify.go:220] Checking for updates...
	I0328 00:01:15.612253 1091821 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:01:15.612280 1091821 status.go:255] checking status of ha-377576 ...
	I0328 00:01:15.612746 1091821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:15.612823 1091821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:15.629412 1091821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38185
	I0328 00:01:15.629920 1091821 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:15.630535 1091821 main.go:141] libmachine: Using API Version  1
	I0328 00:01:15.630564 1091821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:15.630998 1091821 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:15.631214 1091821 main.go:141] libmachine: (ha-377576) Calling .GetState
	I0328 00:01:15.633064 1091821 status.go:330] ha-377576 host status = "Running" (err=<nil>)
	I0328 00:01:15.633086 1091821 host.go:66] Checking if "ha-377576" exists ...
	I0328 00:01:15.633393 1091821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:15.633436 1091821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:15.648581 1091821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38737
	I0328 00:01:15.649116 1091821 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:15.649613 1091821 main.go:141] libmachine: Using API Version  1
	I0328 00:01:15.649642 1091821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:15.649956 1091821 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:15.650123 1091821 main.go:141] libmachine: (ha-377576) Calling .GetIP
	I0328 00:01:15.653105 1091821 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:01:15.653574 1091821 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:01:15.653620 1091821 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:01:15.653665 1091821 host.go:66] Checking if "ha-377576" exists ...
	I0328 00:01:15.653964 1091821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:15.654000 1091821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:15.669370 1091821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33775
	I0328 00:01:15.669819 1091821 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:15.670338 1091821 main.go:141] libmachine: Using API Version  1
	I0328 00:01:15.670360 1091821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:15.670754 1091821 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:15.670965 1091821 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0328 00:01:15.671158 1091821 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:01:15.671191 1091821 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:01:15.674189 1091821 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:01:15.674672 1091821 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:01:15.674698 1091821 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:01:15.674820 1091821 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0328 00:01:15.675005 1091821 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:01:15.675167 1091821 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0328 00:01:15.675330 1091821 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0328 00:01:15.755277 1091821 ssh_runner.go:195] Run: systemctl --version
	I0328 00:01:15.762841 1091821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:01:15.779532 1091821 kubeconfig.go:125] found "ha-377576" server: "https://192.168.39.254:8443"
	I0328 00:01:15.779568 1091821 api_server.go:166] Checking apiserver status ...
	I0328 00:01:15.779607 1091821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:01:15.797378 1091821 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup
	W0328 00:01:15.808054 1091821 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0328 00:01:15.808133 1091821 ssh_runner.go:195] Run: ls
	I0328 00:01:15.813670 1091821 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0328 00:01:15.823230 1091821 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0328 00:01:15.823272 1091821 status.go:422] ha-377576 apiserver status = Running (err=<nil>)
	I0328 00:01:15.823289 1091821 status.go:257] ha-377576 status: &{Name:ha-377576 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:01:15.823308 1091821 status.go:255] checking status of ha-377576-m02 ...
	I0328 00:01:15.823909 1091821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:15.823956 1091821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:15.840358 1091821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35963
	I0328 00:01:15.840857 1091821 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:15.841444 1091821 main.go:141] libmachine: Using API Version  1
	I0328 00:01:15.841477 1091821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:15.841911 1091821 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:15.842118 1091821 main.go:141] libmachine: (ha-377576-m02) Calling .GetState
	I0328 00:01:15.843929 1091821 status.go:330] ha-377576-m02 host status = "Stopped" (err=<nil>)
	I0328 00:01:15.843948 1091821 status.go:343] host is not running, skipping remaining checks
	I0328 00:01:15.843966 1091821 status.go:257] ha-377576-m02 status: &{Name:ha-377576-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:01:15.844003 1091821 status.go:255] checking status of ha-377576-m03 ...
	I0328 00:01:15.844432 1091821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:15.844485 1091821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:15.860422 1091821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42079
	I0328 00:01:15.861100 1091821 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:15.861663 1091821 main.go:141] libmachine: Using API Version  1
	I0328 00:01:15.861686 1091821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:15.862029 1091821 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:15.862316 1091821 main.go:141] libmachine: (ha-377576-m03) Calling .GetState
	I0328 00:01:15.864234 1091821 status.go:330] ha-377576-m03 host status = "Running" (err=<nil>)
	I0328 00:01:15.864253 1091821 host.go:66] Checking if "ha-377576-m03" exists ...
	I0328 00:01:15.864572 1091821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:15.864620 1091821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:15.880775 1091821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39517
	I0328 00:01:15.881345 1091821 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:15.881955 1091821 main.go:141] libmachine: Using API Version  1
	I0328 00:01:15.881977 1091821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:15.882421 1091821 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:15.882663 1091821 main.go:141] libmachine: (ha-377576-m03) Calling .GetIP
	I0328 00:01:15.885862 1091821 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:01:15.886495 1091821 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0328 00:01:15.886520 1091821 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:01:15.886717 1091821 host.go:66] Checking if "ha-377576-m03" exists ...
	I0328 00:01:15.887141 1091821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:15.887189 1091821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:15.902743 1091821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45507
	I0328 00:01:15.903173 1091821 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:15.903662 1091821 main.go:141] libmachine: Using API Version  1
	I0328 00:01:15.903684 1091821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:15.904009 1091821 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:15.904246 1091821 main.go:141] libmachine: (ha-377576-m03) Calling .DriverName
	I0328 00:01:15.904425 1091821 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:01:15.904450 1091821 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	I0328 00:01:15.907284 1091821 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:01:15.907711 1091821 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0328 00:01:15.907743 1091821 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:01:15.907909 1091821 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHPort
	I0328 00:01:15.908092 1091821 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0328 00:01:15.908277 1091821 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHUsername
	I0328 00:01:15.908438 1091821 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/id_rsa Username:docker}
	I0328 00:01:15.999843 1091821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:01:16.021774 1091821 kubeconfig.go:125] found "ha-377576" server: "https://192.168.39.254:8443"
	I0328 00:01:16.021811 1091821 api_server.go:166] Checking apiserver status ...
	I0328 00:01:16.021847 1091821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:01:16.038636 1091821 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup
	W0328 00:01:16.052126 1091821 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0328 00:01:16.052187 1091821 ssh_runner.go:195] Run: ls
	I0328 00:01:16.057186 1091821 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0328 00:01:16.061808 1091821 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0328 00:01:16.061833 1091821 status.go:422] ha-377576-m03 apiserver status = Running (err=<nil>)
	I0328 00:01:16.061844 1091821 status.go:257] ha-377576-m03 status: &{Name:ha-377576-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:01:16.061866 1091821 status.go:255] checking status of ha-377576-m04 ...
	I0328 00:01:16.062187 1091821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:16.062221 1091821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:16.079245 1091821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34951
	I0328 00:01:16.079808 1091821 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:16.080348 1091821 main.go:141] libmachine: Using API Version  1
	I0328 00:01:16.080375 1091821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:16.080741 1091821 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:16.080966 1091821 main.go:141] libmachine: (ha-377576-m04) Calling .GetState
	I0328 00:01:16.082768 1091821 status.go:330] ha-377576-m04 host status = "Running" (err=<nil>)
	I0328 00:01:16.082787 1091821 host.go:66] Checking if "ha-377576-m04" exists ...
	I0328 00:01:16.083065 1091821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:16.083104 1091821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:16.098303 1091821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39673
	I0328 00:01:16.098808 1091821 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:16.099374 1091821 main.go:141] libmachine: Using API Version  1
	I0328 00:01:16.099403 1091821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:16.099808 1091821 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:16.100023 1091821 main.go:141] libmachine: (ha-377576-m04) Calling .GetIP
	I0328 00:01:16.102991 1091821 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:01:16.103447 1091821 main.go:141] libmachine: (ha-377576-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:c2:81", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:57:19 +0000 UTC Type:0 Mac:52:54:00:6a:c2:81 Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:ha-377576-m04 Clientid:01:52:54:00:6a:c2:81}
	I0328 00:01:16.103474 1091821 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined IP address 192.168.39.93 and MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:01:16.103633 1091821 host.go:66] Checking if "ha-377576-m04" exists ...
	I0328 00:01:16.103932 1091821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:16.103971 1091821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:16.119469 1091821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34381
	I0328 00:01:16.119958 1091821 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:16.120522 1091821 main.go:141] libmachine: Using API Version  1
	I0328 00:01:16.120549 1091821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:16.120985 1091821 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:16.121255 1091821 main.go:141] libmachine: (ha-377576-m04) Calling .DriverName
	I0328 00:01:16.121504 1091821 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:01:16.121530 1091821 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHHostname
	I0328 00:01:16.124803 1091821 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:01:16.125273 1091821 main.go:141] libmachine: (ha-377576-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:c2:81", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:57:19 +0000 UTC Type:0 Mac:52:54:00:6a:c2:81 Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:ha-377576-m04 Clientid:01:52:54:00:6a:c2:81}
	I0328 00:01:16.125304 1091821 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined IP address 192.168.39.93 and MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:01:16.125516 1091821 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHPort
	I0328 00:01:16.125742 1091821 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHKeyPath
	I0328 00:01:16.125939 1091821 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHUsername
	I0328 00:01:16.126203 1091821 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m04/id_rsa Username:docker}
	I0328 00:01:16.214716 1091821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:01:16.229975 1091821 status.go:257] ha-377576-m04 status: &{Name:ha-377576-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
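
The apiserver checks in the stderr block above follow one pattern: read the server address from the kubeconfig (https://192.168.39.254:8443, the cluster's shared virtual IP), GET /healthz on it, and count the node's apiserver as Running only on an HTTP 200 whose body is "ok". Below is a minimal Go sketch of that kind of probe; the function name and the InsecureSkipVerify transport are assumptions for illustration, not minikube's own client code.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy issues the same kind of check the log shows:
// GET <endpoint>/healthz, healthy only on HTTP 200 with body "ok".
func apiserverHealthy(endpoint string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a cluster-internal certificate, so this
			// sketch skips verification (an assumption, not minikube's code).
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.39.254:8443")
	fmt.Printf("healthy=%v err=%v\n", ok, err)
}

Because the probe goes through the shared VIP rather than each node's own address, it can still return 200 here even while m02 is down.
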
E0328 00:01:21.208147 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-377576 status -v=7 --alsologtostderr: exit status 7 (716.069638ms)

                                                
                                                
-- stdout --
	ha-377576
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-377576-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-377576-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-377576-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 00:01:23.720000 1091915 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:01:23.720271 1091915 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:01:23.720280 1091915 out.go:304] Setting ErrFile to fd 2...
	I0328 00:01:23.720285 1091915 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:01:23.720495 1091915 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 00:01:23.720696 1091915 out.go:298] Setting JSON to false
	I0328 00:01:23.720729 1091915 mustload.go:65] Loading cluster: ha-377576
	I0328 00:01:23.720797 1091915 notify.go:220] Checking for updates...
	I0328 00:01:23.721925 1091915 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:01:23.721963 1091915 status.go:255] checking status of ha-377576 ...
	I0328 00:01:23.723146 1091915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:23.723215 1091915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:23.742815 1091915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32941
	I0328 00:01:23.743372 1091915 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:23.743953 1091915 main.go:141] libmachine: Using API Version  1
	I0328 00:01:23.743977 1091915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:23.744353 1091915 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:23.744588 1091915 main.go:141] libmachine: (ha-377576) Calling .GetState
	I0328 00:01:23.746694 1091915 status.go:330] ha-377576 host status = "Running" (err=<nil>)
	I0328 00:01:23.746717 1091915 host.go:66] Checking if "ha-377576" exists ...
	I0328 00:01:23.747054 1091915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:23.747131 1091915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:23.766694 1091915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43717
	I0328 00:01:23.767450 1091915 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:23.768108 1091915 main.go:141] libmachine: Using API Version  1
	I0328 00:01:23.768139 1091915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:23.768529 1091915 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:23.768769 1091915 main.go:141] libmachine: (ha-377576) Calling .GetIP
	I0328 00:01:23.772441 1091915 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:01:23.772951 1091915 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:01:23.772984 1091915 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:01:23.773227 1091915 host.go:66] Checking if "ha-377576" exists ...
	I0328 00:01:23.773594 1091915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:23.773659 1091915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:23.789633 1091915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32847
	I0328 00:01:23.790110 1091915 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:23.790641 1091915 main.go:141] libmachine: Using API Version  1
	I0328 00:01:23.790665 1091915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:23.791127 1091915 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:23.791381 1091915 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0328 00:01:23.791767 1091915 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:01:23.791817 1091915 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:01:23.795503 1091915 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:01:23.796042 1091915 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:01:23.796174 1091915 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:01:23.796437 1091915 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0328 00:01:23.796614 1091915 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:01:23.796728 1091915 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0328 00:01:23.796828 1091915 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0328 00:01:23.883021 1091915 ssh_runner.go:195] Run: systemctl --version
	I0328 00:01:23.894406 1091915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:01:23.910934 1091915 kubeconfig.go:125] found "ha-377576" server: "https://192.168.39.254:8443"
	I0328 00:01:23.910965 1091915 api_server.go:166] Checking apiserver status ...
	I0328 00:01:23.911018 1091915 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:01:23.926929 1091915 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup
	W0328 00:01:23.948034 1091915 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0328 00:01:23.948092 1091915 ssh_runner.go:195] Run: ls
	I0328 00:01:23.954813 1091915 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0328 00:01:23.961801 1091915 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0328 00:01:23.961840 1091915 status.go:422] ha-377576 apiserver status = Running (err=<nil>)
	I0328 00:01:23.961851 1091915 status.go:257] ha-377576 status: &{Name:ha-377576 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:01:23.961870 1091915 status.go:255] checking status of ha-377576-m02 ...
	I0328 00:01:23.962294 1091915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:23.962342 1091915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:23.979378 1091915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40735
	I0328 00:01:23.979856 1091915 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:23.980359 1091915 main.go:141] libmachine: Using API Version  1
	I0328 00:01:23.980388 1091915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:23.980744 1091915 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:23.980937 1091915 main.go:141] libmachine: (ha-377576-m02) Calling .GetState
	I0328 00:01:23.982683 1091915 status.go:330] ha-377576-m02 host status = "Stopped" (err=<nil>)
	I0328 00:01:23.982706 1091915 status.go:343] host is not running, skipping remaining checks
	I0328 00:01:23.982712 1091915 status.go:257] ha-377576-m02 status: &{Name:ha-377576-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:01:23.982733 1091915 status.go:255] checking status of ha-377576-m03 ...
	I0328 00:01:23.983011 1091915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:23.983057 1091915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:23.998883 1091915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36103
	I0328 00:01:23.999351 1091915 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:23.999988 1091915 main.go:141] libmachine: Using API Version  1
	I0328 00:01:24.000017 1091915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:24.000454 1091915 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:24.000675 1091915 main.go:141] libmachine: (ha-377576-m03) Calling .GetState
	I0328 00:01:24.002449 1091915 status.go:330] ha-377576-m03 host status = "Running" (err=<nil>)
	I0328 00:01:24.002473 1091915 host.go:66] Checking if "ha-377576-m03" exists ...
	I0328 00:01:24.002834 1091915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:24.002877 1091915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:24.019525 1091915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38121
	I0328 00:01:24.020038 1091915 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:24.020507 1091915 main.go:141] libmachine: Using API Version  1
	I0328 00:01:24.020530 1091915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:24.020918 1091915 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:24.021133 1091915 main.go:141] libmachine: (ha-377576-m03) Calling .GetIP
	I0328 00:01:24.024391 1091915 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:01:24.024824 1091915 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0328 00:01:24.024859 1091915 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:01:24.024963 1091915 host.go:66] Checking if "ha-377576-m03" exists ...
	I0328 00:01:24.025313 1091915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:24.025372 1091915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:24.042656 1091915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32879
	I0328 00:01:24.043126 1091915 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:24.043603 1091915 main.go:141] libmachine: Using API Version  1
	I0328 00:01:24.043633 1091915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:24.044025 1091915 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:24.044237 1091915 main.go:141] libmachine: (ha-377576-m03) Calling .DriverName
	I0328 00:01:24.044428 1091915 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:01:24.044453 1091915 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	I0328 00:01:24.047781 1091915 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:01:24.048228 1091915 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0328 00:01:24.048260 1091915 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:01:24.048381 1091915 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHPort
	I0328 00:01:24.048571 1091915 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0328 00:01:24.048746 1091915 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHUsername
	I0328 00:01:24.048898 1091915 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/id_rsa Username:docker}
	I0328 00:01:24.139766 1091915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:01:24.160789 1091915 kubeconfig.go:125] found "ha-377576" server: "https://192.168.39.254:8443"
	I0328 00:01:24.160825 1091915 api_server.go:166] Checking apiserver status ...
	I0328 00:01:24.160863 1091915 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:01:24.179859 1091915 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup
	W0328 00:01:24.192742 1091915 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0328 00:01:24.192812 1091915 ssh_runner.go:195] Run: ls
	I0328 00:01:24.198928 1091915 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0328 00:01:24.204036 1091915 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0328 00:01:24.204075 1091915 status.go:422] ha-377576-m03 apiserver status = Running (err=<nil>)
	I0328 00:01:24.204084 1091915 status.go:257] ha-377576-m03 status: &{Name:ha-377576-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:01:24.204102 1091915 status.go:255] checking status of ha-377576-m04 ...
	I0328 00:01:24.204413 1091915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:24.204451 1091915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:24.220875 1091915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46543
	I0328 00:01:24.221517 1091915 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:24.222113 1091915 main.go:141] libmachine: Using API Version  1
	I0328 00:01:24.222137 1091915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:24.222501 1091915 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:24.222708 1091915 main.go:141] libmachine: (ha-377576-m04) Calling .GetState
	I0328 00:01:24.224377 1091915 status.go:330] ha-377576-m04 host status = "Running" (err=<nil>)
	I0328 00:01:24.224395 1091915 host.go:66] Checking if "ha-377576-m04" exists ...
	I0328 00:01:24.224751 1091915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:24.224799 1091915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:24.241131 1091915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40579
	I0328 00:01:24.241637 1091915 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:24.242114 1091915 main.go:141] libmachine: Using API Version  1
	I0328 00:01:24.242138 1091915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:24.242552 1091915 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:24.242824 1091915 main.go:141] libmachine: (ha-377576-m04) Calling .GetIP
	I0328 00:01:24.245839 1091915 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:01:24.246360 1091915 main.go:141] libmachine: (ha-377576-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:c2:81", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:57:19 +0000 UTC Type:0 Mac:52:54:00:6a:c2:81 Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:ha-377576-m04 Clientid:01:52:54:00:6a:c2:81}
	I0328 00:01:24.246386 1091915 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined IP address 192.168.39.93 and MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:01:24.246593 1091915 host.go:66] Checking if "ha-377576-m04" exists ...
	I0328 00:01:24.246936 1091915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:24.246984 1091915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:24.262515 1091915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35771
	I0328 00:01:24.263005 1091915 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:24.263652 1091915 main.go:141] libmachine: Using API Version  1
	I0328 00:01:24.263678 1091915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:24.264024 1091915 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:24.264270 1091915 main.go:141] libmachine: (ha-377576-m04) Calling .DriverName
	I0328 00:01:24.264501 1091915 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:01:24.264526 1091915 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHHostname
	I0328 00:01:24.267298 1091915 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:01:24.267750 1091915 main.go:141] libmachine: (ha-377576-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:c2:81", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:57:19 +0000 UTC Type:0 Mac:52:54:00:6a:c2:81 Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:ha-377576-m04 Clientid:01:52:54:00:6a:c2:81}
	I0328 00:01:24.267779 1091915 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined IP address 192.168.39.93 and MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:01:24.267950 1091915 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHPort
	I0328 00:01:24.268165 1091915 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHKeyPath
	I0328 00:01:24.268346 1091915 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHUsername
	I0328 00:01:24.268503 1091915 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m04/id_rsa Username:docker}
	I0328 00:01:24.354565 1091915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:01:24.372247 1091915 status.go:257] ha-377576-m04 status: &{Name:ha-377576-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
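
The per-node half of the same status run is visible above as a series of SSH probes: the kvm2 driver plugin resolves each machine's IP from its DHCP lease, a session is opened with the node's generated key, and two small shell commands are run — df -h /var | awk 'NR==2{print $5}' for disk usage and a systemctl is-active check for the kubelet. A rough sketch of that probe step follows, shelling out to the system ssh binary; this is a simplification (minikube uses its own sshutil client) and the helper name is hypothetical.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// nodeProbe runs one shell probe on a node over SSH, mirroring the two
// commands visible in the log (disk usage of /var, kubelet active check).
// Shelling out to the ssh binary is a simplification of minikube's sshutil.
func nodeProbe(ip, keyPath, command string) (string, error) {
	out, err := exec.Command(
		"ssh",
		"-i", keyPath,
		"-o", "StrictHostKeyChecking=no",
		"docker@"+ip,
		command,
	).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	ip := "192.168.39.93" // ha-377576-m04, from the DHCP lease in the log
	key := "/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m04/id_rsa"

	usage, err := nodeProbe(ip, key, `df -h /var | awk 'NR==2{print $5}'`)
	fmt.Println("/var usage:", usage, err)

	// systemctl is-active --quiet exits 0 only when the unit is active,
	// so the error value is the signal here, not the output.
	_, err = nodeProbe(ip, key, "sudo systemctl is-active --quiet service kubelet")
	fmt.Println("kubelet active:", err == nil)
}
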
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-377576 status -v=7 --alsologtostderr" : exit status 7
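
For context on why the assertion at ha_test.go:432 fails: minikube status deliberately exits non-zero whenever some component of the profile is unhealthy, and at this point m02 is still fully stopped even though the test has just asked for it to be restarted. The exact exit-code layout is an assumption here, but a bit-flag scheme in which host, apiserver and kubeconfig each contribute one bit reproduces the observed exit status 7 (1|2|4) for the one fully stopped node:

package main

import "fmt"

// Illustrative only: each stopped component sets one bit of the exit code.
// With host, apiserver and kubeconfig all down on m02 this yields 1|2|4 = 7,
// matching the exit status in the log. The real constants used by minikube's
// status command may differ; this is an assumption, not its code.
const (
	hostStopped       = 1 << 0
	apiserverStopped  = 1 << 1
	kubeconfigStopped = 1 << 2
)

type nodeStatus struct {
	Name       string
	Host       string
	APIServer  string
	Kubeconfig string
}

func exitCode(nodes []nodeStatus) int {
	code := 0
	for _, n := range nodes {
		if n.Host == "Stopped" {
			code |= hostStopped
		}
		if n.APIServer == "Stopped" {
			code |= apiserverStopped
		}
		if n.Kubeconfig == "Stopped" {
			code |= kubeconfigStopped
		}
	}
	return code
}

func main() {
	nodes := []nodeStatus{
		{"ha-377576", "Running", "Running", "Configured"},
		{"ha-377576-m02", "Stopped", "Stopped", "Stopped"},
		{"ha-377576-m03", "Running", "Running", "Configured"},
		{"ha-377576-m04", "Running", "Irrelevant", "Irrelevant"},
	}
	fmt.Println("exit code:", exitCode(nodes)) // 7
}
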
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-377576 -n ha-377576
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-377576 logs -n 25: (1.569255793s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:57 UTC | 27 Mar 24 23:57 UTC |
	|         | ha-377576-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-377576 cp ha-377576-m03:/home/docker/cp-test.txt                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:57 UTC | 27 Mar 24 23:57 UTC |
	|         | ha-377576:/home/docker/cp-test_ha-377576-m03_ha-377576.txt                       |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:57 UTC | 27 Mar 24 23:57 UTC |
	|         | ha-377576-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n ha-377576 sudo cat                                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:57 UTC | 27 Mar 24 23:57 UTC |
	|         | /home/docker/cp-test_ha-377576-m03_ha-377576.txt                                 |           |         |                |                     |                     |
	| cp      | ha-377576 cp ha-377576-m03:/home/docker/cp-test.txt                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:57 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m02:/home/docker/cp-test_ha-377576-m03_ha-377576-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n ha-377576-m02 sudo cat                                          | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | /home/docker/cp-test_ha-377576-m03_ha-377576-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-377576 cp ha-377576-m03:/home/docker/cp-test.txt                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m04:/home/docker/cp-test_ha-377576-m03_ha-377576-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n ha-377576-m04 sudo cat                                          | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | /home/docker/cp-test_ha-377576-m03_ha-377576-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-377576 cp testdata/cp-test.txt                                                | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-377576 cp ha-377576-m04:/home/docker/cp-test.txt                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3418864072/001/cp-test_ha-377576-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-377576 cp ha-377576-m04:/home/docker/cp-test.txt                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576:/home/docker/cp-test_ha-377576-m04_ha-377576.txt                       |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n ha-377576 sudo cat                                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | /home/docker/cp-test_ha-377576-m04_ha-377576.txt                                 |           |         |                |                     |                     |
	| cp      | ha-377576 cp ha-377576-m04:/home/docker/cp-test.txt                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m02:/home/docker/cp-test_ha-377576-m04_ha-377576-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n ha-377576-m02 sudo cat                                          | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | /home/docker/cp-test_ha-377576-m04_ha-377576-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-377576 cp ha-377576-m04:/home/docker/cp-test.txt                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m03:/home/docker/cp-test_ha-377576-m04_ha-377576-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n ha-377576-m03 sudo cat                                          | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | /home/docker/cp-test_ha-377576-m04_ha-377576-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-377576 node stop m02 -v=7                                                     | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | ha-377576 node start m02 -v=7                                                    | ha-377576 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:00 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 23:52:16
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 23:52:16.059043 1086621 out.go:291] Setting OutFile to fd 1 ...
	I0327 23:52:16.059498 1086621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:52:16.059517 1086621 out.go:304] Setting ErrFile to fd 2...
	I0327 23:52:16.059525 1086621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:52:16.059960 1086621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0327 23:52:16.061060 1086621 out.go:298] Setting JSON to false
	I0327 23:52:16.062149 1086621 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":27233,"bootTime":1711556303,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0327 23:52:16.062248 1086621 start.go:139] virtualization: kvm guest
	I0327 23:52:16.064258 1086621 out.go:177] * [ha-377576] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0327 23:52:16.066095 1086621 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 23:52:16.066097 1086621 notify.go:220] Checking for updates...
	I0327 23:52:16.067989 1086621 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 23:52:16.069658 1086621 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0327 23:52:16.071176 1086621 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	I0327 23:52:16.072627 1086621 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0327 23:52:16.073910 1086621 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 23:52:16.075399 1086621 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 23:52:16.111607 1086621 out.go:177] * Using the kvm2 driver based on user configuration
	I0327 23:52:16.112947 1086621 start.go:297] selected driver: kvm2
	I0327 23:52:16.112961 1086621 start.go:901] validating driver "kvm2" against <nil>
	I0327 23:52:16.112972 1086621 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 23:52:16.113693 1086621 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 23:52:16.113798 1086621 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18485-1069254/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0327 23:52:16.129010 1086621 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0327 23:52:16.129081 1086621 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 23:52:16.129301 1086621 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 23:52:16.129366 1086621 cni.go:84] Creating CNI manager for ""
	I0327 23:52:16.129378 1086621 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0327 23:52:16.129383 1086621 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0327 23:52:16.129440 1086621 start.go:340] cluster config:
	{Name:ha-377576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-377576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0327 23:52:16.129529 1086621 iso.go:125] acquiring lock: {Name:mk3da1fa7d63a581c817f327d86a827697458cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 23:52:16.131398 1086621 out.go:177] * Starting "ha-377576" primary control-plane node in "ha-377576" cluster
	I0327 23:52:16.132750 1086621 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0327 23:52:16.132793 1086621 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0327 23:52:16.132805 1086621 cache.go:56] Caching tarball of preloaded images
	I0327 23:52:16.132941 1086621 preload.go:173] Found /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0327 23:52:16.132957 1086621 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0327 23:52:16.133307 1086621 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/config.json ...
	I0327 23:52:16.133329 1086621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/config.json: {Name:mk05ad12aac82a6fb79fe39e932ee9fe3ad41cd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:52:16.133477 1086621 start.go:360] acquireMachinesLock for ha-377576: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 23:52:16.133512 1086621 start.go:364] duration metric: took 18.15µs to acquireMachinesLock for "ha-377576"
	I0327 23:52:16.133535 1086621 start.go:93] Provisioning new machine with config: &{Name:ha-377576 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.29.3 ClusterName:ha-377576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0327 23:52:16.133617 1086621 start.go:125] createHost starting for "" (driver="kvm2")
	I0327 23:52:16.135178 1086621 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 23:52:16.135316 1086621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:52:16.135357 1086621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:52:16.150129 1086621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37299
	I0327 23:52:16.150640 1086621 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:52:16.151183 1086621 main.go:141] libmachine: Using API Version  1
	I0327 23:52:16.151205 1086621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:52:16.151734 1086621 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:52:16.151993 1086621 main.go:141] libmachine: (ha-377576) Calling .GetMachineName
	I0327 23:52:16.152206 1086621 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0327 23:52:16.152423 1086621 start.go:159] libmachine.API.Create for "ha-377576" (driver="kvm2")
	I0327 23:52:16.152459 1086621 client.go:168] LocalClient.Create starting
	I0327 23:52:16.152502 1086621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem
	I0327 23:52:16.152550 1086621 main.go:141] libmachine: Decoding PEM data...
	I0327 23:52:16.152573 1086621 main.go:141] libmachine: Parsing certificate...
	I0327 23:52:16.152642 1086621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem
	I0327 23:52:16.152670 1086621 main.go:141] libmachine: Decoding PEM data...
	I0327 23:52:16.152694 1086621 main.go:141] libmachine: Parsing certificate...
	I0327 23:52:16.152724 1086621 main.go:141] libmachine: Running pre-create checks...
	I0327 23:52:16.152735 1086621 main.go:141] libmachine: (ha-377576) Calling .PreCreateCheck
	I0327 23:52:16.153138 1086621 main.go:141] libmachine: (ha-377576) Calling .GetConfigRaw
	I0327 23:52:16.153557 1086621 main.go:141] libmachine: Creating machine...
	I0327 23:52:16.153575 1086621 main.go:141] libmachine: (ha-377576) Calling .Create
	I0327 23:52:16.153737 1086621 main.go:141] libmachine: (ha-377576) Creating KVM machine...
	I0327 23:52:16.155112 1086621 main.go:141] libmachine: (ha-377576) DBG | found existing default KVM network
	I0327 23:52:16.155959 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:16.155814 1086655 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015c30}
	I0327 23:52:16.156017 1086621 main.go:141] libmachine: (ha-377576) DBG | created network xml: 
	I0327 23:52:16.156038 1086621 main.go:141] libmachine: (ha-377576) DBG | <network>
	I0327 23:52:16.156051 1086621 main.go:141] libmachine: (ha-377576) DBG |   <name>mk-ha-377576</name>
	I0327 23:52:16.156062 1086621 main.go:141] libmachine: (ha-377576) DBG |   <dns enable='no'/>
	I0327 23:52:16.156067 1086621 main.go:141] libmachine: (ha-377576) DBG |   
	I0327 23:52:16.156078 1086621 main.go:141] libmachine: (ha-377576) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0327 23:52:16.156087 1086621 main.go:141] libmachine: (ha-377576) DBG |     <dhcp>
	I0327 23:52:16.156100 1086621 main.go:141] libmachine: (ha-377576) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0327 23:52:16.156110 1086621 main.go:141] libmachine: (ha-377576) DBG |     </dhcp>
	I0327 23:52:16.156152 1086621 main.go:141] libmachine: (ha-377576) DBG |   </ip>
	I0327 23:52:16.156193 1086621 main.go:141] libmachine: (ha-377576) DBG |   
	I0327 23:52:16.156211 1086621 main.go:141] libmachine: (ha-377576) DBG | </network>
	I0327 23:52:16.156222 1086621 main.go:141] libmachine: (ha-377576) DBG | 
	I0327 23:52:16.161472 1086621 main.go:141] libmachine: (ha-377576) DBG | trying to create private KVM network mk-ha-377576 192.168.39.0/24...
	I0327 23:52:16.238648 1086621 main.go:141] libmachine: (ha-377576) DBG | private KVM network mk-ha-377576 192.168.39.0/24 created
	I0327 23:52:16.238692 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:16.238584 1086655 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18485-1069254/.minikube
	I0327 23:52:16.238709 1086621 main.go:141] libmachine: (ha-377576) Setting up store path in /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576 ...
	I0327 23:52:16.238800 1086621 main.go:141] libmachine: (ha-377576) Building disk image from file:///home/jenkins/minikube-integration/18485-1069254/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso
	I0327 23:52:16.238849 1086621 main.go:141] libmachine: (ha-377576) Downloading /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18485-1069254/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0327 23:52:16.504597 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:16.504449 1086655 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa...
	I0327 23:52:16.699561 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:16.699384 1086655 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/ha-377576.rawdisk...
	I0327 23:52:16.699604 1086621 main.go:141] libmachine: (ha-377576) DBG | Writing magic tar header
	I0327 23:52:16.699619 1086621 main.go:141] libmachine: (ha-377576) DBG | Writing SSH key tar header
	I0327 23:52:16.699632 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:16.699527 1086655 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576 ...
	I0327 23:52:16.699646 1086621 main.go:141] libmachine: (ha-377576) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576
	I0327 23:52:16.699714 1086621 main.go:141] libmachine: (ha-377576) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines
	I0327 23:52:16.699754 1086621 main.go:141] libmachine: (ha-377576) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576 (perms=drwx------)
	I0327 23:52:16.699769 1086621 main.go:141] libmachine: (ha-377576) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254/.minikube
	I0327 23:52:16.699788 1086621 main.go:141] libmachine: (ha-377576) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254
	I0327 23:52:16.699801 1086621 main.go:141] libmachine: (ha-377576) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0327 23:52:16.699819 1086621 main.go:141] libmachine: (ha-377576) DBG | Checking permissions on dir: /home/jenkins
	I0327 23:52:16.699831 1086621 main.go:141] libmachine: (ha-377576) DBG | Checking permissions on dir: /home
	I0327 23:52:16.699843 1086621 main.go:141] libmachine: (ha-377576) DBG | Skipping /home - not owner
	I0327 23:52:16.699859 1086621 main.go:141] libmachine: (ha-377576) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254/.minikube/machines (perms=drwxr-xr-x)
	I0327 23:52:16.699877 1086621 main.go:141] libmachine: (ha-377576) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254/.minikube (perms=drwxr-xr-x)
	I0327 23:52:16.699892 1086621 main.go:141] libmachine: (ha-377576) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254 (perms=drwxrwxr-x)
	I0327 23:52:16.699910 1086621 main.go:141] libmachine: (ha-377576) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0327 23:52:16.699923 1086621 main.go:141] libmachine: (ha-377576) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0327 23:52:16.699939 1086621 main.go:141] libmachine: (ha-377576) Creating domain...
	I0327 23:52:16.700928 1086621 main.go:141] libmachine: (ha-377576) define libvirt domain using xml: 
	I0327 23:52:16.700949 1086621 main.go:141] libmachine: (ha-377576) <domain type='kvm'>
	I0327 23:52:16.700956 1086621 main.go:141] libmachine: (ha-377576)   <name>ha-377576</name>
	I0327 23:52:16.700960 1086621 main.go:141] libmachine: (ha-377576)   <memory unit='MiB'>2200</memory>
	I0327 23:52:16.700969 1086621 main.go:141] libmachine: (ha-377576)   <vcpu>2</vcpu>
	I0327 23:52:16.700973 1086621 main.go:141] libmachine: (ha-377576)   <features>
	I0327 23:52:16.700978 1086621 main.go:141] libmachine: (ha-377576)     <acpi/>
	I0327 23:52:16.700982 1086621 main.go:141] libmachine: (ha-377576)     <apic/>
	I0327 23:52:16.700987 1086621 main.go:141] libmachine: (ha-377576)     <pae/>
	I0327 23:52:16.700997 1086621 main.go:141] libmachine: (ha-377576)     
	I0327 23:52:16.701004 1086621 main.go:141] libmachine: (ha-377576)   </features>
	I0327 23:52:16.701012 1086621 main.go:141] libmachine: (ha-377576)   <cpu mode='host-passthrough'>
	I0327 23:52:16.701075 1086621 main.go:141] libmachine: (ha-377576)   
	I0327 23:52:16.701101 1086621 main.go:141] libmachine: (ha-377576)   </cpu>
	I0327 23:52:16.701111 1086621 main.go:141] libmachine: (ha-377576)   <os>
	I0327 23:52:16.701125 1086621 main.go:141] libmachine: (ha-377576)     <type>hvm</type>
	I0327 23:52:16.701192 1086621 main.go:141] libmachine: (ha-377576)     <boot dev='cdrom'/>
	I0327 23:52:16.701225 1086621 main.go:141] libmachine: (ha-377576)     <boot dev='hd'/>
	I0327 23:52:16.701233 1086621 main.go:141] libmachine: (ha-377576)     <bootmenu enable='no'/>
	I0327 23:52:16.701242 1086621 main.go:141] libmachine: (ha-377576)   </os>
	I0327 23:52:16.701254 1086621 main.go:141] libmachine: (ha-377576)   <devices>
	I0327 23:52:16.701269 1086621 main.go:141] libmachine: (ha-377576)     <disk type='file' device='cdrom'>
	I0327 23:52:16.701285 1086621 main.go:141] libmachine: (ha-377576)       <source file='/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/boot2docker.iso'/>
	I0327 23:52:16.701302 1086621 main.go:141] libmachine: (ha-377576)       <target dev='hdc' bus='scsi'/>
	I0327 23:52:16.701313 1086621 main.go:141] libmachine: (ha-377576)       <readonly/>
	I0327 23:52:16.701322 1086621 main.go:141] libmachine: (ha-377576)     </disk>
	I0327 23:52:16.701329 1086621 main.go:141] libmachine: (ha-377576)     <disk type='file' device='disk'>
	I0327 23:52:16.701341 1086621 main.go:141] libmachine: (ha-377576)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0327 23:52:16.701359 1086621 main.go:141] libmachine: (ha-377576)       <source file='/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/ha-377576.rawdisk'/>
	I0327 23:52:16.701374 1086621 main.go:141] libmachine: (ha-377576)       <target dev='hda' bus='virtio'/>
	I0327 23:52:16.701385 1086621 main.go:141] libmachine: (ha-377576)     </disk>
	I0327 23:52:16.701395 1086621 main.go:141] libmachine: (ha-377576)     <interface type='network'>
	I0327 23:52:16.701406 1086621 main.go:141] libmachine: (ha-377576)       <source network='mk-ha-377576'/>
	I0327 23:52:16.701414 1086621 main.go:141] libmachine: (ha-377576)       <model type='virtio'/>
	I0327 23:52:16.701425 1086621 main.go:141] libmachine: (ha-377576)     </interface>
	I0327 23:52:16.701441 1086621 main.go:141] libmachine: (ha-377576)     <interface type='network'>
	I0327 23:52:16.701453 1086621 main.go:141] libmachine: (ha-377576)       <source network='default'/>
	I0327 23:52:16.701463 1086621 main.go:141] libmachine: (ha-377576)       <model type='virtio'/>
	I0327 23:52:16.701474 1086621 main.go:141] libmachine: (ha-377576)     </interface>
	I0327 23:52:16.701483 1086621 main.go:141] libmachine: (ha-377576)     <serial type='pty'>
	I0327 23:52:16.701494 1086621 main.go:141] libmachine: (ha-377576)       <target port='0'/>
	I0327 23:52:16.701505 1086621 main.go:141] libmachine: (ha-377576)     </serial>
	I0327 23:52:16.701517 1086621 main.go:141] libmachine: (ha-377576)     <console type='pty'>
	I0327 23:52:16.701538 1086621 main.go:141] libmachine: (ha-377576)       <target type='serial' port='0'/>
	I0327 23:52:16.701561 1086621 main.go:141] libmachine: (ha-377576)     </console>
	I0327 23:52:16.701572 1086621 main.go:141] libmachine: (ha-377576)     <rng model='virtio'>
	I0327 23:52:16.701583 1086621 main.go:141] libmachine: (ha-377576)       <backend model='random'>/dev/random</backend>
	I0327 23:52:16.701592 1086621 main.go:141] libmachine: (ha-377576)     </rng>
	I0327 23:52:16.701601 1086621 main.go:141] libmachine: (ha-377576)     
	I0327 23:52:16.701611 1086621 main.go:141] libmachine: (ha-377576)     
	I0327 23:52:16.701622 1086621 main.go:141] libmachine: (ha-377576)   </devices>
	I0327 23:52:16.701631 1086621 main.go:141] libmachine: (ha-377576) </domain>
	I0327 23:52:16.701642 1086621 main.go:141] libmachine: (ha-377576) 
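The lines above are the libvirt domain XML that the kvm2 driver defines for ha-377576: 2200 MiB of RAM, 2 vCPUs, the boot2docker ISO attached as a cdrom, the raw disk image, and two virtio NICs (one on mk-ha-377576, one on the default network). A minimal sketch of how that domain could be inspected on the CI host, assuming shell access and virsh pointed at qemu:///system; these commands are not part of the test run:

	# Print the definition of the domain created above
	virsh --connect qemu:///system dumpxml ha-377576
	# List the guest's interfaces and any addresses it has acquired
	virsh --connect qemu:///system domifaddr ha-377576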
	I0327 23:52:16.706024 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:1a:7a:20 in network default
	I0327 23:52:16.706672 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:16.706711 1086621 main.go:141] libmachine: (ha-377576) Ensuring networks are active...
	I0327 23:52:16.707503 1086621 main.go:141] libmachine: (ha-377576) Ensuring network default is active
	I0327 23:52:16.707806 1086621 main.go:141] libmachine: (ha-377576) Ensuring network mk-ha-377576 is active
	I0327 23:52:16.708327 1086621 main.go:141] libmachine: (ha-377576) Getting domain xml...
	I0327 23:52:16.709023 1086621 main.go:141] libmachine: (ha-377576) Creating domain...
	I0327 23:52:17.895451 1086621 main.go:141] libmachine: (ha-377576) Waiting to get IP...
	I0327 23:52:17.896440 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:17.896888 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find current IP address of domain ha-377576 in network mk-ha-377576
	I0327 23:52:17.896916 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:17.896869 1086655 retry.go:31] will retry after 204.228349ms: waiting for machine to come up
	I0327 23:52:18.102278 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:18.102719 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find current IP address of domain ha-377576 in network mk-ha-377576
	I0327 23:52:18.102752 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:18.102661 1086655 retry.go:31] will retry after 294.764841ms: waiting for machine to come up
	I0327 23:52:18.399271 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:18.399693 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find current IP address of domain ha-377576 in network mk-ha-377576
	I0327 23:52:18.399727 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:18.399641 1086655 retry.go:31] will retry after 420.882267ms: waiting for machine to come up
	I0327 23:52:18.822360 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:18.822782 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find current IP address of domain ha-377576 in network mk-ha-377576
	I0327 23:52:18.822804 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:18.822744 1086655 retry.go:31] will retry after 440.762004ms: waiting for machine to come up
	I0327 23:52:19.265653 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:19.266113 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find current IP address of domain ha-377576 in network mk-ha-377576
	I0327 23:52:19.266154 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:19.266075 1086655 retry.go:31] will retry after 681.995366ms: waiting for machine to come up
	I0327 23:52:19.950049 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:19.950578 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find current IP address of domain ha-377576 in network mk-ha-377576
	I0327 23:52:19.950619 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:19.950509 1086655 retry.go:31] will retry after 730.337887ms: waiting for machine to come up
	I0327 23:52:20.682331 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:20.682662 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find current IP address of domain ha-377576 in network mk-ha-377576
	I0327 23:52:20.682692 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:20.682614 1086655 retry.go:31] will retry after 1.140943407s: waiting for machine to come up
	I0327 23:52:21.825498 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:21.825993 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find current IP address of domain ha-377576 in network mk-ha-377576
	I0327 23:52:21.826022 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:21.825942 1086655 retry.go:31] will retry after 984.170194ms: waiting for machine to come up
	I0327 23:52:22.812114 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:22.812430 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find current IP address of domain ha-377576 in network mk-ha-377576
	I0327 23:52:22.812455 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:22.812390 1086655 retry.go:31] will retry after 1.836089758s: waiting for machine to come up
	I0327 23:52:24.651479 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:24.652063 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find current IP address of domain ha-377576 in network mk-ha-377576
	I0327 23:52:24.652127 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:24.652029 1086655 retry.go:31] will retry after 2.280967862s: waiting for machine to come up
	I0327 23:52:26.934212 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:26.934740 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find current IP address of domain ha-377576 in network mk-ha-377576
	I0327 23:52:26.934771 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:26.934689 1086655 retry.go:31] will retry after 2.253174542s: waiting for machine to come up
	I0327 23:52:29.191272 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:29.191722 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find current IP address of domain ha-377576 in network mk-ha-377576
	I0327 23:52:29.191748 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:29.191680 1086655 retry.go:31] will retry after 2.19894248s: waiting for machine to come up
	I0327 23:52:31.392676 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:31.393122 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find current IP address of domain ha-377576 in network mk-ha-377576
	I0327 23:52:31.393146 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:31.393070 1086655 retry.go:31] will retry after 4.465104492s: waiting for machine to come up
	I0327 23:52:35.863650 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:35.864081 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find current IP address of domain ha-377576 in network mk-ha-377576
	I0327 23:52:35.864105 1086621 main.go:141] libmachine: (ha-377576) DBG | I0327 23:52:35.864025 1086655 retry.go:31] will retry after 3.929483337s: waiting for machine to come up
	I0327 23:52:39.798335 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:39.798873 1086621 main.go:141] libmachine: (ha-377576) Found IP for machine: 192.168.39.47
	I0327 23:52:39.798899 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has current primary IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:39.798908 1086621 main.go:141] libmachine: (ha-377576) Reserving static IP address...
	I0327 23:52:39.799237 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find host DHCP lease matching {name: "ha-377576", mac: "52:54:00:9c:48:13", ip: "192.168.39.47"} in network mk-ha-377576
	I0327 23:52:39.880786 1086621 main.go:141] libmachine: (ha-377576) DBG | Getting to WaitForSSH function...
	I0327 23:52:39.880824 1086621 main.go:141] libmachine: (ha-377576) Reserved static IP address: 192.168.39.47
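The retry loop above polls libvirt until the guest's MAC 52:54:00:9c:48:13 shows up with a DHCP lease, at which point 192.168.39.47 is found and reserved as a static address. A hedged way to look at the same lease tables by hand, assuming virsh is available on the host; not part of the run:

	# Leases on the dedicated minikube network and on the default network used by the second NIC
	virsh --connect qemu:///system net-dhcp-leases mk-ha-377576
	virsh --connect qemu:///system net-dhcp-leases default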
	I0327 23:52:39.880837 1086621 main.go:141] libmachine: (ha-377576) Waiting for SSH to be available...
	I0327 23:52:39.883827 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:39.884204 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576
	I0327 23:52:39.884227 1086621 main.go:141] libmachine: (ha-377576) DBG | unable to find defined IP address of network mk-ha-377576 interface with MAC address 52:54:00:9c:48:13
	I0327 23:52:39.884387 1086621 main.go:141] libmachine: (ha-377576) DBG | Using SSH client type: external
	I0327 23:52:39.884401 1086621 main.go:141] libmachine: (ha-377576) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa (-rw-------)
	I0327 23:52:39.884467 1086621 main.go:141] libmachine: (ha-377576) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0327 23:52:39.884475 1086621 main.go:141] libmachine: (ha-377576) DBG | About to run SSH command:
	I0327 23:52:39.884483 1086621 main.go:141] libmachine: (ha-377576) DBG | exit 0
	I0327 23:52:39.888335 1086621 main.go:141] libmachine: (ha-377576) DBG | SSH cmd err, output: exit status 255: 
	I0327 23:52:39.888363 1086621 main.go:141] libmachine: (ha-377576) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0327 23:52:39.888374 1086621 main.go:141] libmachine: (ha-377576) DBG | command : exit 0
	I0327 23:52:39.888381 1086621 main.go:141] libmachine: (ha-377576) DBG | err     : exit status 255
	I0327 23:52:39.888392 1086621 main.go:141] libmachine: (ha-377576) DBG | output  : 
	I0327 23:52:42.890051 1086621 main.go:141] libmachine: (ha-377576) DBG | Getting to WaitForSSH function...
	I0327 23:52:42.892810 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:42.893215 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:42.893251 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:42.893341 1086621 main.go:141] libmachine: (ha-377576) DBG | Using SSH client type: external
	I0327 23:52:42.893370 1086621 main.go:141] libmachine: (ha-377576) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa (-rw-------)
	I0327 23:52:42.893414 1086621 main.go:141] libmachine: (ha-377576) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.47 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0327 23:52:42.893428 1086621 main.go:141] libmachine: (ha-377576) DBG | About to run SSH command:
	I0327 23:52:42.893464 1086621 main.go:141] libmachine: (ha-377576) DBG | exit 0
	I0327 23:52:43.014339 1086621 main.go:141] libmachine: (ha-377576) DBG | SSH cmd err, output: <nil>: 
	I0327 23:52:43.014656 1086621 main.go:141] libmachine: (ha-377576) KVM machine creation complete!
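WaitForSSH above reaches the guest with the external ssh client using the options logged at 23:52:42. An equivalent manual invocation, reconstructed from a subset of those logged flags and paths (illustrative only):

	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa \
	    -p 22 docker@192.168.39.47 'exit 0'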
	I0327 23:52:43.015004 1086621 main.go:141] libmachine: (ha-377576) Calling .GetConfigRaw
	I0327 23:52:43.015552 1086621 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0327 23:52:43.015792 1086621 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0327 23:52:43.015968 1086621 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0327 23:52:43.015985 1086621 main.go:141] libmachine: (ha-377576) Calling .GetState
	I0327 23:52:43.017383 1086621 main.go:141] libmachine: Detecting operating system of created instance...
	I0327 23:52:43.017400 1086621 main.go:141] libmachine: Waiting for SSH to be available...
	I0327 23:52:43.017407 1086621 main.go:141] libmachine: Getting to WaitForSSH function...
	I0327 23:52:43.017415 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:52:43.019790 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.020164 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:43.020192 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.020318 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:52:43.020505 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:43.020676 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:43.020866 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:52:43.021085 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:52:43.021341 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0327 23:52:43.021353 1086621 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0327 23:52:43.121794 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 23:52:43.121819 1086621 main.go:141] libmachine: Detecting the provisioner...
	I0327 23:52:43.121827 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:52:43.124764 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.125171 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:43.125197 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.125379 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:52:43.125589 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:43.125741 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:43.125930 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:52:43.126154 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:52:43.126359 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0327 23:52:43.126372 1086621 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0327 23:52:43.227215 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0327 23:52:43.227347 1086621 main.go:141] libmachine: found compatible host: buildroot
	I0327 23:52:43.227364 1086621 main.go:141] libmachine: Provisioning with buildroot...
	I0327 23:52:43.227375 1086621 main.go:141] libmachine: (ha-377576) Calling .GetMachineName
	I0327 23:52:43.227698 1086621 buildroot.go:166] provisioning hostname "ha-377576"
	I0327 23:52:43.227731 1086621 main.go:141] libmachine: (ha-377576) Calling .GetMachineName
	I0327 23:52:43.227928 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:52:43.230515 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.230854 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:43.230874 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.231023 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:52:43.231255 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:43.231436 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:43.231597 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:52:43.231810 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:52:43.232010 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0327 23:52:43.232027 1086621 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-377576 && echo "ha-377576" | sudo tee /etc/hostname
	I0327 23:52:43.344231 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-377576
	
	I0327 23:52:43.344341 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:52:43.347237 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.347540 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:43.347580 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.347761 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:52:43.347958 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:43.348208 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:43.348324 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:52:43.348486 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:52:43.348682 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0327 23:52:43.348699 1086621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-377576' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-377576/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-377576' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0327 23:52:43.456557 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 23:52:43.456595 1086621 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0327 23:52:43.456649 1086621 buildroot.go:174] setting up certificates
	I0327 23:52:43.456678 1086621 provision.go:84] configureAuth start
	I0327 23:52:43.456700 1086621 main.go:141] libmachine: (ha-377576) Calling .GetMachineName
	I0327 23:52:43.457046 1086621 main.go:141] libmachine: (ha-377576) Calling .GetIP
	I0327 23:52:43.460050 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.460440 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:43.460474 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.460602 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:52:43.462984 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.463266 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:43.463298 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.463452 1086621 provision.go:143] copyHostCerts
	I0327 23:52:43.463487 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0327 23:52:43.463532 1086621 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0327 23:52:43.463541 1086621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0327 23:52:43.463610 1086621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0327 23:52:43.463694 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0327 23:52:43.463712 1086621 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0327 23:52:43.463719 1086621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0327 23:52:43.463743 1086621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0327 23:52:43.463787 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0327 23:52:43.463804 1086621 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0327 23:52:43.463810 1086621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0327 23:52:43.463829 1086621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0327 23:52:43.463880 1086621 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.ha-377576 san=[127.0.0.1 192.168.39.47 ha-377576 localhost minikube]
	I0327 23:52:43.642308 1086621 provision.go:177] copyRemoteCerts
	I0327 23:52:43.642380 1086621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0327 23:52:43.642408 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:52:43.645301 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.645576 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:43.645620 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.645826 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:52:43.646014 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:43.646166 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:52:43.646301 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0327 23:52:43.725452 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0327 23:52:43.725553 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0327 23:52:43.750634 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0327 23:52:43.750717 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0327 23:52:43.775284 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0327 23:52:43.775370 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
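The three scp transfers above install the CA certificate and the freshly generated server certificate and key (SANs: 127.0.0.1, 192.168.39.47, ha-377576, localhost, minikube) under /etc/docker on the guest. A quick, hedged way to verify what landed there, assuming openssl is present in the guest image; not part of the run:

	# Inspect the subject and validity of the server certificate copied above
	ssh docker@192.168.39.47 'sudo openssl x509 -in /etc/docker/server.pem -noout -subject -dates'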
	I0327 23:52:43.799029 1086621 provision.go:87] duration metric: took 342.333808ms to configureAuth
	I0327 23:52:43.799057 1086621 buildroot.go:189] setting minikube options for container-runtime
	I0327 23:52:43.799224 1086621 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 23:52:43.799312 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:52:43.802043 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.802451 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:43.802471 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:43.802693 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:52:43.802906 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:43.803143 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:43.803291 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:52:43.803498 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:52:43.803707 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0327 23:52:43.803732 1086621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0327 23:52:44.066756 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0327 23:52:44.066788 1086621 main.go:141] libmachine: Checking connection to Docker...
	I0327 23:52:44.066798 1086621 main.go:141] libmachine: (ha-377576) Calling .GetURL
	I0327 23:52:44.068332 1086621 main.go:141] libmachine: (ha-377576) DBG | Using libvirt version 6000000
	I0327 23:52:44.070555 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:44.070883 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:44.070914 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:44.071084 1086621 main.go:141] libmachine: Docker is up and running!
	I0327 23:52:44.071112 1086621 main.go:141] libmachine: Reticulating splines...
	I0327 23:52:44.071121 1086621 client.go:171] duration metric: took 27.91864995s to LocalClient.Create
	I0327 23:52:44.071147 1086621 start.go:167] duration metric: took 27.918726761s to libmachine.API.Create "ha-377576"
	I0327 23:52:44.071157 1086621 start.go:293] postStartSetup for "ha-377576" (driver="kvm2")
	I0327 23:52:44.071167 1086621 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0327 23:52:44.071183 1086621 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0327 23:52:44.071444 1086621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0327 23:52:44.071479 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:52:44.073535 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:44.073898 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:44.073930 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:44.074043 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:52:44.074258 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:44.074465 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:52:44.074657 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0327 23:52:44.157934 1086621 ssh_runner.go:195] Run: cat /etc/os-release
	I0327 23:52:44.162213 1086621 info.go:137] Remote host: Buildroot 2023.02.9
	I0327 23:52:44.162248 1086621 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0327 23:52:44.162319 1086621 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0327 23:52:44.162406 1086621 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0327 23:52:44.162423 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> /etc/ssl/certs/10765222.pem
	I0327 23:52:44.162539 1086621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0327 23:52:44.173260 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0327 23:52:44.198225 1086621 start.go:296] duration metric: took 127.049448ms for postStartSetup
	I0327 23:52:44.198302 1086621 main.go:141] libmachine: (ha-377576) Calling .GetConfigRaw
	I0327 23:52:44.198945 1086621 main.go:141] libmachine: (ha-377576) Calling .GetIP
	I0327 23:52:44.201358 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:44.201689 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:44.201731 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:44.201956 1086621 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/config.json ...
	I0327 23:52:44.202173 1086621 start.go:128] duration metric: took 28.06854382s to createHost
	I0327 23:52:44.202198 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:52:44.204255 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:44.204563 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:44.204585 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:44.204724 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:52:44.204943 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:44.205104 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:44.205268 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:52:44.205440 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:52:44.205607 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0327 23:52:44.205617 1086621 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0327 23:52:44.307153 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711583564.283994941
	
	I0327 23:52:44.307192 1086621 fix.go:216] guest clock: 1711583564.283994941
	I0327 23:52:44.307202 1086621 fix.go:229] Guest: 2024-03-27 23:52:44.283994941 +0000 UTC Remote: 2024-03-27 23:52:44.202188235 +0000 UTC m=+28.191661090 (delta=81.806706ms)
	I0327 23:52:44.307232 1086621 fix.go:200] guest clock delta is within tolerance: 81.806706ms
	I0327 23:52:44.307239 1086621 start.go:83] releasing machines lock for "ha-377576", held for 28.173715757s
	I0327 23:52:44.307268 1086621 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0327 23:52:44.307610 1086621 main.go:141] libmachine: (ha-377576) Calling .GetIP
	I0327 23:52:44.310114 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:44.310470 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:44.310500 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:44.310638 1086621 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0327 23:52:44.311177 1086621 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0327 23:52:44.311390 1086621 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0327 23:52:44.311497 1086621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0327 23:52:44.311548 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:52:44.311684 1086621 ssh_runner.go:195] Run: cat /version.json
	I0327 23:52:44.311711 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:52:44.313880 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:44.314113 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:44.314309 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:44.314341 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:44.314449 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:44.314483 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:44.314493 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:52:44.314654 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:52:44.314722 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:44.314835 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:52:44.314911 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:52:44.314982 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:52:44.315062 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0327 23:52:44.315117 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0327 23:52:44.387658 1086621 ssh_runner.go:195] Run: systemctl --version
	I0327 23:52:44.423158 1086621 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0327 23:52:44.585628 1086621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0327 23:52:44.591837 1086621 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0327 23:52:44.591900 1086621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0327 23:52:44.608131 1086621 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0327 23:52:44.608156 1086621 start.go:494] detecting cgroup driver to use...
	I0327 23:52:44.608235 1086621 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0327 23:52:44.624318 1086621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 23:52:44.639158 1086621 docker.go:217] disabling cri-docker service (if available) ...
	I0327 23:52:44.639244 1086621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0327 23:52:44.654032 1086621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0327 23:52:44.669218 1086621 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0327 23:52:44.786572 1086621 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0327 23:52:44.950797 1086621 docker.go:233] disabling docker service ...
	I0327 23:52:44.950891 1086621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0327 23:52:44.965206 1086621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0327 23:52:44.978629 1086621 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0327 23:52:45.095342 1086621 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0327 23:52:45.204691 1086621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0327 23:52:45.218871 1086621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 23:52:45.238462 1086621 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0327 23:52:45.238543 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:52:45.249244 1086621 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0327 23:52:45.249332 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:52:45.259853 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:52:45.270460 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:52:45.281148 1086621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0327 23:52:45.291751 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:52:45.302581 1086621 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:52:45.320266 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
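The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, switch the cgroup manager to cgroupfs, set conmon_cgroup to pod, and open unprivileged ports via a default sysctl. A hedged sketch of the values that drop-in should contain afterwards, inferred from those commands rather than captured verbatim in this log:

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	# "net.ipv4.ip_unprivileged_port_start=0"   (inside the default_sysctls list)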
	I0327 23:52:45.331412 1086621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0327 23:52:45.340733 1086621 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0327 23:52:45.340797 1086621 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0327 23:52:45.353559 1086621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0327 23:52:45.363291 1086621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:52:45.481299 1086621 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0327 23:52:45.635017 1086621 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0327 23:52:45.635106 1086621 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0327 23:52:45.640269 1086621 start.go:562] Will wait 60s for crictl version
	I0327 23:52:45.640336 1086621 ssh_runner.go:195] Run: which crictl
	I0327 23:52:45.644527 1086621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0327 23:52:45.686675 1086621 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0327 23:52:45.686756 1086621 ssh_runner.go:195] Run: crio --version
	I0327 23:52:45.716611 1086621 ssh_runner.go:195] Run: crio --version
	I0327 23:52:45.747217 1086621 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0327 23:52:45.748462 1086621 main.go:141] libmachine: (ha-377576) Calling .GetIP
	I0327 23:52:45.751504 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:45.751851 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:52:45.751884 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:52:45.752114 1086621 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0327 23:52:45.756617 1086621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 23:52:45.771501 1086621 kubeadm.go:877] updating cluster {Name:ha-377576 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:ha-377576 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0327 23:52:45.771616 1086621 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0327 23:52:45.771661 1086621 ssh_runner.go:195] Run: sudo crictl images --output json
	I0327 23:52:45.808162 1086621 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0327 23:52:45.808240 1086621 ssh_runner.go:195] Run: which lz4
	I0327 23:52:45.812245 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0327 23:52:45.812350 1086621 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0327 23:52:45.816564 1086621 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0327 23:52:45.816605 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0327 23:52:47.404885 1086621 crio.go:462] duration metric: took 1.592565204s to copy over tarball
	I0327 23:52:47.404982 1086621 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0327 23:52:49.661801 1086621 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256781993s)
	I0327 23:52:49.661840 1086621 crio.go:469] duration metric: took 2.25692182s to extract the tarball
	I0327 23:52:49.661849 1086621 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0327 23:52:49.701294 1086621 ssh_runner.go:195] Run: sudo crictl images --output json
	I0327 23:52:49.745828 1086621 crio.go:514] all images are preloaded for cri-o runtime.
	I0327 23:52:49.745853 1086621 cache_images.go:84] Images are preloaded, skipping loading
	I0327 23:52:49.745862 1086621 kubeadm.go:928] updating node { 192.168.39.47 8443 v1.29.3 crio true true} ...
	I0327 23:52:49.745980 1086621 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-377576 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-377576 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0327 23:52:49.746047 1086621 ssh_runner.go:195] Run: crio config
	I0327 23:52:49.795743 1086621 cni.go:84] Creating CNI manager for ""
	I0327 23:52:49.795765 1086621 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0327 23:52:49.795774 1086621 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0327 23:52:49.795796 1086621 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.47 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-377576 NodeName:ha-377576 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0327 23:52:49.795952 1086621 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.47
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-377576"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.47
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.47"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
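	Before this file is handed to kubeadm init below, the same invocation with --dry-run renders the manifests without touching the node, which is a cheap way to validate the generated config (a sketch, using the binary and config paths the log copies to):

	    sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" \
	      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run   # render only, no changes to the host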
	
	I0327 23:52:49.795981 1086621 kube-vip.go:111] generating kube-vip config ...
	I0327 23:52:49.796035 1086621 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0327 23:52:49.813337 1086621 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0327 23:52:49.813457 1086621 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
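	Once this static Pod is running on the elected leader, the HA VIP (192.168.39.254) should be bound on eth0 and the API server should answer through it. A sketch of the check (the anonymous /healthz read relies on the default system:public-info-viewer binding):

	    ip -4 addr show eth0 | grep 192.168.39.254           # VIP attached on the current kube-vip leader
	    curl -k https://192.168.39.254:8443/healthz          # expect "ok" once the control plane is up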
	I0327 23:52:49.813525 1086621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0327 23:52:49.824365 1086621 binaries.go:44] Found k8s binaries, skipping transfer
	I0327 23:52:49.824453 1086621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0327 23:52:49.834850 1086621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0327 23:52:49.852145 1086621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0327 23:52:49.869506 1086621 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0327 23:52:49.887226 1086621 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0327 23:52:49.904933 1086621 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0327 23:52:49.909004 1086621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 23:52:49.922928 1086621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:52:50.050938 1086621 ssh_runner.go:195] Run: sudo systemctl start kubelet
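	A quick sanity check that the kubelet restarted with the new drop-in (a sketch, on the node):

	    sudo systemctl status kubelet --no-pager
	    sudo journalctl -u kubelet --no-pager | tail -n 20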
	I0327 23:52:50.069329 1086621 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576 for IP: 192.168.39.47
	I0327 23:52:50.069361 1086621 certs.go:194] generating shared ca certs ...
	I0327 23:52:50.069382 1086621 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:52:50.069574 1086621 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0327 23:52:50.069625 1086621 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0327 23:52:50.069635 1086621 certs.go:256] generating profile certs ...
	I0327 23:52:50.069705 1086621 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/client.key
	I0327 23:52:50.069726 1086621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/client.crt with IP's: []
	I0327 23:52:50.366949 1086621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/client.crt ...
	I0327 23:52:50.366989 1086621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/client.crt: {Name:mk1d41578a56d1ff6fc7e659b4e37c20b338628b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:52:50.367268 1086621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/client.key ...
	I0327 23:52:50.367303 1086621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/client.key: {Name:mk706342d211e03475387d7a483acc5545792a46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:52:50.367440 1086621 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.71312e33
	I0327 23:52:50.367461 1086621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.71312e33 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.47 192.168.39.254]
	I0327 23:52:50.599407 1086621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.71312e33 ...
	I0327 23:52:50.599456 1086621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.71312e33: {Name:mk693900fd14c89f17e34a8eb0d7a534d0f67662 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:52:50.599698 1086621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.71312e33 ...
	I0327 23:52:50.599724 1086621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.71312e33: {Name:mk335ff62d1fd6fb0ca416a673648c77ad800201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:52:50.599848 1086621 certs.go:381] copying /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.71312e33 -> /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt
	I0327 23:52:50.599992 1086621 certs.go:385] copying /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.71312e33 -> /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key
	I0327 23:52:50.600079 1086621 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.key
	I0327 23:52:50.600101 1086621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.crt with IP's: []
	I0327 23:52:50.910113 1086621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.crt ...
	I0327 23:52:50.910156 1086621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.crt: {Name:mk2ce6ac8523adee2bde9e93ac88ef9a3e9fa932 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:52:50.910352 1086621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.key ...
	I0327 23:52:50.910366 1086621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.key: {Name:mkfd2d417237ed30cbebe68eb094310dc75e3e0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
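	The apiserver certificate generated here must list the HA VIP (192.168.39.254) as well as the node IP among its SANs, otherwise clients reaching the control plane through kube-vip would fail TLS verification. A sketch of verifying the copy that lands on the node:

	    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
	      | grep -A2 'Subject Alternative Name'              # expect 192.168.39.47 and 192.168.39.254 listed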
	I0327 23:52:50.910437 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0327 23:52:50.910454 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0327 23:52:50.910467 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0327 23:52:50.910479 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0327 23:52:50.910492 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0327 23:52:50.910505 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0327 23:52:50.910517 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0327 23:52:50.910529 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0327 23:52:50.910582 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0327 23:52:50.910624 1086621 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0327 23:52:50.910632 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0327 23:52:50.910654 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0327 23:52:50.910677 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0327 23:52:50.910698 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0327 23:52:50.910733 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0327 23:52:50.910768 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem -> /usr/share/ca-certificates/1076522.pem
	I0327 23:52:50.910782 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> /usr/share/ca-certificates/10765222.pem
	I0327 23:52:50.910795 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:52:50.911422 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0327 23:52:50.939932 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0327 23:52:50.971656 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0327 23:52:51.005915 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0327 23:52:51.033590 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0327 23:52:51.061650 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0327 23:52:51.121413 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0327 23:52:51.149533 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0327 23:52:51.177918 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0327 23:52:51.204176 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0327 23:52:51.229832 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0327 23:52:51.255289 1086621 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0327 23:52:51.273246 1086621 ssh_runner.go:195] Run: openssl version
	I0327 23:52:51.279240 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0327 23:52:51.290343 1086621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0327 23:52:51.295088 1086621 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0327 23:52:51.295139 1086621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0327 23:52:51.300752 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0327 23:52:51.311496 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0327 23:52:51.322244 1086621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0327 23:52:51.326858 1086621 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0327 23:52:51.326921 1086621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0327 23:52:51.332664 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0327 23:52:51.343703 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0327 23:52:51.356685 1086621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:52:51.361385 1086621 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:52:51.361461 1086621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:52:51.367629 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
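	The 8-hex-digit link names (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes, which is how lookups in /etc/ssl/certs work; the hash printed by the x509 command is the link name minus the .0 suffix. A sketch:

	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    ls -l /etc/ssl/certs/b5213941.0                                           # symlink created by the step above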
	I0327 23:52:51.379407 1086621 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0327 23:52:51.383611 1086621 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0327 23:52:51.383671 1086621 kubeadm.go:391] StartCluster: {Name:ha-377576 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-377576 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 23:52:51.383765 1086621 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0327 23:52:51.383811 1086621 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0327 23:52:51.419239 1086621 cri.go:89] found id: ""
	I0327 23:52:51.419316 1086621 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0327 23:52:51.429183 1086621 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 23:52:51.438745 1086621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 23:52:51.448195 1086621 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0327 23:52:51.448217 1086621 kubeadm.go:156] found existing configuration files:
	
	I0327 23:52:51.448260 1086621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0327 23:52:51.457235 1086621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0327 23:52:51.457294 1086621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 23:52:51.466705 1086621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0327 23:52:51.475578 1086621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0327 23:52:51.475648 1086621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 23:52:51.485356 1086621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0327 23:52:51.494473 1086621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0327 23:52:51.494539 1086621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 23:52:51.504355 1086621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0327 23:52:51.513909 1086621 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0327 23:52:51.513966 1086621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0327 23:52:51.523362 1086621 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0327 23:52:51.636705 1086621 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0327 23:52:51.636774 1086621 kubeadm.go:309] [preflight] Running pre-flight checks
	I0327 23:52:51.800363 1086621 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0327 23:52:51.800510 1086621 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0327 23:52:51.800626 1086621 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0327 23:52:52.010811 1086621 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0327 23:52:52.144657 1086621 out.go:204]   - Generating certificates and keys ...
	I0327 23:52:52.144805 1086621 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0327 23:52:52.144916 1086621 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0327 23:52:52.344008 1086621 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0327 23:52:52.741592 1086621 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0327 23:52:52.910542 1086621 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0327 23:52:53.119492 1086621 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0327 23:52:53.281640 1086621 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0327 23:52:53.281836 1086621 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-377576 localhost] and IPs [192.168.39.47 127.0.0.1 ::1]
	I0327 23:52:53.419622 1086621 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0327 23:52:53.419795 1086621 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-377576 localhost] and IPs [192.168.39.47 127.0.0.1 ::1]
	I0327 23:52:53.696144 1086621 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0327 23:52:53.818770 1086621 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0327 23:52:53.917616 1086621 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0327 23:52:53.917779 1086621 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0327 23:52:53.984881 1086621 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0327 23:52:54.055230 1086621 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0327 23:52:54.140631 1086621 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0327 23:52:54.315913 1086621 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0327 23:52:54.381453 1086621 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0327 23:52:54.382203 1086621 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0327 23:52:54.386750 1086621 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0327 23:52:54.388612 1086621 out.go:204]   - Booting up control plane ...
	I0327 23:52:54.388716 1086621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0327 23:52:54.388805 1086621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0327 23:52:54.389235 1086621 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0327 23:52:54.407469 1086621 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0327 23:52:54.408355 1086621 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0327 23:52:54.408591 1086621 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0327 23:52:54.542068 1086621 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0327 23:53:01.128995 1086621 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.590435 seconds
	I0327 23:53:01.145793 1086621 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0327 23:53:01.165234 1086621 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0327 23:53:01.701282 1086621 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0327 23:53:01.701515 1086621 kubeadm.go:309] [mark-control-plane] Marking the node ha-377576 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0327 23:53:02.217178 1086621 kubeadm.go:309] [bootstrap-token] Using token: oom77v.j3g2umgvg8sl8qjv
	I0327 23:53:02.219040 1086621 out.go:204]   - Configuring RBAC rules ...
	I0327 23:53:02.219154 1086621 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0327 23:53:02.225966 1086621 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0327 23:53:02.239486 1086621 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0327 23:53:02.243901 1086621 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0327 23:53:02.249599 1086621 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0327 23:53:02.257315 1086621 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0327 23:53:02.277875 1086621 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0327 23:53:02.529458 1086621 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0327 23:53:02.636494 1086621 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0327 23:53:02.638841 1086621 kubeadm.go:309] 
	I0327 23:53:02.638921 1086621 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0327 23:53:02.638931 1086621 kubeadm.go:309] 
	I0327 23:53:02.639058 1086621 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0327 23:53:02.639079 1086621 kubeadm.go:309] 
	I0327 23:53:02.639111 1086621 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0327 23:53:02.639179 1086621 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0327 23:53:02.639245 1086621 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0327 23:53:02.639257 1086621 kubeadm.go:309] 
	I0327 23:53:02.639338 1086621 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0327 23:53:02.639349 1086621 kubeadm.go:309] 
	I0327 23:53:02.639421 1086621 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0327 23:53:02.639434 1086621 kubeadm.go:309] 
	I0327 23:53:02.639490 1086621 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0327 23:53:02.639601 1086621 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0327 23:53:02.639726 1086621 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0327 23:53:02.639745 1086621 kubeadm.go:309] 
	I0327 23:53:02.639831 1086621 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0327 23:53:02.639895 1086621 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0327 23:53:02.639901 1086621 kubeadm.go:309] 
	I0327 23:53:02.640025 1086621 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token oom77v.j3g2umgvg8sl8qjv \
	I0327 23:53:02.640166 1086621 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 \
	I0327 23:53:02.640204 1086621 kubeadm.go:309] 	--control-plane 
	I0327 23:53:02.640220 1086621 kubeadm.go:309] 
	I0327 23:53:02.640305 1086621 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0327 23:53:02.640313 1086621 kubeadm.go:309] 
	I0327 23:53:02.640402 1086621 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token oom77v.j3g2umgvg8sl8qjv \
	I0327 23:53:02.640553 1086621 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 
	I0327 23:53:02.642281 1086621 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
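	The --discovery-token-ca-cert-hash in these join commands is the SHA-256 of the cluster CA's public key, so a prospective node can recompute and compare it before joining. A sketch, assuming the CA is read from the path minikube copied it to earlier (/var/lib/minikube/certs/ca.crt):

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'       # should print eb47e943713ff45b...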
	I0327 23:53:02.642307 1086621 cni.go:84] Creating CNI manager for ""
	I0327 23:53:02.642315 1086621 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0327 23:53:02.644064 1086621 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0327 23:53:02.645282 1086621 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0327 23:53:02.676273 1086621 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0327 23:53:02.676309 1086621 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0327 23:53:02.700454 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
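	Node readiness depends on the CNI pods coming up after this apply; a sketch of the follow-up check from the host (the kindnet DaemonSet name is an assumption based on the manifest minikube ships for kindnet):

	    kubectl --context ha-377576 -n kube-system get daemonset kindnet    # assumed object name
	    kubectl --context ha-377576 get nodes                               # node flips to Ready once the CNI is running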
	I0327 23:53:03.122994 1086621 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0327 23:53:03.123079 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:03.123079 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-377576 minikube.k8s.io/updated_at=2024_03_27T23_53_03_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=ha-377576 minikube.k8s.io/primary=true
	I0327 23:53:03.160178 1086621 ops.go:34] apiserver oom_adj: -16
	I0327 23:53:03.264652 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:03.765712 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:04.265515 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:04.765454 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:05.265528 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:05.765468 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:06.265163 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:06.764846 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:07.264791 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:07.765632 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:08.265071 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:08.765617 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:09.265700 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:09.764872 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:10.265282 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:10.764681 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:11.265261 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:11.765608 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:12.264990 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:12.764869 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:13.264768 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:13.764825 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:14.264753 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 23:53:14.378744 1086621 kubeadm.go:1107] duration metric: took 11.255748792s to wait for elevateKubeSystemPrivileges
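	The repeated "get sa default" calls above are a readiness poll: the default ServiceAccount only exists once the controller-manager's service-account controller has run, which is what gates granting kube-system privileges. The same wait, written out as a sketch against the host kubeconfig:

	    until kubectl --context ha-377576 -n default get serviceaccount default >/dev/null 2>&1; do
	      sleep 0.5                                          # retry until the default SA appears
	    done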
	W0327 23:53:14.378795 1086621 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0327 23:53:14.378803 1086621 kubeadm.go:393] duration metric: took 22.995138013s to StartCluster
	I0327 23:53:14.378822 1086621 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:53:14.378931 1086621 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0327 23:53:14.379795 1086621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:53:14.380047 1086621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0327 23:53:14.380072 1086621 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0327 23:53:14.380099 1086621 start.go:240] waiting for startup goroutines ...
	I0327 23:53:14.380121 1086621 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0327 23:53:14.380173 1086621 addons.go:69] Setting storage-provisioner=true in profile "ha-377576"
	I0327 23:53:14.380229 1086621 addons.go:234] Setting addon storage-provisioner=true in "ha-377576"
	I0327 23:53:14.380246 1086621 addons.go:69] Setting default-storageclass=true in profile "ha-377576"
	I0327 23:53:14.380271 1086621 host.go:66] Checking if "ha-377576" exists ...
	I0327 23:53:14.380279 1086621 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-377576"
	I0327 23:53:14.380289 1086621 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 23:53:14.380642 1086621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:53:14.380674 1086621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:53:14.380863 1086621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:53:14.380899 1086621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:53:14.395863 1086621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42487
	I0327 23:53:14.396088 1086621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39925
	I0327 23:53:14.396447 1086621 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:53:14.396569 1086621 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:53:14.397040 1086621 main.go:141] libmachine: Using API Version  1
	I0327 23:53:14.397057 1086621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:53:14.397258 1086621 main.go:141] libmachine: Using API Version  1
	I0327 23:53:14.397286 1086621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:53:14.397372 1086621 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:53:14.397556 1086621 main.go:141] libmachine: (ha-377576) Calling .GetState
	I0327 23:53:14.397750 1086621 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:53:14.398335 1086621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:53:14.398363 1086621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:53:14.399962 1086621 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0327 23:53:14.400415 1086621 kapi.go:59] client config for ha-377576: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/client.crt", KeyFile:"/home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/client.key", CAFile:"/home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c58000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0327 23:53:14.401059 1086621 cert_rotation.go:137] Starting client certificate rotation controller
	I0327 23:53:14.401420 1086621 addons.go:234] Setting addon default-storageclass=true in "ha-377576"
	I0327 23:53:14.401478 1086621 host.go:66] Checking if "ha-377576" exists ...
	I0327 23:53:14.401883 1086621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:53:14.401921 1086621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:53:14.413982 1086621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45293
	I0327 23:53:14.414461 1086621 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:53:14.415004 1086621 main.go:141] libmachine: Using API Version  1
	I0327 23:53:14.415043 1086621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:53:14.415436 1086621 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:53:14.415672 1086621 main.go:141] libmachine: (ha-377576) Calling .GetState
	I0327 23:53:14.417095 1086621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35553
	I0327 23:53:14.417515 1086621 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0327 23:53:14.417579 1086621 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:53:14.420005 1086621 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 23:53:14.418067 1086621 main.go:141] libmachine: Using API Version  1
	I0327 23:53:14.420036 1086621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:53:14.421548 1086621 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 23:53:14.421564 1086621 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0327 23:53:14.421580 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:53:14.421978 1086621 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:53:14.422630 1086621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:53:14.422682 1086621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:53:14.424924 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:53:14.425383 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:53:14.425410 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:53:14.425610 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:53:14.425808 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:53:14.425996 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:53:14.426132 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0327 23:53:14.438960 1086621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44561
	I0327 23:53:14.439398 1086621 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:53:14.439917 1086621 main.go:141] libmachine: Using API Version  1
	I0327 23:53:14.439939 1086621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:53:14.440319 1086621 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:53:14.440593 1086621 main.go:141] libmachine: (ha-377576) Calling .GetState
	I0327 23:53:14.442301 1086621 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0327 23:53:14.442580 1086621 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0327 23:53:14.442597 1086621 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0327 23:53:14.442612 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:53:14.445647 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:53:14.446085 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:53:14.446108 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:53:14.446394 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:53:14.446606 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:53:14.446757 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:53:14.446891 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0327 23:53:14.496411 1086621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0327 23:53:14.577119 1086621 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 23:53:14.616830 1086621 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0327 23:53:15.020906 1086621 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
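	The injected record can be confirmed in the CoreDNS ConfigMap (a sketch, from the host):

	    kubectl --context ha-377576 -n kube-system get configmap coredns -o yaml \
	      | grep -B1 -A2 host.minikube.internal              # expect the 192.168.39.1 hosts entry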
	I0327 23:53:15.288424 1086621 main.go:141] libmachine: Making call to close driver server
	I0327 23:53:15.288460 1086621 main.go:141] libmachine: (ha-377576) Calling .Close
	I0327 23:53:15.288477 1086621 main.go:141] libmachine: Making call to close driver server
	I0327 23:53:15.288501 1086621 main.go:141] libmachine: (ha-377576) Calling .Close
	I0327 23:53:15.288790 1086621 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:53:15.288798 1086621 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:53:15.288808 1086621 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:53:15.288812 1086621 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:53:15.288818 1086621 main.go:141] libmachine: Making call to close driver server
	I0327 23:53:15.288821 1086621 main.go:141] libmachine: Making call to close driver server
	I0327 23:53:15.288826 1086621 main.go:141] libmachine: (ha-377576) Calling .Close
	I0327 23:53:15.288832 1086621 main.go:141] libmachine: (ha-377576) Calling .Close
	I0327 23:53:15.289108 1086621 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:53:15.289125 1086621 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:53:15.289145 1086621 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:53:15.289160 1086621 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:53:15.289235 1086621 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0327 23:53:15.289242 1086621 round_trippers.go:469] Request Headers:
	I0327 23:53:15.289252 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:53:15.289258 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:53:15.300091 1086621 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0327 23:53:15.300824 1086621 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0327 23:53:15.300839 1086621 round_trippers.go:469] Request Headers:
	I0327 23:53:15.300847 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:53:15.300851 1086621 round_trippers.go:473]     Content-Type: application/json
	I0327 23:53:15.300855 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:53:15.305315 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:53:15.305614 1086621 main.go:141] libmachine: Making call to close driver server
	I0327 23:53:15.305636 1086621 main.go:141] libmachine: (ha-377576) Calling .Close
	I0327 23:53:15.305931 1086621 main.go:141] libmachine: Successfully made call to close driver server
	I0327 23:53:15.305948 1086621 main.go:141] libmachine: Making call to close connection to plugin binary
	I0327 23:53:15.305972 1086621 main.go:141] libmachine: (ha-377576) DBG | Closing plugin on server side
	I0327 23:53:15.307723 1086621 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0327 23:53:15.309131 1086621 addons.go:505] duration metric: took 929.009209ms for enable addons: enabled=[storage-provisioner default-storageclass]
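	A sketch of verifying both addons from the host (the storage-provisioner pod name reflects how minikube normally deploys it and is an assumption here):

	    kubectl --context ha-377576 get storageclass                            # "standard" should be marked (default)
	    kubectl --context ha-377576 -n kube-system get pod storage-provisioner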
	I0327 23:53:15.309164 1086621 start.go:245] waiting for cluster config update ...
	I0327 23:53:15.309184 1086621 start.go:254] writing updated cluster config ...
	I0327 23:53:15.310769 1086621 out.go:177] 
	I0327 23:53:15.312225 1086621 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 23:53:15.312306 1086621 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/config.json ...
	I0327 23:53:15.314088 1086621 out.go:177] * Starting "ha-377576-m02" control-plane node in "ha-377576" cluster
	I0327 23:53:15.315412 1086621 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0327 23:53:15.315441 1086621 cache.go:56] Caching tarball of preloaded images
	I0327 23:53:15.315561 1086621 preload.go:173] Found /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0327 23:53:15.315577 1086621 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0327 23:53:15.315664 1086621 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/config.json ...
	I0327 23:53:15.315876 1086621 start.go:360] acquireMachinesLock for ha-377576-m02: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 23:53:15.315964 1086621 start.go:364] duration metric: took 29.087µs to acquireMachinesLock for "ha-377576-m02"
	I0327 23:53:15.315989 1086621 start.go:93] Provisioning new machine with config: &{Name:ha-377576 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-377576 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0327 23:53:15.316078 1086621 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0327 23:53:15.317833 1086621 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 23:53:15.317925 1086621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:53:15.317952 1086621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:53:15.332686 1086621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38675
	I0327 23:53:15.333194 1086621 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:53:15.333658 1086621 main.go:141] libmachine: Using API Version  1
	I0327 23:53:15.333688 1086621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:53:15.334060 1086621 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:53:15.334292 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetMachineName
	I0327 23:53:15.334455 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .DriverName
	I0327 23:53:15.334652 1086621 start.go:159] libmachine.API.Create for "ha-377576" (driver="kvm2")
	I0327 23:53:15.334677 1086621 client.go:168] LocalClient.Create starting
	I0327 23:53:15.334712 1086621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem
	I0327 23:53:15.334751 1086621 main.go:141] libmachine: Decoding PEM data...
	I0327 23:53:15.334766 1086621 main.go:141] libmachine: Parsing certificate...
	I0327 23:53:15.334821 1086621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem
	I0327 23:53:15.334838 1086621 main.go:141] libmachine: Decoding PEM data...
	I0327 23:53:15.334852 1086621 main.go:141] libmachine: Parsing certificate...
	I0327 23:53:15.334868 1086621 main.go:141] libmachine: Running pre-create checks...
	I0327 23:53:15.334876 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .PreCreateCheck
	I0327 23:53:15.335043 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetConfigRaw
	I0327 23:53:15.335460 1086621 main.go:141] libmachine: Creating machine...
	I0327 23:53:15.335475 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .Create
	I0327 23:53:15.335629 1086621 main.go:141] libmachine: (ha-377576-m02) Creating KVM machine...
	I0327 23:53:15.337060 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found existing default KVM network
	I0327 23:53:15.337187 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found existing private KVM network mk-ha-377576
	I0327 23:53:15.337407 1086621 main.go:141] libmachine: (ha-377576-m02) Setting up store path in /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02 ...
	I0327 23:53:15.337427 1086621 main.go:141] libmachine: (ha-377576-m02) Building disk image from file:///home/jenkins/minikube-integration/18485-1069254/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso
	I0327 23:53:15.337508 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:15.337395 1086974 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18485-1069254/.minikube
	I0327 23:53:15.337599 1086621 main.go:141] libmachine: (ha-377576-m02) Downloading /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18485-1069254/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0327 23:53:15.585573 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:15.585404 1086974 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/id_rsa...
	I0327 23:53:15.737806 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:15.737609 1086974 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/ha-377576-m02.rawdisk...
	I0327 23:53:15.737849 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | Writing magic tar header
	I0327 23:53:15.737862 1086621 main.go:141] libmachine: (ha-377576-m02) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02 (perms=drwx------)
	I0327 23:53:15.737872 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | Writing SSH key tar header
	I0327 23:53:15.737889 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:15.737729 1086974 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02 ...
	I0327 23:53:15.737901 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02
	I0327 23:53:15.737942 1086621 main.go:141] libmachine: (ha-377576-m02) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254/.minikube/machines (perms=drwxr-xr-x)
	I0327 23:53:15.737983 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines
	I0327 23:53:15.737999 1086621 main.go:141] libmachine: (ha-377576-m02) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254/.minikube (perms=drwxr-xr-x)
	I0327 23:53:15.738014 1086621 main.go:141] libmachine: (ha-377576-m02) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254 (perms=drwxrwxr-x)
	I0327 23:53:15.738027 1086621 main.go:141] libmachine: (ha-377576-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0327 23:53:15.738048 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254/.minikube
	I0327 23:53:15.738066 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254
	I0327 23:53:15.738081 1086621 main.go:141] libmachine: (ha-377576-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0327 23:53:15.738098 1086621 main.go:141] libmachine: (ha-377576-m02) Creating domain...
	I0327 23:53:15.738114 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0327 23:53:15.738131 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | Checking permissions on dir: /home/jenkins
	I0327 23:53:15.738149 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | Checking permissions on dir: /home
	I0327 23:53:15.738162 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | Skipping /home - not owner
	I0327 23:53:15.739118 1086621 main.go:141] libmachine: (ha-377576-m02) define libvirt domain using xml: 
	I0327 23:53:15.739144 1086621 main.go:141] libmachine: (ha-377576-m02) <domain type='kvm'>
	I0327 23:53:15.739156 1086621 main.go:141] libmachine: (ha-377576-m02)   <name>ha-377576-m02</name>
	I0327 23:53:15.739173 1086621 main.go:141] libmachine: (ha-377576-m02)   <memory unit='MiB'>2200</memory>
	I0327 23:53:15.739183 1086621 main.go:141] libmachine: (ha-377576-m02)   <vcpu>2</vcpu>
	I0327 23:53:15.739195 1086621 main.go:141] libmachine: (ha-377576-m02)   <features>
	I0327 23:53:15.739207 1086621 main.go:141] libmachine: (ha-377576-m02)     <acpi/>
	I0327 23:53:15.739220 1086621 main.go:141] libmachine: (ha-377576-m02)     <apic/>
	I0327 23:53:15.739229 1086621 main.go:141] libmachine: (ha-377576-m02)     <pae/>
	I0327 23:53:15.739239 1086621 main.go:141] libmachine: (ha-377576-m02)     
	I0327 23:53:15.739248 1086621 main.go:141] libmachine: (ha-377576-m02)   </features>
	I0327 23:53:15.739261 1086621 main.go:141] libmachine: (ha-377576-m02)   <cpu mode='host-passthrough'>
	I0327 23:53:15.739298 1086621 main.go:141] libmachine: (ha-377576-m02)   
	I0327 23:53:15.739320 1086621 main.go:141] libmachine: (ha-377576-m02)   </cpu>
	I0327 23:53:15.739330 1086621 main.go:141] libmachine: (ha-377576-m02)   <os>
	I0327 23:53:15.739343 1086621 main.go:141] libmachine: (ha-377576-m02)     <type>hvm</type>
	I0327 23:53:15.739389 1086621 main.go:141] libmachine: (ha-377576-m02)     <boot dev='cdrom'/>
	I0327 23:53:15.739415 1086621 main.go:141] libmachine: (ha-377576-m02)     <boot dev='hd'/>
	I0327 23:53:15.739434 1086621 main.go:141] libmachine: (ha-377576-m02)     <bootmenu enable='no'/>
	I0327 23:53:15.739460 1086621 main.go:141] libmachine: (ha-377576-m02)   </os>
	I0327 23:53:15.739474 1086621 main.go:141] libmachine: (ha-377576-m02)   <devices>
	I0327 23:53:15.739483 1086621 main.go:141] libmachine: (ha-377576-m02)     <disk type='file' device='cdrom'>
	I0327 23:53:15.739502 1086621 main.go:141] libmachine: (ha-377576-m02)       <source file='/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/boot2docker.iso'/>
	I0327 23:53:15.739513 1086621 main.go:141] libmachine: (ha-377576-m02)       <target dev='hdc' bus='scsi'/>
	I0327 23:53:15.739532 1086621 main.go:141] libmachine: (ha-377576-m02)       <readonly/>
	I0327 23:53:15.739546 1086621 main.go:141] libmachine: (ha-377576-m02)     </disk>
	I0327 23:53:15.739559 1086621 main.go:141] libmachine: (ha-377576-m02)     <disk type='file' device='disk'>
	I0327 23:53:15.739573 1086621 main.go:141] libmachine: (ha-377576-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0327 23:53:15.739587 1086621 main.go:141] libmachine: (ha-377576-m02)       <source file='/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/ha-377576-m02.rawdisk'/>
	I0327 23:53:15.739603 1086621 main.go:141] libmachine: (ha-377576-m02)       <target dev='hda' bus='virtio'/>
	I0327 23:53:15.739616 1086621 main.go:141] libmachine: (ha-377576-m02)     </disk>
	I0327 23:53:15.739628 1086621 main.go:141] libmachine: (ha-377576-m02)     <interface type='network'>
	I0327 23:53:15.739642 1086621 main.go:141] libmachine: (ha-377576-m02)       <source network='mk-ha-377576'/>
	I0327 23:53:15.739653 1086621 main.go:141] libmachine: (ha-377576-m02)       <model type='virtio'/>
	I0327 23:53:15.739664 1086621 main.go:141] libmachine: (ha-377576-m02)     </interface>
	I0327 23:53:15.739676 1086621 main.go:141] libmachine: (ha-377576-m02)     <interface type='network'>
	I0327 23:53:15.739688 1086621 main.go:141] libmachine: (ha-377576-m02)       <source network='default'/>
	I0327 23:53:15.739697 1086621 main.go:141] libmachine: (ha-377576-m02)       <model type='virtio'/>
	I0327 23:53:15.739705 1086621 main.go:141] libmachine: (ha-377576-m02)     </interface>
	I0327 23:53:15.739715 1086621 main.go:141] libmachine: (ha-377576-m02)     <serial type='pty'>
	I0327 23:53:15.739725 1086621 main.go:141] libmachine: (ha-377576-m02)       <target port='0'/>
	I0327 23:53:15.739737 1086621 main.go:141] libmachine: (ha-377576-m02)     </serial>
	I0327 23:53:15.739748 1086621 main.go:141] libmachine: (ha-377576-m02)     <console type='pty'>
	I0327 23:53:15.739760 1086621 main.go:141] libmachine: (ha-377576-m02)       <target type='serial' port='0'/>
	I0327 23:53:15.739767 1086621 main.go:141] libmachine: (ha-377576-m02)     </console>
	I0327 23:53:15.739780 1086621 main.go:141] libmachine: (ha-377576-m02)     <rng model='virtio'>
	I0327 23:53:15.739791 1086621 main.go:141] libmachine: (ha-377576-m02)       <backend model='random'>/dev/random</backend>
	I0327 23:53:15.739802 1086621 main.go:141] libmachine: (ha-377576-m02)     </rng>
	I0327 23:53:15.739815 1086621 main.go:141] libmachine: (ha-377576-m02)     
	I0327 23:53:15.739824 1086621 main.go:141] libmachine: (ha-377576-m02)     
	I0327 23:53:15.739831 1086621 main.go:141] libmachine: (ha-377576-m02)   </devices>
	I0327 23:53:15.739841 1086621 main.go:141] libmachine: (ha-377576-m02) </domain>
	I0327 23:53:15.739847 1086621 main.go:141] libmachine: (ha-377576-m02) 
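The kvm2 driver turns the domain XML dumped above into a running VM through the libvirt Go bindings. A minimal sketch of that define-and-start flow, assuming the libvirt.org/go/libvirt package (requires the libvirt C headers; the XML file name here is hypothetical):

// Hedged sketch, not minikube's actual code: define a domain from XML and start it.
package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("ha-377576-m02.xml") // domain XML as logged above
	if err != nil {
		log.Fatal(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system") // KVMQemuURI from the machine config
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // "Creating domain..." actually boots the VM
		log.Fatal(err)
	}
}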
	I0327 23:53:15.747300 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:61:b9:1e in network default
	I0327 23:53:15.748104 1086621 main.go:141] libmachine: (ha-377576-m02) Ensuring networks are active...
	I0327 23:53:15.748133 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:15.748959 1086621 main.go:141] libmachine: (ha-377576-m02) Ensuring network default is active
	I0327 23:53:15.749414 1086621 main.go:141] libmachine: (ha-377576-m02) Ensuring network mk-ha-377576 is active
	I0327 23:53:15.749904 1086621 main.go:141] libmachine: (ha-377576-m02) Getting domain xml...
	I0327 23:53:15.750948 1086621 main.go:141] libmachine: (ha-377576-m02) Creating domain...
	I0327 23:53:16.994107 1086621 main.go:141] libmachine: (ha-377576-m02) Waiting to get IP...
	I0327 23:53:16.994933 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:16.995389 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | unable to find current IP address of domain ha-377576-m02 in network mk-ha-377576
	I0327 23:53:16.995419 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:16.995361 1086974 retry.go:31] will retry after 307.585701ms: waiting for machine to come up
	I0327 23:53:17.305617 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:17.306593 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | unable to find current IP address of domain ha-377576-m02 in network mk-ha-377576
	I0327 23:53:17.306623 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:17.306553 1086974 retry.go:31] will retry after 321.687137ms: waiting for machine to come up
	I0327 23:53:17.630498 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:17.630996 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | unable to find current IP address of domain ha-377576-m02 in network mk-ha-377576
	I0327 23:53:17.631028 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:17.630940 1086974 retry.go:31] will retry after 411.240849ms: waiting for machine to come up
	I0327 23:53:18.043729 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:18.044211 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | unable to find current IP address of domain ha-377576-m02 in network mk-ha-377576
	I0327 23:53:18.044244 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:18.044147 1086974 retry.go:31] will retry after 543.743675ms: waiting for machine to come up
	I0327 23:53:18.589887 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:18.590408 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | unable to find current IP address of domain ha-377576-m02 in network mk-ha-377576
	I0327 23:53:18.590439 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:18.590370 1086974 retry.go:31] will retry after 541.228138ms: waiting for machine to come up
	I0327 23:53:19.133287 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:19.133820 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | unable to find current IP address of domain ha-377576-m02 in network mk-ha-377576
	I0327 23:53:19.133854 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:19.133772 1086974 retry.go:31] will retry after 874.601632ms: waiting for machine to come up
	I0327 23:53:20.009880 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:20.010299 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | unable to find current IP address of domain ha-377576-m02 in network mk-ha-377576
	I0327 23:53:20.010336 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:20.010244 1086974 retry.go:31] will retry after 764.266491ms: waiting for machine to come up
	I0327 23:53:20.776759 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:20.777293 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | unable to find current IP address of domain ha-377576-m02 in network mk-ha-377576
	I0327 23:53:20.777322 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:20.777229 1086974 retry.go:31] will retry after 1.354206268s: waiting for machine to come up
	I0327 23:53:22.132893 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:22.133295 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | unable to find current IP address of domain ha-377576-m02 in network mk-ha-377576
	I0327 23:53:22.133328 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:22.133231 1086974 retry.go:31] will retry after 1.748976151s: waiting for machine to come up
	I0327 23:53:23.884465 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:23.884952 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | unable to find current IP address of domain ha-377576-m02 in network mk-ha-377576
	I0327 23:53:23.884985 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:23.884894 1086974 retry.go:31] will retry after 1.53502578s: waiting for machine to come up
	I0327 23:53:25.421857 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:25.422261 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | unable to find current IP address of domain ha-377576-m02 in network mk-ha-377576
	I0327 23:53:25.422293 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:25.422217 1086974 retry.go:31] will retry after 2.750520171s: waiting for machine to come up
	I0327 23:53:28.176280 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:28.176674 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | unable to find current IP address of domain ha-377576-m02 in network mk-ha-377576
	I0327 23:53:28.176704 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:28.176610 1086974 retry.go:31] will retry after 2.87947611s: waiting for machine to come up
	I0327 23:53:31.057720 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:31.058132 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | unable to find current IP address of domain ha-377576-m02 in network mk-ha-377576
	I0327 23:53:31.058168 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:31.058076 1086974 retry.go:31] will retry after 4.114177302s: waiting for machine to come up
	I0327 23:53:35.177386 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:35.177859 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | unable to find current IP address of domain ha-377576-m02 in network mk-ha-377576
	I0327 23:53:35.177882 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | I0327 23:53:35.177820 1086974 retry.go:31] will retry after 5.380971027s: waiting for machine to come up
	I0327 23:53:40.559846 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:40.560341 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has current primary IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:40.560367 1086621 main.go:141] libmachine: (ha-377576-m02) Found IP for machine: 192.168.39.117
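The retry.go lines above show the driver polling for a DHCP lease with randomized, growing delays until the MAC gets an address. A self-contained sketch of that wait-for-IP pattern (lookupLeaseIP is a hypothetical stand-in for querying the libvirt network's leases):

// Hedged sketch of a poll-with-backoff loop like the one logged above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitForIP(lookupLeaseIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(); err == nil {
			return ip, nil
		}
		// Add jitter and grow the delay, roughly matching the 0.3s..5s spread in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep.Round(time.Millisecond))
		time.Sleep(sleep)
		if delay < 3*time.Second {
			delay += delay / 2
		}
	}
	return "", errors.New("timed out waiting for the machine to get an IP")
}

func main() {
	tries := 0
	ip, err := waitForIP(func() (string, error) {
		if tries++; tries < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.39.117", nil
	}, 2*time.Minute)
	fmt.Println(ip, err)
}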
	I0327 23:53:40.560414 1086621 main.go:141] libmachine: (ha-377576-m02) Reserving static IP address...
	I0327 23:53:40.560762 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | unable to find host DHCP lease matching {name: "ha-377576-m02", mac: "52:54:00:bb:83:99", ip: "192.168.39.117"} in network mk-ha-377576
	I0327 23:53:40.639456 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | Getting to WaitForSSH function...
	I0327 23:53:40.639494 1086621 main.go:141] libmachine: (ha-377576-m02) Reserved static IP address: 192.168.39.117
	I0327 23:53:40.639510 1086621 main.go:141] libmachine: (ha-377576-m02) Waiting for SSH to be available...
	I0327 23:53:40.642766 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:40.643212 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:40.643244 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:40.643348 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | Using SSH client type: external
	I0327 23:53:40.643372 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/id_rsa (-rw-------)
	I0327 23:53:40.643401 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.117 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0327 23:53:40.643420 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | About to run SSH command:
	I0327 23:53:40.643442 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | exit 0
	I0327 23:53:40.774407 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | SSH cmd err, output: <nil>: 
	I0327 23:53:40.774727 1086621 main.go:141] libmachine: (ha-377576-m02) KVM machine creation complete!
	I0327 23:53:40.775008 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetConfigRaw
	I0327 23:53:40.775649 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .DriverName
	I0327 23:53:40.775857 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .DriverName
	I0327 23:53:40.776061 1086621 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0327 23:53:40.776077 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetState
	I0327 23:53:40.777381 1086621 main.go:141] libmachine: Detecting operating system of created instance...
	I0327 23:53:40.777400 1086621 main.go:141] libmachine: Waiting for SSH to be available...
	I0327 23:53:40.777407 1086621 main.go:141] libmachine: Getting to WaitForSSH function...
	I0327 23:53:40.777414 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	I0327 23:53:40.780109 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:40.780507 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:40.780541 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:40.780691 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHPort
	I0327 23:53:40.780915 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:40.781095 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:40.781254 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHUsername
	I0327 23:53:40.781480 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:53:40.781753 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0327 23:53:40.781770 1086621 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0327 23:53:40.889745 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
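The "exit 0" command above is just a reachability probe: provisioning only continues once SSH answers. A sketch of the same probe with golang.org/x/crypto/ssh, using the key path, user, and address taken from the log (the real code routes this through libmachine's SSH helpers):

// Hedged sketch: run "exit 0" over SSH to confirm the guest is reachable.
package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPEM, err := os.ReadFile("/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", "192.168.39.117:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	if err := sess.Run("exit 0"); err != nil {
		log.Fatalf("SSH probe failed: %v", err)
	}
	log.Println("SSH is available")
}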
	I0327 23:53:40.889785 1086621 main.go:141] libmachine: Detecting the provisioner...
	I0327 23:53:40.889800 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	I0327 23:53:40.892872 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:40.893298 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:40.893332 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:40.893564 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHPort
	I0327 23:53:40.893810 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:40.893995 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:40.894136 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHUsername
	I0327 23:53:40.894318 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:53:40.894544 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0327 23:53:40.894563 1086621 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0327 23:53:41.007518 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0327 23:53:41.007587 1086621 main.go:141] libmachine: found compatible host: buildroot
	I0327 23:53:41.007595 1086621 main.go:141] libmachine: Provisioning with buildroot...
	I0327 23:53:41.007607 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetMachineName
	I0327 23:53:41.007886 1086621 buildroot.go:166] provisioning hostname "ha-377576-m02"
	I0327 23:53:41.007918 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetMachineName
	I0327 23:53:41.008130 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	I0327 23:53:41.011050 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.011449 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:41.011473 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.011618 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHPort
	I0327 23:53:41.011814 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:41.012021 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:41.012185 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHUsername
	I0327 23:53:41.012377 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:53:41.012565 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0327 23:53:41.012581 1086621 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-377576-m02 && echo "ha-377576-m02" | sudo tee /etc/hostname
	I0327 23:53:41.134757 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-377576-m02
	
	I0327 23:53:41.134796 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	I0327 23:53:41.137686 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.138037 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:41.138068 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.138260 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHPort
	I0327 23:53:41.138483 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:41.138649 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:41.138808 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHUsername
	I0327 23:53:41.138968 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:53:41.139210 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0327 23:53:41.139229 1086621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-377576-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-377576-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-377576-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0327 23:53:41.251935 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 23:53:41.251979 1086621 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0327 23:53:41.252003 1086621 buildroot.go:174] setting up certificates
	I0327 23:53:41.252019 1086621 provision.go:84] configureAuth start
	I0327 23:53:41.252036 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetMachineName
	I0327 23:53:41.252405 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetIP
	I0327 23:53:41.255380 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.255787 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:41.255820 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.256013 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	I0327 23:53:41.258411 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.258769 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:41.258804 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.258928 1086621 provision.go:143] copyHostCerts
	I0327 23:53:41.258965 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0327 23:53:41.259004 1086621 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0327 23:53:41.259014 1086621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0327 23:53:41.259100 1086621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0327 23:53:41.259195 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0327 23:53:41.259222 1086621 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0327 23:53:41.259237 1086621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0327 23:53:41.259277 1086621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0327 23:53:41.259338 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0327 23:53:41.259361 1086621 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0327 23:53:41.259369 1086621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0327 23:53:41.259400 1086621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0327 23:53:41.259465 1086621 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.ha-377576-m02 san=[127.0.0.1 192.168.39.117 ha-377576-m02 localhost minikube]
	I0327 23:53:41.409802 1086621 provision.go:177] copyRemoteCerts
	I0327 23:53:41.409872 1086621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0327 23:53:41.409899 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	I0327 23:53:41.412541 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.412892 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:41.412926 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.413127 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHPort
	I0327 23:53:41.413352 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:41.413544 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHUsername
	I0327 23:53:41.413723 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/id_rsa Username:docker}
	I0327 23:53:41.497360 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0327 23:53:41.497455 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0327 23:53:41.523437 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0327 23:53:41.523537 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0327 23:53:41.550433 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0327 23:53:41.550525 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0327 23:53:41.575575 1086621 provision.go:87] duration metric: took 323.534653ms to configureAuth
	I0327 23:53:41.575613 1086621 buildroot.go:189] setting minikube options for container-runtime
	I0327 23:53:41.575802 1086621 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 23:53:41.575910 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	I0327 23:53:41.578678 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.579104 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:41.579137 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.579293 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHPort
	I0327 23:53:41.579517 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:41.579755 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:41.579912 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHUsername
	I0327 23:53:41.580093 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:53:41.580261 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0327 23:53:41.580276 1086621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0327 23:53:41.869465 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0327 23:53:41.869492 1086621 main.go:141] libmachine: Checking connection to Docker...
	I0327 23:53:41.869501 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetURL
	I0327 23:53:41.870913 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | Using libvirt version 6000000
	I0327 23:53:41.873302 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.873661 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:41.873698 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.873831 1086621 main.go:141] libmachine: Docker is up and running!
	I0327 23:53:41.873848 1086621 main.go:141] libmachine: Reticulating splines...
	I0327 23:53:41.873855 1086621 client.go:171] duration metric: took 26.539168369s to LocalClient.Create
	I0327 23:53:41.873882 1086621 start.go:167] duration metric: took 26.539231877s to libmachine.API.Create "ha-377576"
	I0327 23:53:41.873892 1086621 start.go:293] postStartSetup for "ha-377576-m02" (driver="kvm2")
	I0327 23:53:41.873905 1086621 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0327 23:53:41.873926 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .DriverName
	I0327 23:53:41.874212 1086621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0327 23:53:41.874254 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	I0327 23:53:41.876404 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.876792 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:41.876819 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:41.876997 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHPort
	I0327 23:53:41.877214 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:41.877351 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHUsername
	I0327 23:53:41.877543 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/id_rsa Username:docker}
	I0327 23:53:41.961688 1086621 ssh_runner.go:195] Run: cat /etc/os-release
	I0327 23:53:41.966054 1086621 info.go:137] Remote host: Buildroot 2023.02.9
	I0327 23:53:41.966082 1086621 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0327 23:53:41.966162 1086621 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0327 23:53:41.966319 1086621 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0327 23:53:41.966337 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> /etc/ssl/certs/10765222.pem
	I0327 23:53:41.966454 1086621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0327 23:53:41.976650 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0327 23:53:42.002259 1086621 start.go:296] duration metric: took 128.327335ms for postStartSetup
	I0327 23:53:42.002321 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetConfigRaw
	I0327 23:53:42.002963 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetIP
	I0327 23:53:42.005709 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:42.006101 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:42.006134 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:42.006364 1086621 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/config.json ...
	I0327 23:53:42.006577 1086621 start.go:128] duration metric: took 26.690481281s to createHost
	I0327 23:53:42.006608 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	I0327 23:53:42.008746 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:42.009073 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:42.009100 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:42.009260 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHPort
	I0327 23:53:42.009434 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:42.009595 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:42.009706 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHUsername
	I0327 23:53:42.009895 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:53:42.010107 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0327 23:53:42.010119 1086621 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0327 23:53:42.115066 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711583622.090140495
	
	I0327 23:53:42.115091 1086621 fix.go:216] guest clock: 1711583622.090140495
	I0327 23:53:42.115099 1086621 fix.go:229] Guest: 2024-03-27 23:53:42.090140495 +0000 UTC Remote: 2024-03-27 23:53:42.006590822 +0000 UTC m=+85.996063686 (delta=83.549673ms)
	I0327 23:53:42.115121 1086621 fix.go:200] guest clock delta is within tolerance: 83.549673ms
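The clock check above compares the guest's `date +%s.%N` output against the host's timestamp for the same moment and accepts the machine if the skew is small. Reproducing the arithmetic with the values from the log (the 1-second tolerance below is an assumption; the log only states the ~83.5ms delta is within tolerance):

// Hedged sketch: compute the guest/host clock delta seen in the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1711583622, 90140495)                      // 1711583622.090140495 from the guest
	host := time.Date(2024, 3, 27, 23, 53, 42, 6590822, time.UTC) // host-side Remote timestamp

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 1 * time.Second // assumed threshold
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance) // 83.549673ms, true
}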
	I0327 23:53:42.115126 1086621 start.go:83] releasing machines lock for "ha-377576-m02", held for 26.799149182s
	I0327 23:53:42.115144 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .DriverName
	I0327 23:53:42.115420 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetIP
	I0327 23:53:42.118120 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:42.118458 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:42.118508 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:42.121008 1086621 out.go:177] * Found network options:
	I0327 23:53:42.122574 1086621 out.go:177]   - NO_PROXY=192.168.39.47
	W0327 23:53:42.123842 1086621 proxy.go:119] fail to check proxy env: Error ip not in block
	I0327 23:53:42.123892 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .DriverName
	I0327 23:53:42.124441 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .DriverName
	I0327 23:53:42.124633 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .DriverName
	I0327 23:53:42.124726 1086621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0327 23:53:42.124770 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	W0327 23:53:42.124842 1086621 proxy.go:119] fail to check proxy env: Error ip not in block
	I0327 23:53:42.124926 1086621 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0327 23:53:42.124953 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	I0327 23:53:42.127640 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:42.127843 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:42.127991 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:42.128022 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:42.128158 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHPort
	I0327 23:53:42.128286 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:42.128310 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:42.128329 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:42.128471 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHUsername
	I0327 23:53:42.128542 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHPort
	I0327 23:53:42.128629 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/id_rsa Username:docker}
	I0327 23:53:42.128722 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0327 23:53:42.128886 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHUsername
	I0327 23:53:42.129068 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/id_rsa Username:docker}
	I0327 23:53:42.370792 1086621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0327 23:53:42.377319 1086621 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0327 23:53:42.377397 1086621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0327 23:53:42.395227 1086621 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0327 23:53:42.395252 1086621 start.go:494] detecting cgroup driver to use...
	I0327 23:53:42.395323 1086621 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0327 23:53:42.412140 1086621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 23:53:42.426584 1086621 docker.go:217] disabling cri-docker service (if available) ...
	I0327 23:53:42.426650 1086621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0327 23:53:42.441340 1086621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0327 23:53:42.456034 1086621 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0327 23:53:42.575600 1086621 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0327 23:53:42.749254 1086621 docker.go:233] disabling docker service ...
	I0327 23:53:42.749352 1086621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0327 23:53:42.766091 1086621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0327 23:53:42.780227 1086621 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0327 23:53:42.926840 1086621 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0327 23:53:43.062022 1086621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0327 23:53:43.076773 1086621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 23:53:43.096231 1086621 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0327 23:53:43.096291 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:53:43.107862 1086621 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0327 23:53:43.107934 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:53:43.119412 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:53:43.131130 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:53:43.142892 1086621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0327 23:53:43.154907 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:53:43.167171 1086621 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:53:43.186608 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:53:43.198344 1086621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0327 23:53:43.208682 1086621 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0327 23:53:43.208750 1086621 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0327 23:53:43.223268 1086621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0327 23:53:43.236108 1086621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:53:43.362449 1086621 ssh_runner.go:195] Run: sudo systemctl restart crio
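The sequence above switches the new node's runtime to CRI-O: cri-dockerd and docker are stopped and masked, crictl is pointed at /var/run/crio/crio.sock, and /etc/crio/crio.conf.d/02-crio.conf is rewritten to use the registry.k8s.io/pause:3.9 image and the cgroupfs cgroup manager before crio is restarted. A minimal shell sketch of the same config edits, assuming the drop-in file exists at the path shown in the log:

	# Point CRI-O at the pause image and cgroupfs driver, then restart it
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio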
	I0327 23:53:43.515363 1086621 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0327 23:53:43.515439 1086621 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0327 23:53:43.520709 1086621 start.go:562] Will wait 60s for crictl version
	I0327 23:53:43.520773 1086621 ssh_runner.go:195] Run: which crictl
	I0327 23:53:43.524704 1086621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0327 23:53:43.568185 1086621 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0327 23:53:43.568277 1086621 ssh_runner.go:195] Run: crio --version
	I0327 23:53:43.601998 1086621 ssh_runner.go:195] Run: crio --version
	I0327 23:53:43.634026 1086621 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0327 23:53:43.635731 1086621 out.go:177]   - env NO_PROXY=192.168.39.47
	I0327 23:53:43.637324 1086621 main.go:141] libmachine: (ha-377576-m02) Calling .GetIP
	I0327 23:53:43.640212 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:43.640708 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:53:30 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0327 23:53:43.640734 1086621 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0327 23:53:43.641028 1086621 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0327 23:53:43.645636 1086621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 23:53:43.658843 1086621 mustload.go:65] Loading cluster: ha-377576
	I0327 23:53:43.659053 1086621 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 23:53:43.659359 1086621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:53:43.659391 1086621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:53:43.674527 1086621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38387
	I0327 23:53:43.675161 1086621 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:53:43.675664 1086621 main.go:141] libmachine: Using API Version  1
	I0327 23:53:43.675684 1086621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:53:43.676021 1086621 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:53:43.676225 1086621 main.go:141] libmachine: (ha-377576) Calling .GetState
	I0327 23:53:43.677734 1086621 host.go:66] Checking if "ha-377576" exists ...
	I0327 23:53:43.678020 1086621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:53:43.678062 1086621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:53:43.693403 1086621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34833
	I0327 23:53:43.693870 1086621 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:53:43.694348 1086621 main.go:141] libmachine: Using API Version  1
	I0327 23:53:43.694368 1086621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:53:43.694707 1086621 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:53:43.694922 1086621 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0327 23:53:43.695136 1086621 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576 for IP: 192.168.39.117
	I0327 23:53:43.695150 1086621 certs.go:194] generating shared ca certs ...
	I0327 23:53:43.695171 1086621 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:53:43.695329 1086621 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0327 23:53:43.695368 1086621 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0327 23:53:43.695377 1086621 certs.go:256] generating profile certs ...
	I0327 23:53:43.695447 1086621 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/client.key
	I0327 23:53:43.695473 1086621 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.78940cd1
	I0327 23:53:43.695489 1086621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.78940cd1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.47 192.168.39.117 192.168.39.254]
	I0327 23:53:43.862402 1086621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.78940cd1 ...
	I0327 23:53:43.862434 1086621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.78940cd1: {Name:mk473d722fafe522ae7b30b1d0d075c26a7522f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:53:43.862614 1086621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.78940cd1 ...
	I0327 23:53:43.862627 1086621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.78940cd1: {Name:mk107444c4c288abfb44e45af6913a62c73f33ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:53:43.862696 1086621 certs.go:381] copying /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.78940cd1 -> /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt
	I0327 23:53:43.862816 1086621 certs.go:385] copying /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.78940cd1 -> /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key
	I0327 23:53:43.862945 1086621 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.key
	I0327 23:53:43.862962 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0327 23:53:43.862975 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0327 23:53:43.862987 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0327 23:53:43.863001 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0327 23:53:43.863014 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0327 23:53:43.863026 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0327 23:53:43.863040 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0327 23:53:43.863051 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0327 23:53:43.863106 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0327 23:53:43.863134 1086621 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0327 23:53:43.863144 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0327 23:53:43.863166 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0327 23:53:43.863187 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0327 23:53:43.863209 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0327 23:53:43.863247 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0327 23:53:43.863275 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:53:43.863289 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem -> /usr/share/ca-certificates/1076522.pem
	I0327 23:53:43.863301 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> /usr/share/ca-certificates/10765222.pem
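The profile certificates generated above embed the HA addresses: the apiserver cert is signed for 10.96.0.1, 127.0.0.1, 10.0.0.1, both node IPs and the virtual IP 192.168.39.254, so clients can reach either control plane through the shared address. A hedged way to confirm the SANs once the cert has been copied to the node (the openssl invocation is an illustration, not part of this run; the path is the one logged above):

	# Inspect the Subject Alternative Names of the copied apiserver certificate
	sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'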
	I0327 23:53:43.863335 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:53:43.866375 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:53:43.866734 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:53:43.866768 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:53:43.866941 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:53:43.867177 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:53:43.867362 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:53:43.867535 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0327 23:53:43.938712 1086621 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0327 23:53:43.945284 1086621 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0327 23:53:43.957915 1086621 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0327 23:53:43.962856 1086621 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0327 23:53:43.974426 1086621 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0327 23:53:43.979101 1086621 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0327 23:53:43.990511 1086621 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0327 23:53:43.995068 1086621 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0327 23:53:44.006319 1086621 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0327 23:53:44.011165 1086621 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0327 23:53:44.022259 1086621 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0327 23:53:44.026959 1086621 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0327 23:53:44.038404 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0327 23:53:44.064478 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0327 23:53:44.089667 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0327 23:53:44.114566 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0327 23:53:44.139792 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0327 23:53:44.165836 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0327 23:53:44.193411 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0327 23:53:44.221666 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0327 23:53:44.248890 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0327 23:53:44.276835 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0327 23:53:44.301323 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0327 23:53:44.326963 1086621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0327 23:53:44.344218 1086621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0327 23:53:44.361249 1086621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0327 23:53:44.378451 1086621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0327 23:53:44.395826 1086621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0327 23:53:44.413371 1086621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0327 23:53:44.431510 1086621 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0327 23:53:44.449406 1086621 ssh_runner.go:195] Run: openssl version
	I0327 23:53:44.455442 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0327 23:53:44.466460 1086621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:53:44.471040 1086621 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:53:44.471114 1086621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:53:44.476932 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0327 23:53:44.488161 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0327 23:53:44.498965 1086621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0327 23:53:44.503800 1086621 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0327 23:53:44.503860 1086621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0327 23:53:44.509935 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0327 23:53:44.520774 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0327 23:53:44.532167 1086621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0327 23:53:44.536691 1086621 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0327 23:53:44.536741 1086621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0327 23:53:44.542677 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0327 23:53:44.553713 1086621 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0327 23:53:44.557898 1086621 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0327 23:53:44.557960 1086621 kubeadm.go:928] updating node {m02 192.168.39.117 8443 v1.29.3 crio true true} ...
	I0327 23:53:44.558066 1086621 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-377576-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.117
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-377576 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0327 23:53:44.558095 1086621 kube-vip.go:111] generating kube-vip config ...
	I0327 23:53:44.558139 1086621 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0327 23:53:44.576189 1086621 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0327 23:53:44.576311 1086621 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
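The manifest above is later written to /etc/kubernetes/manifests/kube-vip.yaml, so the kubelet runs kube-vip as a static pod that holds the control-plane VIP 192.168.39.254 on eth0 and load-balances port 8443 across the control-plane nodes. A quick sketch for checking the VIP from a node; these commands are illustrative assumptions, not taken from this log:

	# The elected kube-vip leader should hold the address on eth0
	ip addr show dev eth0 | grep 192.168.39.254
	# The apiserver should answer on the VIP
	curl -k https://192.168.39.254:8443/healthz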
	I0327 23:53:44.576393 1086621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0327 23:53:44.586776 1086621 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0327 23:53:44.586863 1086621 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0327 23:53:44.596914 1086621 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/linux/amd64/v1.29.3/kubeadm
	I0327 23:53:44.596935 1086621 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0327 23:53:44.596939 1086621 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/linux/amd64/v1.29.3/kubelet
	I0327 23:53:44.596960 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/linux/amd64/v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0327 23:53:44.597048 1086621 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0327 23:53:44.601577 1086621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0327 23:53:44.601604 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/linux/amd64/v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0327 23:54:16.611463 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/linux/amd64/v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0327 23:54:16.611547 1086621 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0327 23:54:16.617537 1086621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0327 23:54:16.617569 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/linux/amd64/v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0327 23:54:57.459077 1086621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 23:54:57.477351 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/linux/amd64/v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0327 23:54:57.477490 1086621 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0327 23:54:57.482346 1086621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0327 23:54:57.482380 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/linux/amd64/v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
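Because /var/lib/minikube/binaries/v1.29.3 is empty on the fresh node, kubectl, kubeadm and kubelet are fetched from dl.k8s.io (with their .sha256 checksum files) into the host cache and copied over SSH; these transfers account for the gap between 23:53:44 and 23:54:57 in the timestamps above. A hedged manual equivalent using the same release URLs:

	# Download a binary and verify it against the published checksum (URLs as logged above)
	curl -LO https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet
	curl -LO https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256
	echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check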
	I0327 23:54:57.935005 1086621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0327 23:54:57.944551 1086621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0327 23:54:57.961813 1086621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0327 23:54:57.979150 1086621 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0327 23:54:57.996772 1086621 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0327 23:54:58.000922 1086621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 23:54:58.014371 1086621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:54:58.130424 1086621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 23:54:58.147860 1086621 host.go:66] Checking if "ha-377576" exists ...
	I0327 23:54:58.148199 1086621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:54:58.148238 1086621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:54:58.164481 1086621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40685
	I0327 23:54:58.164951 1086621 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:54:58.165554 1086621 main.go:141] libmachine: Using API Version  1
	I0327 23:54:58.165600 1086621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:54:58.165947 1086621 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:54:58.166200 1086621 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0327 23:54:58.166401 1086621 start.go:316] joinCluster: &{Name:ha-377576 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cluster
Name:ha-377576 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 23:54:58.166526 1086621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0327 23:54:58.166546 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:54:58.170248 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:54:58.170750 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:54:58.170784 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:54:58.170994 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:54:58.171235 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:54:58.171438 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:54:58.171628 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0327 23:54:58.347215 1086621 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0327 23:54:58.347278 1086621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2506a6.td1hnn5cxoz7asyy --discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-377576-m02 --control-plane --apiserver-advertise-address=192.168.39.117 --apiserver-bind-port=8443"
	I0327 23:55:23.115782 1086621 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2506a6.td1hnn5cxoz7asyy --discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-377576-m02 --control-plane --apiserver-advertise-address=192.168.39.117 --apiserver-bind-port=8443": (24.768463732s)
	I0327 23:55:23.115836 1086621 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0327 23:55:23.717993 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-377576-m02 minikube.k8s.io/updated_at=2024_03_27T23_55_23_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=ha-377576 minikube.k8s.io/primary=false
	I0327 23:55:23.866652 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-377576-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0327 23:55:23.985138 1086621 start.go:318] duration metric: took 25.818732645s to joinCluster
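To add m02 as a second control plane, minikube asks the primary for a fresh join command (kubeadm token create --print-join-command --ttl=0) and runs it on m02 with --control-plane and the node's own advertise address; the join itself took about 25 seconds, after which the node is labeled and its control-plane NoSchedule taint is removed. A manual sketch of the same flow, with the token and CA hash redacted (in a real run they come from the printed join command):

	# On the existing control plane: mint a join command
	sudo kubeadm token create --print-join-command --ttl=0
	# On the new node: join as an additional control plane (CRI-O socket as logged above)
	sudo kubeadm join control-plane.minikube.internal:8443 \
	  --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
	  --control-plane --apiserver-advertise-address=192.168.39.117 --apiserver-bind-port=8443 \
	  --cri-socket unix:///var/run/crio/crio.sock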
	I0327 23:55:23.985234 1086621 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0327 23:55:23.988265 1086621 out.go:177] * Verifying Kubernetes components...
	I0327 23:55:23.985583 1086621 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 23:55:23.989818 1086621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:55:24.271450 1086621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 23:55:24.292190 1086621 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0327 23:55:24.292545 1086621 kapi.go:59] client config for ha-377576: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/client.crt", KeyFile:"/home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/client.key", CAFile:"/home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]strin
g(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c58000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0327 23:55:24.292636 1086621 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.47:8443
	I0327 23:55:24.292998 1086621 node_ready.go:35] waiting up to 6m0s for node "ha-377576-m02" to be "Ready" ...
	I0327 23:55:24.293114 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:24.293127 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:24.293165 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:24.293175 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:24.304501 1086621 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0327 23:55:24.793739 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:24.793766 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:24.793776 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:24.793781 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:24.797308 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:25.293222 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:25.293244 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:25.293252 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:25.293257 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:25.296587 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:25.794096 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:25.794123 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:25.794131 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:25.794136 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:25.798629 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:55:26.293775 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:26.293808 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:26.293821 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:26.293826 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:26.298026 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:55:26.298971 1086621 node_ready.go:53] node "ha-377576-m02" has status "Ready":"False"
	I0327 23:55:26.793379 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:26.793404 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:26.793413 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:26.793417 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:26.797180 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:27.293219 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:27.293248 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:27.293260 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:27.293265 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:27.297031 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:27.794307 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:27.794340 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:27.794353 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:27.794359 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:27.797961 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:28.293800 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:28.293832 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:28.293847 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:28.293852 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:28.297787 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:28.794223 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:28.794284 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:28.794294 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:28.794303 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:28.798436 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:55:28.799159 1086621 node_ready.go:53] node "ha-377576-m02" has status "Ready":"False"
	I0327 23:55:29.293448 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:29.293478 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:29.293489 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:29.293494 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:29.306909 1086621 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0327 23:55:29.793961 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:29.793986 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:29.793995 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:29.794003 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:29.797902 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:30.293860 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:30.293884 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:30.293894 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:30.293899 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:30.297751 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:30.298614 1086621 node_ready.go:49] node "ha-377576-m02" has status "Ready":"True"
	I0327 23:55:30.298634 1086621 node_ready.go:38] duration metric: took 6.005611952s for node "ha-377576-m02" to be "Ready" ...
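The polling above is minikube querying the apiserver directly (note the ClientConfig host being overridden from the VIP to 192.168.39.47 while the second control plane comes up); the node flips to Ready after roughly 6 seconds. Approximately the same wait expressed with kubectl, assuming the kubeconfig context matches the profile name:

	kubectl --context ha-377576 wait --for=condition=Ready node/ha-377576-m02 --timeout=360s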
	I0327 23:55:30.298643 1086621 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0327 23:55:30.298712 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods
	I0327 23:55:30.298724 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:30.298730 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:30.298734 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:30.304126 1086621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:55:30.310345 1086621 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-47npx" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:30.310428 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-47npx
	I0327 23:55:30.310437 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:30.310445 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:30.310449 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:30.314793 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:55:30.315952 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:55:30.315968 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:30.315976 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:30.315979 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:30.319079 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:30.319742 1086621 pod_ready.go:92] pod "coredns-76f75df574-47npx" in "kube-system" namespace has status "Ready":"True"
	I0327 23:55:30.319759 1086621 pod_ready.go:81] duration metric: took 9.391861ms for pod "coredns-76f75df574-47npx" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:30.319769 1086621 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-msv9s" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:30.319828 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-msv9s
	I0327 23:55:30.319837 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:30.319843 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:30.319847 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:30.322989 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:30.323881 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:55:30.323897 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:30.323907 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:30.323913 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:30.326602 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:55:30.327132 1086621 pod_ready.go:92] pod "coredns-76f75df574-msv9s" in "kube-system" namespace has status "Ready":"True"
	I0327 23:55:30.327150 1086621 pod_ready.go:81] duration metric: took 7.373142ms for pod "coredns-76f75df574-msv9s" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:30.327163 1086621 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:30.327228 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576
	I0327 23:55:30.327238 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:30.327249 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:30.327258 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:30.329942 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:55:30.330747 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:55:30.330762 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:30.330770 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:30.330776 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:30.333524 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:55:30.334031 1086621 pod_ready.go:92] pod "etcd-ha-377576" in "kube-system" namespace has status "Ready":"True"
	I0327 23:55:30.334047 1086621 pod_ready.go:81] duration metric: took 6.873231ms for pod "etcd-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:30.334057 1086621 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:30.334115 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m02
	I0327 23:55:30.334126 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:30.334136 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:30.334140 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:30.336929 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:55:30.337645 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:30.337659 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:30.337668 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:30.337673 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:30.340451 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:55:30.835099 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m02
	I0327 23:55:30.835125 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:30.835136 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:30.835141 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:30.839257 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:55:30.840455 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:30.840477 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:30.840488 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:30.840494 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:30.844353 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:31.335155 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m02
	I0327 23:55:31.335181 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:31.335189 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:31.335195 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:31.344449 1086621 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0327 23:55:31.345199 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:31.345215 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:31.345225 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:31.345230 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:31.350967 1086621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:55:31.834618 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m02
	I0327 23:55:31.834647 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:31.834659 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:31.834664 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:31.838397 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:31.839441 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:31.839457 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:31.839465 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:31.839469 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:31.843573 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:55:32.335111 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m02
	I0327 23:55:32.335140 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:32.335148 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:32.335153 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:32.338841 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:32.339703 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:32.339721 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:32.339733 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:32.339739 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:32.342668 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:55:32.343321 1086621 pod_ready.go:102] pod "etcd-ha-377576-m02" in "kube-system" namespace has status "Ready":"False"
	I0327 23:55:32.834438 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m02
	I0327 23:55:32.834470 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:32.834482 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:32.834488 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:32.840254 1086621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:55:32.841752 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:32.841769 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:32.841777 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:32.841782 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:32.853503 1086621 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0327 23:55:33.335021 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m02
	I0327 23:55:33.335053 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:33.335064 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:33.335075 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:33.338674 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:33.339876 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:33.339892 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:33.339903 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:33.339910 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:33.343215 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:33.834501 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m02
	I0327 23:55:33.834526 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:33.834534 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:33.834538 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:33.838710 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:55:33.839574 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:33.839592 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:33.839599 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:33.839603 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:33.843186 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:34.334283 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m02
	I0327 23:55:34.334317 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:34.334329 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:34.334336 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:34.339550 1086621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:55:34.340850 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:34.340867 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:34.340875 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:34.340881 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:34.346262 1086621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:55:34.347506 1086621 pod_ready.go:92] pod "etcd-ha-377576-m02" in "kube-system" namespace has status "Ready":"True"
	I0327 23:55:34.347527 1086621 pod_ready.go:81] duration metric: took 4.013462372s for pod "etcd-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:34.347542 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:34.347613 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-377576
	I0327 23:55:34.347623 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:34.347636 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:34.347646 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:34.353573 1086621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:55:34.354358 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:55:34.354377 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:34.354387 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:34.354392 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:34.358321 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:34.359040 1086621 pod_ready.go:92] pod "kube-apiserver-ha-377576" in "kube-system" namespace has status "Ready":"True"
	I0327 23:55:34.359058 1086621 pod_ready.go:81] duration metric: took 11.509065ms for pod "kube-apiserver-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:34.359067 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:34.359122 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-377576-m02
	I0327 23:55:34.359130 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:34.359136 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:34.359140 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:34.362904 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:34.363503 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:34.363520 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:34.363526 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:34.363531 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:34.367669 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:55:34.368113 1086621 pod_ready.go:92] pod "kube-apiserver-ha-377576-m02" in "kube-system" namespace has status "Ready":"True"
	I0327 23:55:34.368131 1086621 pod_ready.go:81] duration metric: took 9.057067ms for pod "kube-apiserver-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:34.368142 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:34.494286 1086621 request.go:629] Waited for 126.036919ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-377576
	I0327 23:55:34.494359 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-377576
	I0327 23:55:34.494364 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:34.494372 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:34.494380 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:34.498108 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:34.694304 1086621 request.go:629] Waited for 195.386085ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:55:34.694393 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:55:34.694401 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:34.694411 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:34.694415 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:34.698177 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:34.699219 1086621 pod_ready.go:92] pod "kube-controller-manager-ha-377576" in "kube-system" namespace has status "Ready":"True"
	I0327 23:55:34.699243 1086621 pod_ready.go:81] duration metric: took 331.095005ms for pod "kube-controller-manager-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:34.699256 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:34.894348 1086621 request.go:629] Waited for 194.995133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-377576-m02
	I0327 23:55:34.894433 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-377576-m02
	I0327 23:55:34.894441 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:34.894452 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:34.894462 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:34.898559 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:55:35.094833 1086621 request.go:629] Waited for 195.405826ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:35.094906 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:35.094911 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:35.094919 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:35.094924 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:35.098340 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:35.099062 1086621 pod_ready.go:92] pod "kube-controller-manager-ha-377576-m02" in "kube-system" namespace has status "Ready":"True"
	I0327 23:55:35.099082 1086621 pod_ready.go:81] duration metric: took 399.817994ms for pod "kube-controller-manager-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:35.099097 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4t77p" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:35.294222 1086621 request.go:629] Waited for 195.021189ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4t77p
	I0327 23:55:35.294311 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4t77p
	I0327 23:55:35.294318 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:35.294329 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:35.294336 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:35.299084 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:55:35.493975 1086621 request.go:629] Waited for 194.213986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:55:35.494046 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:55:35.494051 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:35.494058 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:35.494062 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:35.497449 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:35.498360 1086621 pod_ready.go:92] pod "kube-proxy-4t77p" in "kube-system" namespace has status "Ready":"True"
	I0327 23:55:35.498384 1086621 pod_ready.go:81] duration metric: took 399.278414ms for pod "kube-proxy-4t77p" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:35.498398 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k9dcr" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:35.694463 1086621 request.go:629] Waited for 195.979619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k9dcr
	I0327 23:55:35.694532 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k9dcr
	I0327 23:55:35.694539 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:35.694546 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:35.694552 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:35.698729 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:55:35.894897 1086621 request.go:629] Waited for 195.396289ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:35.894965 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:35.894970 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:35.894978 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:35.894981 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:35.899097 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:55:35.899611 1086621 pod_ready.go:92] pod "kube-proxy-k9dcr" in "kube-system" namespace has status "Ready":"True"
	I0327 23:55:35.899633 1086621 pod_ready.go:81] duration metric: took 401.224891ms for pod "kube-proxy-k9dcr" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:35.899644 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:36.094331 1086621 request.go:629] Waited for 194.589054ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-377576
	I0327 23:55:36.094405 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-377576
	I0327 23:55:36.094410 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:36.094419 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:36.094423 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:36.098005 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:36.294386 1086621 request.go:629] Waited for 195.567508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:55:36.294452 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:55:36.294457 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:36.294465 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:36.294471 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:36.298034 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:36.298650 1086621 pod_ready.go:92] pod "kube-scheduler-ha-377576" in "kube-system" namespace has status "Ready":"True"
	I0327 23:55:36.298676 1086621 pod_ready.go:81] duration metric: took 399.022593ms for pod "kube-scheduler-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:36.298691 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:36.494776 1086621 request.go:629] Waited for 195.998292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-377576-m02
	I0327 23:55:36.494870 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-377576-m02
	I0327 23:55:36.494876 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:36.494884 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:36.494890 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:36.500470 1086621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:55:36.693969 1086621 request.go:629] Waited for 192.303867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:36.694052 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:55:36.694061 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:36.694072 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:36.694077 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:36.701098 1086621 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0327 23:55:36.701748 1086621 pod_ready.go:92] pod "kube-scheduler-ha-377576-m02" in "kube-system" namespace has status "Ready":"True"
	I0327 23:55:36.701779 1086621 pod_ready.go:81] duration metric: took 403.071107ms for pod "kube-scheduler-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:55:36.701799 1086621 pod_ready.go:38] duration metric: took 6.40314322s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
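The polling above repeats a GET on each control-plane pod and its node until the pod's Ready condition reports True. Roughly the same check can be reproduced by hand; a sketch, assuming the kubeconfig context carries the profile name ha-377576:

  kubectl --context ha-377576 -n kube-system wait --for=condition=Ready pod/etcd-ha-377576-m02 --timeout=6m
  kubectl --context ha-377576 -n kube-system get pod etcd-ha-377576-m02 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'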
	I0327 23:55:36.701827 1086621 api_server.go:52] waiting for apiserver process to appear ...
	I0327 23:55:36.701907 1086621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 23:55:36.718672 1086621 api_server.go:72] duration metric: took 12.733392053s to wait for apiserver process to appear ...
	I0327 23:55:36.718705 1086621 api_server.go:88] waiting for apiserver healthz status ...
	I0327 23:55:36.718730 1086621 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I0327 23:55:36.723277 1086621 api_server.go:279] https://192.168.39.47:8443/healthz returned 200:
	ok
	I0327 23:55:36.723362 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/version
	I0327 23:55:36.723378 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:36.723389 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:36.723397 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:36.724525 1086621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0327 23:55:36.724636 1086621 api_server.go:141] control plane version: v1.29.3
	I0327 23:55:36.724654 1086621 api_server.go:131] duration metric: took 5.942511ms to wait for apiserver health ...
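The health wait above is two raw GETs, /healthz followed by /version, against the control-plane endpoint. The same probes can be issued manually through kubectl (same context assumption as above):

  kubectl --context ha-377576 get --raw='/healthz'
  kubectl --context ha-377576 version --output=json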
	I0327 23:55:36.724663 1086621 system_pods.go:43] waiting for kube-system pods to appear ...
	I0327 23:55:36.894000 1086621 request.go:629] Waited for 169.256759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods
	I0327 23:55:36.894083 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods
	I0327 23:55:36.894088 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:36.894096 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:36.894100 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:36.899406 1086621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:55:36.904633 1086621 system_pods.go:59] 17 kube-system pods found
	I0327 23:55:36.904666 1086621 system_pods.go:61] "coredns-76f75df574-47npx" [968d63e4-f44a-4e52-b6c0-04e0ed1a068e] Running
	I0327 23:55:36.904671 1086621 system_pods.go:61] "coredns-76f75df574-msv9s" [7c549358-2f35-4345-aa7a-8bbbcfc4ef01] Running
	I0327 23:55:36.904675 1086621 system_pods.go:61] "etcd-ha-377576" [885cacaa-1b61-4f8b-90b5-3f7dbc9df4ad] Running
	I0327 23:55:36.904678 1086621 system_pods.go:61] "etcd-ha-377576-m02" [c3fa0266-db99-4bf1-a3b4-2f050d69e2ff] Running
	I0327 23:55:36.904682 1086621 system_pods.go:61] "kindnet-5zmtk" [4e75cdc5-22da-47f2-9833-b2f4eaa9caac] Running
	I0327 23:55:36.904691 1086621 system_pods.go:61] "kindnet-6wmmc" [ef36a453-2352-47f7-8a75-72abc4004e82] Running
	I0327 23:55:36.904694 1086621 system_pods.go:61] "kube-apiserver-ha-377576" [a1a979ea-0199-4e24-af63-c79b32a66c0e] Running
	I0327 23:55:36.904699 1086621 system_pods.go:61] "kube-apiserver-ha-377576-m02" [516bd332-2602-4380-aac0-3fd71f0834cb] Running
	I0327 23:55:36.904702 1086621 system_pods.go:61] "kube-controller-manager-ha-377576" [f72d4847-2902-4e1f-8852-bdcc020a6099] Running
	I0327 23:55:36.904707 1086621 system_pods.go:61] "kube-controller-manager-ha-377576-m02" [a3e945b8-d18c-434c-b8a7-70510fbce333] Running
	I0327 23:55:36.904711 1086621 system_pods.go:61] "kube-proxy-4t77p" [27eff0c9-9b45-4530-aba9-1a5e0ca60802] Running
	I0327 23:55:36.904714 1086621 system_pods.go:61] "kube-proxy-k9dcr" [07c785f3-3b08-4f43-b957-5f4092f757ea] Running
	I0327 23:55:36.904721 1086621 system_pods.go:61] "kube-scheduler-ha-377576" [6b97a544-a0e8-4c35-b93c-197f200da53b] Running
	I0327 23:55:36.904725 1086621 system_pods.go:61] "kube-scheduler-ha-377576-m02" [91c25780-d677-4394-9624-31dfaec279c3] Running
	I0327 23:55:36.904731 1086621 system_pods.go:61] "kube-vip-ha-377576" [2d4dd5f7-c798-4a52-97f5-4bc068603373] Running
	I0327 23:55:36.904734 1086621 system_pods.go:61] "kube-vip-ha-377576-m02" [dde68b43-553a-4d1b-ad7f-5284653080e4] Running
	I0327 23:55:36.904738 1086621 system_pods.go:61] "storage-provisioner" [9000645c-8323-43af-bd87-011d1574493c] Running
	I0327 23:55:36.904745 1086621 system_pods.go:74] duration metric: took 180.073661ms to wait for pod list to return data ...
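The 17-pod inventory above comes from one list call against the kube-system namespace; the equivalent manual check is simply (same context assumption):

  kubectl --context ha-377576 -n kube-system get pods -o wide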
	I0327 23:55:36.904757 1086621 default_sa.go:34] waiting for default service account to be created ...
	I0327 23:55:37.094201 1086621 request.go:629] Waited for 189.350418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/default/serviceaccounts
	I0327 23:55:37.094280 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/default/serviceaccounts
	I0327 23:55:37.094287 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:37.094295 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:37.094300 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:37.098334 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:55:37.098559 1086621 default_sa.go:45] found service account: "default"
	I0327 23:55:37.098577 1086621 default_sa.go:55] duration metric: took 193.811552ms for default service account to be created ...
	I0327 23:55:37.098587 1086621 system_pods.go:116] waiting for k8s-apps to be running ...
	I0327 23:55:37.294816 1086621 request.go:629] Waited for 196.134816ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods
	I0327 23:55:37.294900 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods
	I0327 23:55:37.294909 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:37.294921 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:37.294928 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:37.301717 1086621 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0327 23:55:37.306164 1086621 system_pods.go:86] 17 kube-system pods found
	I0327 23:55:37.306188 1086621 system_pods.go:89] "coredns-76f75df574-47npx" [968d63e4-f44a-4e52-b6c0-04e0ed1a068e] Running
	I0327 23:55:37.306193 1086621 system_pods.go:89] "coredns-76f75df574-msv9s" [7c549358-2f35-4345-aa7a-8bbbcfc4ef01] Running
	I0327 23:55:37.306198 1086621 system_pods.go:89] "etcd-ha-377576" [885cacaa-1b61-4f8b-90b5-3f7dbc9df4ad] Running
	I0327 23:55:37.306202 1086621 system_pods.go:89] "etcd-ha-377576-m02" [c3fa0266-db99-4bf1-a3b4-2f050d69e2ff] Running
	I0327 23:55:37.306206 1086621 system_pods.go:89] "kindnet-5zmtk" [4e75cdc5-22da-47f2-9833-b2f4eaa9caac] Running
	I0327 23:55:37.306209 1086621 system_pods.go:89] "kindnet-6wmmc" [ef36a453-2352-47f7-8a75-72abc4004e82] Running
	I0327 23:55:37.306213 1086621 system_pods.go:89] "kube-apiserver-ha-377576" [a1a979ea-0199-4e24-af63-c79b32a66c0e] Running
	I0327 23:55:37.306218 1086621 system_pods.go:89] "kube-apiserver-ha-377576-m02" [516bd332-2602-4380-aac0-3fd71f0834cb] Running
	I0327 23:55:37.306224 1086621 system_pods.go:89] "kube-controller-manager-ha-377576" [f72d4847-2902-4e1f-8852-bdcc020a6099] Running
	I0327 23:55:37.306252 1086621 system_pods.go:89] "kube-controller-manager-ha-377576-m02" [a3e945b8-d18c-434c-b8a7-70510fbce333] Running
	I0327 23:55:37.306263 1086621 system_pods.go:89] "kube-proxy-4t77p" [27eff0c9-9b45-4530-aba9-1a5e0ca60802] Running
	I0327 23:55:37.306269 1086621 system_pods.go:89] "kube-proxy-k9dcr" [07c785f3-3b08-4f43-b957-5f4092f757ea] Running
	I0327 23:55:37.306275 1086621 system_pods.go:89] "kube-scheduler-ha-377576" [6b97a544-a0e8-4c35-b93c-197f200da53b] Running
	I0327 23:55:37.306279 1086621 system_pods.go:89] "kube-scheduler-ha-377576-m02" [91c25780-d677-4394-9624-31dfaec279c3] Running
	I0327 23:55:37.306284 1086621 system_pods.go:89] "kube-vip-ha-377576" [2d4dd5f7-c798-4a52-97f5-4bc068603373] Running
	I0327 23:55:37.306287 1086621 system_pods.go:89] "kube-vip-ha-377576-m02" [dde68b43-553a-4d1b-ad7f-5284653080e4] Running
	I0327 23:55:37.306291 1086621 system_pods.go:89] "storage-provisioner" [9000645c-8323-43af-bd87-011d1574493c] Running
	I0327 23:55:37.306302 1086621 system_pods.go:126] duration metric: took 207.709153ms to wait for k8s-apps to be running ...
	I0327 23:55:37.306311 1086621 system_svc.go:44] waiting for kubelet service to be running ....
	I0327 23:55:37.306373 1086621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 23:55:37.322496 1086621 system_svc.go:56] duration metric: took 16.172159ms WaitForService to wait for kubelet
	I0327 23:55:37.322528 1086621 kubeadm.go:576] duration metric: took 13.337255798s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 23:55:37.322554 1086621 node_conditions.go:102] verifying NodePressure condition ...
	I0327 23:55:37.493952 1086621 request.go:629] Waited for 171.283703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes
	I0327 23:55:37.494023 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes
	I0327 23:55:37.494030 1086621 round_trippers.go:469] Request Headers:
	I0327 23:55:37.494045 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:55:37.494050 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:55:37.497664 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:55:37.498677 1086621 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0327 23:55:37.498704 1086621 node_conditions.go:123] node cpu capacity is 2
	I0327 23:55:37.498719 1086621 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0327 23:55:37.498723 1086621 node_conditions.go:123] node cpu capacity is 2
	I0327 23:55:37.498729 1086621 node_conditions.go:105] duration metric: took 176.168713ms to run NodePressure ...
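The NodePressure step reads each node's reported capacity (cpu, ephemeral-storage) from a single nodes list. The same figures can be pulled with a jsonpath query; a sketch under the same context assumption:

  kubectl --context ha-377576 get nodes -o jsonpath='{.items[*].status.capacity}'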
	I0327 23:55:37.498743 1086621 start.go:240] waiting for startup goroutines ...
	I0327 23:55:37.498779 1086621 start.go:254] writing updated cluster config ...
	I0327 23:55:37.501217 1086621 out.go:177] 
	I0327 23:55:37.502852 1086621 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 23:55:37.502986 1086621 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/config.json ...
	I0327 23:55:37.504915 1086621 out.go:177] * Starting "ha-377576-m03" control-plane node in "ha-377576" cluster
	I0327 23:55:37.506153 1086621 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0327 23:55:37.506174 1086621 cache.go:56] Caching tarball of preloaded images
	I0327 23:55:37.506317 1086621 preload.go:173] Found /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0327 23:55:37.506331 1086621 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0327 23:55:37.506437 1086621 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/config.json ...
	I0327 23:55:37.506637 1086621 start.go:360] acquireMachinesLock for ha-377576-m03: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 23:55:37.506687 1086621 start.go:364] duration metric: took 26.886µs to acquireMachinesLock for "ha-377576-m03"
	I0327 23:55:37.506713 1086621 start.go:93] Provisioning new machine with config: &{Name:ha-377576 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-377576 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0327 23:55:37.506843 1086621 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0327 23:55:37.508415 1086621 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 23:55:37.508527 1086621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:55:37.508575 1086621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:55:37.523640 1086621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41441
	I0327 23:55:37.524175 1086621 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:55:37.524653 1086621 main.go:141] libmachine: Using API Version  1
	I0327 23:55:37.524676 1086621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:55:37.524988 1086621 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:55:37.525204 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetMachineName
	I0327 23:55:37.525353 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .DriverName
	I0327 23:55:37.525516 1086621 start.go:159] libmachine.API.Create for "ha-377576" (driver="kvm2")
	I0327 23:55:37.525548 1086621 client.go:168] LocalClient.Create starting
	I0327 23:55:37.525588 1086621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem
	I0327 23:55:37.525627 1086621 main.go:141] libmachine: Decoding PEM data...
	I0327 23:55:37.525653 1086621 main.go:141] libmachine: Parsing certificate...
	I0327 23:55:37.525745 1086621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem
	I0327 23:55:37.525773 1086621 main.go:141] libmachine: Decoding PEM data...
	I0327 23:55:37.525788 1086621 main.go:141] libmachine: Parsing certificate...
	I0327 23:55:37.525818 1086621 main.go:141] libmachine: Running pre-create checks...
	I0327 23:55:37.525830 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .PreCreateCheck
	I0327 23:55:37.525984 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetConfigRaw
	I0327 23:55:37.526423 1086621 main.go:141] libmachine: Creating machine...
	I0327 23:55:37.526442 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .Create
	I0327 23:55:37.526566 1086621 main.go:141] libmachine: (ha-377576-m03) Creating KVM machine...
	I0327 23:55:37.527825 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found existing default KVM network
	I0327 23:55:37.527940 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found existing private KVM network mk-ha-377576
	I0327 23:55:37.528044 1086621 main.go:141] libmachine: (ha-377576-m03) Setting up store path in /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03 ...
	I0327 23:55:37.528064 1086621 main.go:141] libmachine: (ha-377576-m03) Building disk image from file:///home/jenkins/minikube-integration/18485-1069254/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso
	I0327 23:55:37.528132 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:37.528034 1087512 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18485-1069254/.minikube
	I0327 23:55:37.528270 1086621 main.go:141] libmachine: (ha-377576-m03) Downloading /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18485-1069254/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0327 23:55:37.781950 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:37.781824 1087512 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/id_rsa...
	I0327 23:55:37.902971 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:37.902840 1087512 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/ha-377576-m03.rawdisk...
	I0327 23:55:37.903017 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Writing magic tar header
	I0327 23:55:37.903031 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Writing SSH key tar header
	I0327 23:55:37.903049 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:37.902965 1087512 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03 ...
	I0327 23:55:37.903064 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03
	I0327 23:55:37.903137 1086621 main.go:141] libmachine: (ha-377576-m03) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03 (perms=drwx------)
	I0327 23:55:37.903180 1086621 main.go:141] libmachine: (ha-377576-m03) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254/.minikube/machines (perms=drwxr-xr-x)
	I0327 23:55:37.903198 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines
	I0327 23:55:37.903213 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254/.minikube
	I0327 23:55:37.903223 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254
	I0327 23:55:37.903240 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0327 23:55:37.903249 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Checking permissions on dir: /home/jenkins
	I0327 23:55:37.903263 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Checking permissions on dir: /home
	I0327 23:55:37.903276 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Skipping /home - not owner
	I0327 23:55:37.903285 1086621 main.go:141] libmachine: (ha-377576-m03) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254/.minikube (perms=drwxr-xr-x)
	I0327 23:55:37.903297 1086621 main.go:141] libmachine: (ha-377576-m03) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254 (perms=drwxrwxr-x)
	I0327 23:55:37.903306 1086621 main.go:141] libmachine: (ha-377576-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0327 23:55:37.903317 1086621 main.go:141] libmachine: (ha-377576-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0327 23:55:37.903326 1086621 main.go:141] libmachine: (ha-377576-m03) Creating domain...
	I0327 23:55:37.904291 1086621 main.go:141] libmachine: (ha-377576-m03) define libvirt domain using xml: 
	I0327 23:55:37.904316 1086621 main.go:141] libmachine: (ha-377576-m03) <domain type='kvm'>
	I0327 23:55:37.904326 1086621 main.go:141] libmachine: (ha-377576-m03)   <name>ha-377576-m03</name>
	I0327 23:55:37.904334 1086621 main.go:141] libmachine: (ha-377576-m03)   <memory unit='MiB'>2200</memory>
	I0327 23:55:37.904343 1086621 main.go:141] libmachine: (ha-377576-m03)   <vcpu>2</vcpu>
	I0327 23:55:37.904350 1086621 main.go:141] libmachine: (ha-377576-m03)   <features>
	I0327 23:55:37.904363 1086621 main.go:141] libmachine: (ha-377576-m03)     <acpi/>
	I0327 23:55:37.904371 1086621 main.go:141] libmachine: (ha-377576-m03)     <apic/>
	I0327 23:55:37.904380 1086621 main.go:141] libmachine: (ha-377576-m03)     <pae/>
	I0327 23:55:37.904386 1086621 main.go:141] libmachine: (ha-377576-m03)     
	I0327 23:55:37.904394 1086621 main.go:141] libmachine: (ha-377576-m03)   </features>
	I0327 23:55:37.904401 1086621 main.go:141] libmachine: (ha-377576-m03)   <cpu mode='host-passthrough'>
	I0327 23:55:37.904413 1086621 main.go:141] libmachine: (ha-377576-m03)   
	I0327 23:55:37.904420 1086621 main.go:141] libmachine: (ha-377576-m03)   </cpu>
	I0327 23:55:37.904428 1086621 main.go:141] libmachine: (ha-377576-m03)   <os>
	I0327 23:55:37.904441 1086621 main.go:141] libmachine: (ha-377576-m03)     <type>hvm</type>
	I0327 23:55:37.904452 1086621 main.go:141] libmachine: (ha-377576-m03)     <boot dev='cdrom'/>
	I0327 23:55:37.904457 1086621 main.go:141] libmachine: (ha-377576-m03)     <boot dev='hd'/>
	I0327 23:55:37.904463 1086621 main.go:141] libmachine: (ha-377576-m03)     <bootmenu enable='no'/>
	I0327 23:55:37.904468 1086621 main.go:141] libmachine: (ha-377576-m03)   </os>
	I0327 23:55:37.904473 1086621 main.go:141] libmachine: (ha-377576-m03)   <devices>
	I0327 23:55:37.904478 1086621 main.go:141] libmachine: (ha-377576-m03)     <disk type='file' device='cdrom'>
	I0327 23:55:37.904488 1086621 main.go:141] libmachine: (ha-377576-m03)       <source file='/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/boot2docker.iso'/>
	I0327 23:55:37.904494 1086621 main.go:141] libmachine: (ha-377576-m03)       <target dev='hdc' bus='scsi'/>
	I0327 23:55:37.904499 1086621 main.go:141] libmachine: (ha-377576-m03)       <readonly/>
	I0327 23:55:37.904503 1086621 main.go:141] libmachine: (ha-377576-m03)     </disk>
	I0327 23:55:37.904510 1086621 main.go:141] libmachine: (ha-377576-m03)     <disk type='file' device='disk'>
	I0327 23:55:37.904520 1086621 main.go:141] libmachine: (ha-377576-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0327 23:55:37.904529 1086621 main.go:141] libmachine: (ha-377576-m03)       <source file='/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/ha-377576-m03.rawdisk'/>
	I0327 23:55:37.904534 1086621 main.go:141] libmachine: (ha-377576-m03)       <target dev='hda' bus='virtio'/>
	I0327 23:55:37.904539 1086621 main.go:141] libmachine: (ha-377576-m03)     </disk>
	I0327 23:55:37.904550 1086621 main.go:141] libmachine: (ha-377576-m03)     <interface type='network'>
	I0327 23:55:37.904559 1086621 main.go:141] libmachine: (ha-377576-m03)       <source network='mk-ha-377576'/>
	I0327 23:55:37.904563 1086621 main.go:141] libmachine: (ha-377576-m03)       <model type='virtio'/>
	I0327 23:55:37.904568 1086621 main.go:141] libmachine: (ha-377576-m03)     </interface>
	I0327 23:55:37.904576 1086621 main.go:141] libmachine: (ha-377576-m03)     <interface type='network'>
	I0327 23:55:37.904581 1086621 main.go:141] libmachine: (ha-377576-m03)       <source network='default'/>
	I0327 23:55:37.904586 1086621 main.go:141] libmachine: (ha-377576-m03)       <model type='virtio'/>
	I0327 23:55:37.904592 1086621 main.go:141] libmachine: (ha-377576-m03)     </interface>
	I0327 23:55:37.904604 1086621 main.go:141] libmachine: (ha-377576-m03)     <serial type='pty'>
	I0327 23:55:37.904638 1086621 main.go:141] libmachine: (ha-377576-m03)       <target port='0'/>
	I0327 23:55:37.904667 1086621 main.go:141] libmachine: (ha-377576-m03)     </serial>
	I0327 23:55:37.904692 1086621 main.go:141] libmachine: (ha-377576-m03)     <console type='pty'>
	I0327 23:55:37.904714 1086621 main.go:141] libmachine: (ha-377576-m03)       <target type='serial' port='0'/>
	I0327 23:55:37.904728 1086621 main.go:141] libmachine: (ha-377576-m03)     </console>
	I0327 23:55:37.904741 1086621 main.go:141] libmachine: (ha-377576-m03)     <rng model='virtio'>
	I0327 23:55:37.904753 1086621 main.go:141] libmachine: (ha-377576-m03)       <backend model='random'>/dev/random</backend>
	I0327 23:55:37.904759 1086621 main.go:141] libmachine: (ha-377576-m03)     </rng>
	I0327 23:55:37.904765 1086621 main.go:141] libmachine: (ha-377576-m03)     
	I0327 23:55:37.904776 1086621 main.go:141] libmachine: (ha-377576-m03)     
	I0327 23:55:37.904788 1086621 main.go:141] libmachine: (ha-377576-m03)   </devices>
	I0327 23:55:37.904800 1086621 main.go:141] libmachine: (ha-377576-m03) </domain>
	I0327 23:55:37.904815 1086621 main.go:141] libmachine: (ha-377576-m03) 
	I0327 23:55:37.912683 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:fd:17:0d in network default
	I0327 23:55:37.913367 1086621 main.go:141] libmachine: (ha-377576-m03) Ensuring networks are active...
	I0327 23:55:37.913395 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:55:37.914179 1086621 main.go:141] libmachine: (ha-377576-m03) Ensuring network default is active
	I0327 23:55:37.914598 1086621 main.go:141] libmachine: (ha-377576-m03) Ensuring network mk-ha-377576 is active
	I0327 23:55:37.915024 1086621 main.go:141] libmachine: (ha-377576-m03) Getting domain xml...
	I0327 23:55:37.915693 1086621 main.go:141] libmachine: (ha-377576-m03) Creating domain...
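With the domain XML above defined, libvirt's own tooling can confirm what the kvm2 driver created on the host; a hedged example using standard virsh commands and the URI from the config (qemu:///system):

  virsh --connect qemu:///system dumpxml ha-377576-m03
  virsh --connect qemu:///system domiflist ha-377576-m03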
	I0327 23:55:39.175839 1086621 main.go:141] libmachine: (ha-377576-m03) Waiting to get IP...
	I0327 23:55:39.176610 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:55:39.176972 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find current IP address of domain ha-377576-m03 in network mk-ha-377576
	I0327 23:55:39.177023 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:39.176972 1087512 retry.go:31] will retry after 213.405089ms: waiting for machine to come up
	I0327 23:55:39.392470 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:55:39.392959 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find current IP address of domain ha-377576-m03 in network mk-ha-377576
	I0327 23:55:39.392990 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:39.392901 1087512 retry.go:31] will retry after 348.371793ms: waiting for machine to come up
	I0327 23:55:39.742502 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:55:39.742929 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find current IP address of domain ha-377576-m03 in network mk-ha-377576
	I0327 23:55:39.742959 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:39.742881 1087512 retry.go:31] will retry after 367.169553ms: waiting for machine to come up
	I0327 23:55:40.111395 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:55:40.111861 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find current IP address of domain ha-377576-m03 in network mk-ha-377576
	I0327 23:55:40.111894 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:40.111820 1087512 retry.go:31] will retry after 591.714034ms: waiting for machine to come up
	I0327 23:55:40.705655 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:55:40.706080 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find current IP address of domain ha-377576-m03 in network mk-ha-377576
	I0327 23:55:40.706114 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:40.706016 1087512 retry.go:31] will retry after 697.427889ms: waiting for machine to come up
	I0327 23:55:41.404887 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:55:41.405382 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find current IP address of domain ha-377576-m03 in network mk-ha-377576
	I0327 23:55:41.405411 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:41.405339 1087512 retry.go:31] will retry after 639.33076ms: waiting for machine to come up
	I0327 23:55:42.045878 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:55:42.046307 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find current IP address of domain ha-377576-m03 in network mk-ha-377576
	I0327 23:55:42.046339 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:42.046258 1087512 retry.go:31] will retry after 958.955128ms: waiting for machine to come up
	I0327 23:55:43.008657 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:55:43.009179 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find current IP address of domain ha-377576-m03 in network mk-ha-377576
	I0327 23:55:43.009215 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:43.009116 1087512 retry.go:31] will retry after 1.019044797s: waiting for machine to come up
	I0327 23:55:44.029473 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:55:44.030014 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find current IP address of domain ha-377576-m03 in network mk-ha-377576
	I0327 23:55:44.030056 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:44.029969 1087512 retry.go:31] will retry after 1.285580774s: waiting for machine to come up
	I0327 23:55:45.317500 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:55:45.317917 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find current IP address of domain ha-377576-m03 in network mk-ha-377576
	I0327 23:55:45.317946 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:45.317865 1087512 retry.go:31] will retry after 1.460536362s: waiting for machine to come up
	I0327 23:55:46.780529 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:55:46.781026 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find current IP address of domain ha-377576-m03 in network mk-ha-377576
	I0327 23:55:46.781062 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:46.780958 1087512 retry.go:31] will retry after 1.920245901s: waiting for machine to come up
	I0327 23:55:48.703319 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:55:48.703729 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find current IP address of domain ha-377576-m03 in network mk-ha-377576
	I0327 23:55:48.703764 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:48.703692 1087512 retry.go:31] will retry after 2.714118256s: waiting for machine to come up
	I0327 23:55:51.419327 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:55:51.419720 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find current IP address of domain ha-377576-m03 in network mk-ha-377576
	I0327 23:55:51.419814 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:51.419681 1087512 retry.go:31] will retry after 3.81300902s: waiting for machine to come up
	I0327 23:55:55.235976 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:55:55.236562 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find current IP address of domain ha-377576-m03 in network mk-ha-377576
	I0327 23:55:55.236606 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | I0327 23:55:55.236497 1087512 retry.go:31] will retry after 5.681513625s: waiting for machine to come up
	I0327 23:56:00.921564 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:00.922124 1086621 main.go:141] libmachine: (ha-377576-m03) Found IP for machine: 192.168.39.101
	I0327 23:56:00.922150 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has current primary IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:00.922157 1086621 main.go:141] libmachine: (ha-377576-m03) Reserving static IP address...
	I0327 23:56:00.922600 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find host DHCP lease matching {name: "ha-377576-m03", mac: "52:54:00:f5:c1:99", ip: "192.168.39.101"} in network mk-ha-377576
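The retries above are the driver polling libvirt's DHCP lease table for the guest's MAC address until an IP appears. While that wait is in progress, the same table can be inspected directly on the host (network name taken from the log):

  virsh --connect qemu:///system net-dhcp-leases mk-ha-377576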
	I0327 23:56:01.005124 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Getting to WaitForSSH function...
	I0327 23:56:01.005161 1086621 main.go:141] libmachine: (ha-377576-m03) Reserved static IP address: 192.168.39.101
	I0327 23:56:01.005175 1086621 main.go:141] libmachine: (ha-377576-m03) Waiting for SSH to be available...
	I0327 23:56:01.008093 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:01.008481 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576
	I0327 23:56:01.008507 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | unable to find defined IP address of network mk-ha-377576 interface with MAC address 52:54:00:f5:c1:99
	I0327 23:56:01.008732 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Using SSH client type: external
	I0327 23:56:01.008781 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/id_rsa (-rw-------)
	I0327 23:56:01.008884 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0327 23:56:01.008912 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | About to run SSH command:
	I0327 23:56:01.008933 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | exit 0
	I0327 23:56:01.013657 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | SSH cmd err, output: exit status 255: 
	I0327 23:56:01.013687 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0327 23:56:01.013696 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | command : exit 0
	I0327 23:56:01.013705 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | err     : exit status 255
	I0327 23:56:01.013716 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | output  : 
	I0327 23:56:04.014436 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Getting to WaitForSSH function...
	I0327 23:56:04.017146 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.017559 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:04.017590 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.017784 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Using SSH client type: external
	I0327 23:56:04.017816 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/id_rsa (-rw-------)
	I0327 23:56:04.017852 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.101 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0327 23:56:04.017866 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | About to run SSH command:
	I0327 23:56:04.017883 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | exit 0
	I0327 23:56:04.146450 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | SSH cmd err, output: <nil>: 
	I0327 23:56:04.146807 1086621 main.go:141] libmachine: (ha-377576-m03) KVM machine creation complete!
	I0327 23:56:04.147186 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetConfigRaw
	I0327 23:56:04.147800 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .DriverName
	I0327 23:56:04.148014 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .DriverName
	I0327 23:56:04.148192 1086621 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0327 23:56:04.148208 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetState
	I0327 23:56:04.149562 1086621 main.go:141] libmachine: Detecting operating system of created instance...
	I0327 23:56:04.149578 1086621 main.go:141] libmachine: Waiting for SSH to be available...
	I0327 23:56:04.149584 1086621 main.go:141] libmachine: Getting to WaitForSSH function...
	I0327 23:56:04.149590 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	I0327 23:56:04.151903 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.152268 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:04.152294 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.152424 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHPort
	I0327 23:56:04.152647 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:04.152804 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:04.152957 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHUsername
	I0327 23:56:04.153129 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:56:04.153428 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0327 23:56:04.153447 1086621 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0327 23:56:04.270314 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 23:56:04.270341 1086621 main.go:141] libmachine: Detecting the provisioner...
	I0327 23:56:04.270349 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	I0327 23:56:04.273191 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.273642 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:04.273654 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.273881 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHPort
	I0327 23:56:04.274129 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:04.274359 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:04.274558 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHUsername
	I0327 23:56:04.274773 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:56:04.274982 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0327 23:56:04.274996 1086621 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0327 23:56:04.391643 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0327 23:56:04.391731 1086621 main.go:141] libmachine: found compatible host: buildroot
	I0327 23:56:04.391742 1086621 main.go:141] libmachine: Provisioning with buildroot...
	I0327 23:56:04.391755 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetMachineName
	I0327 23:56:04.392045 1086621 buildroot.go:166] provisioning hostname "ha-377576-m03"
	I0327 23:56:04.392085 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetMachineName
	I0327 23:56:04.392332 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	I0327 23:56:04.395471 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.395879 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:04.395899 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.396170 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHPort
	I0327 23:56:04.396388 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:04.396560 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:04.396725 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHUsername
	I0327 23:56:04.396923 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:56:04.397099 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0327 23:56:04.397112 1086621 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-377576-m03 && echo "ha-377576-m03" | sudo tee /etc/hostname
	I0327 23:56:04.526624 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-377576-m03
	
	I0327 23:56:04.526666 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	I0327 23:56:04.529734 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.530188 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:04.530223 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.530423 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHPort
	I0327 23:56:04.530657 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:04.530839 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:04.530983 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHUsername
	I0327 23:56:04.531143 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:56:04.531312 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0327 23:56:04.531329 1086621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-377576-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-377576-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-377576-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0327 23:56:04.656477 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 23:56:04.656524 1086621 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0327 23:56:04.656548 1086621 buildroot.go:174] setting up certificates
	I0327 23:56:04.656560 1086621 provision.go:84] configureAuth start
	I0327 23:56:04.656574 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetMachineName
	I0327 23:56:04.656952 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetIP
	I0327 23:56:04.659851 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.660404 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:04.660435 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.660622 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	I0327 23:56:04.663429 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.663922 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:04.663955 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.664149 1086621 provision.go:143] copyHostCerts
	I0327 23:56:04.664195 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0327 23:56:04.664244 1086621 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0327 23:56:04.664257 1086621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0327 23:56:04.664337 1086621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0327 23:56:04.664439 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0327 23:56:04.664466 1086621 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0327 23:56:04.664474 1086621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0327 23:56:04.664517 1086621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0327 23:56:04.664584 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0327 23:56:04.664612 1086621 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0327 23:56:04.664619 1086621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0327 23:56:04.664665 1086621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0327 23:56:04.664759 1086621 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.ha-377576-m03 san=[127.0.0.1 192.168.39.101 ha-377576-m03 localhost minikube]
	I0327 23:56:04.763355 1086621 provision.go:177] copyRemoteCerts
	I0327 23:56:04.763432 1086621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0327 23:56:04.763471 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	I0327 23:56:04.766276 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.766663 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:04.766696 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.766868 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHPort
	I0327 23:56:04.767136 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:04.767338 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHUsername
	I0327 23:56:04.767517 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/id_rsa Username:docker}
	I0327 23:56:04.857439 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0327 23:56:04.857522 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0327 23:56:04.883431 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0327 23:56:04.883549 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0327 23:56:04.911355 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0327 23:56:04.911443 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0327 23:56:04.940002 1086621 provision.go:87] duration metric: took 283.428319ms to configureAuth
	I0327 23:56:04.940031 1086621 buildroot.go:189] setting minikube options for container-runtime
	I0327 23:56:04.940251 1086621 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 23:56:04.940334 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	I0327 23:56:04.943213 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.943612 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:04.943646 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:04.943831 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHPort
	I0327 23:56:04.944044 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:04.944224 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:04.944375 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHUsername
	I0327 23:56:04.944525 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:56:04.944709 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0327 23:56:04.944735 1086621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0327 23:56:05.233217 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0327 23:56:05.233261 1086621 main.go:141] libmachine: Checking connection to Docker...
	I0327 23:56:05.233273 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetURL
	I0327 23:56:05.234691 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | Using libvirt version 6000000
	I0327 23:56:05.237542 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:05.237920 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:05.237957 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:05.238141 1086621 main.go:141] libmachine: Docker is up and running!
	I0327 23:56:05.238162 1086621 main.go:141] libmachine: Reticulating splines...
	I0327 23:56:05.238171 1086621 client.go:171] duration metric: took 27.712611142s to LocalClient.Create
	I0327 23:56:05.238203 1086621 start.go:167] duration metric: took 27.712688435s to libmachine.API.Create "ha-377576"
	I0327 23:56:05.238216 1086621 start.go:293] postStartSetup for "ha-377576-m03" (driver="kvm2")
	I0327 23:56:05.238244 1086621 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0327 23:56:05.238270 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .DriverName
	I0327 23:56:05.238562 1086621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0327 23:56:05.238589 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	I0327 23:56:05.241038 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:05.241541 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:05.241570 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:05.241715 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHPort
	I0327 23:56:05.241945 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:05.242142 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHUsername
	I0327 23:56:05.242283 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/id_rsa Username:docker}
	I0327 23:56:05.330275 1086621 ssh_runner.go:195] Run: cat /etc/os-release
	I0327 23:56:05.335255 1086621 info.go:137] Remote host: Buildroot 2023.02.9
	I0327 23:56:05.335292 1086621 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0327 23:56:05.335360 1086621 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0327 23:56:05.335454 1086621 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0327 23:56:05.335470 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> /etc/ssl/certs/10765222.pem
	I0327 23:56:05.335573 1086621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0327 23:56:05.346463 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0327 23:56:05.373007 1086621 start.go:296] duration metric: took 134.775912ms for postStartSetup
	I0327 23:56:05.373075 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetConfigRaw
	I0327 23:56:05.373682 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetIP
	I0327 23:56:05.376460 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:05.376885 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:05.376927 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:05.377243 1086621 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/config.json ...
	I0327 23:56:05.377492 1086621 start.go:128] duration metric: took 27.870631426s to createHost
	I0327 23:56:05.377557 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	I0327 23:56:05.379971 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:05.380233 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:05.380262 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:05.380486 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHPort
	I0327 23:56:05.380689 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:05.380881 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:05.381022 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHUsername
	I0327 23:56:05.381191 1086621 main.go:141] libmachine: Using SSH client type: native
	I0327 23:56:05.381400 1086621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0327 23:56:05.381420 1086621 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0327 23:56:05.499346 1086621 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711583765.475969753
	
	I0327 23:56:05.499376 1086621 fix.go:216] guest clock: 1711583765.475969753
	I0327 23:56:05.499385 1086621 fix.go:229] Guest: 2024-03-27 23:56:05.475969753 +0000 UTC Remote: 2024-03-27 23:56:05.377506121 +0000 UTC m=+229.366978974 (delta=98.463632ms)
	I0327 23:56:05.499403 1086621 fix.go:200] guest clock delta is within tolerance: 98.463632ms
	I0327 23:56:05.499408 1086621 start.go:83] releasing machines lock for "ha-377576-m03", held for 27.99270788s
	I0327 23:56:05.499430 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .DriverName
	I0327 23:56:05.499716 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetIP
	I0327 23:56:05.502554 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:05.502975 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:05.502999 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:05.505355 1086621 out.go:177] * Found network options:
	I0327 23:56:05.506658 1086621 out.go:177]   - NO_PROXY=192.168.39.47,192.168.39.117
	W0327 23:56:05.507868 1086621 proxy.go:119] fail to check proxy env: Error ip not in block
	W0327 23:56:05.507887 1086621 proxy.go:119] fail to check proxy env: Error ip not in block
	I0327 23:56:05.507901 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .DriverName
	I0327 23:56:05.508396 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .DriverName
	I0327 23:56:05.508587 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .DriverName
	I0327 23:56:05.508704 1086621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0327 23:56:05.508749 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	W0327 23:56:05.508814 1086621 proxy.go:119] fail to check proxy env: Error ip not in block
	W0327 23:56:05.508850 1086621 proxy.go:119] fail to check proxy env: Error ip not in block
	I0327 23:56:05.508923 1086621 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0327 23:56:05.508946 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	I0327 23:56:05.511547 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:05.511662 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:05.511959 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:05.511985 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:05.512016 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:05.512032 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:05.512304 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHPort
	I0327 23:56:05.512317 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHPort
	I0327 23:56:05.512518 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:05.512579 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0327 23:56:05.512681 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHUsername
	I0327 23:56:05.512779 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHUsername
	I0327 23:56:05.512878 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/id_rsa Username:docker}
	I0327 23:56:05.512928 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/id_rsa Username:docker}
	I0327 23:56:05.764029 1086621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0327 23:56:05.770387 1086621 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0327 23:56:05.770458 1086621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0327 23:56:05.787525 1086621 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0327 23:56:05.787556 1086621 start.go:494] detecting cgroup driver to use...
	I0327 23:56:05.787625 1086621 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0327 23:56:05.804936 1086621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 23:56:05.820067 1086621 docker.go:217] disabling cri-docker service (if available) ...
	I0327 23:56:05.820146 1086621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0327 23:56:05.835624 1086621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0327 23:56:05.850885 1086621 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0327 23:56:05.979530 1086621 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0327 23:56:06.163328 1086621 docker.go:233] disabling docker service ...
	I0327 23:56:06.163417 1086621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0327 23:56:06.181733 1086621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0327 23:56:06.196697 1086621 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0327 23:56:06.323799 1086621 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0327 23:56:06.452459 1086621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0327 23:56:06.466660 1086621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 23:56:06.486969 1086621 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0327 23:56:06.487057 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:56:06.498247 1086621 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0327 23:56:06.498338 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:56:06.509119 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:56:06.520341 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:56:06.531966 1086621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0327 23:56:06.543892 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:56:06.555435 1086621 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:56:06.575004 1086621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 23:56:06.586736 1086621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0327 23:56:06.597234 1086621 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0327 23:56:06.597306 1086621 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0327 23:56:06.612465 1086621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0327 23:56:06.625164 1086621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:56:06.756049 1086621 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0327 23:56:06.903539 1086621 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0327 23:56:06.903631 1086621 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0327 23:56:06.908884 1086621 start.go:562] Will wait 60s for crictl version
	I0327 23:56:06.908961 1086621 ssh_runner.go:195] Run: which crictl
	I0327 23:56:06.912999 1086621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0327 23:56:06.955867 1086621 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0327 23:56:06.955975 1086621 ssh_runner.go:195] Run: crio --version
	I0327 23:56:06.986524 1086621 ssh_runner.go:195] Run: crio --version
	I0327 23:56:07.018085 1086621 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0327 23:56:07.019735 1086621 out.go:177]   - env NO_PROXY=192.168.39.47
	I0327 23:56:07.021076 1086621 out.go:177]   - env NO_PROXY=192.168.39.47,192.168.39.117
	I0327 23:56:07.022196 1086621 main.go:141] libmachine: (ha-377576-m03) Calling .GetIP
	I0327 23:56:07.025082 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:07.025528 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0327 23:56:07.025558 1086621 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0327 23:56:07.025799 1086621 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0327 23:56:07.030288 1086621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 23:56:07.045396 1086621 mustload.go:65] Loading cluster: ha-377576
	I0327 23:56:07.045641 1086621 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 23:56:07.045894 1086621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:56:07.045933 1086621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:56:07.062119 1086621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44963
	I0327 23:56:07.062684 1086621 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:56:07.063307 1086621 main.go:141] libmachine: Using API Version  1
	I0327 23:56:07.063328 1086621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:56:07.063689 1086621 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:56:07.063913 1086621 main.go:141] libmachine: (ha-377576) Calling .GetState
	I0327 23:56:07.065492 1086621 host.go:66] Checking if "ha-377576" exists ...
	I0327 23:56:07.065774 1086621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:56:07.065813 1086621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:56:07.081401 1086621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40127
	I0327 23:56:07.081869 1086621 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:56:07.082398 1086621 main.go:141] libmachine: Using API Version  1
	I0327 23:56:07.082422 1086621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:56:07.082766 1086621 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:56:07.082970 1086621 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0327 23:56:07.083177 1086621 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576 for IP: 192.168.39.101
	I0327 23:56:07.083197 1086621 certs.go:194] generating shared ca certs ...
	I0327 23:56:07.083217 1086621 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:56:07.083349 1086621 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0327 23:56:07.083385 1086621 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0327 23:56:07.083395 1086621 certs.go:256] generating profile certs ...
	I0327 23:56:07.083464 1086621 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/client.key
	I0327 23:56:07.083490 1086621 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.14ab1afe
	I0327 23:56:07.083506 1086621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.14ab1afe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.47 192.168.39.117 192.168.39.101 192.168.39.254]
	I0327 23:56:07.233689 1086621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.14ab1afe ...
	I0327 23:56:07.233725 1086621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.14ab1afe: {Name:mke646c03fbf55548f1277ba55ee1c517a259751 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:56:07.233948 1086621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.14ab1afe ...
	I0327 23:56:07.233968 1086621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.14ab1afe: {Name:mkbe768d663de231129cf0d33824155d9f1fcace Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:56:07.234070 1086621 certs.go:381] copying /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.14ab1afe -> /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt
	I0327 23:56:07.234215 1086621 certs.go:385] copying /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.14ab1afe -> /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key
	I0327 23:56:07.234387 1086621 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.key
	I0327 23:56:07.234407 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0327 23:56:07.234419 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0327 23:56:07.234432 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0327 23:56:07.234445 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0327 23:56:07.234460 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0327 23:56:07.234473 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0327 23:56:07.234485 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0327 23:56:07.234498 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0327 23:56:07.234545 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0327 23:56:07.234575 1086621 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0327 23:56:07.234584 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0327 23:56:07.234605 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0327 23:56:07.234629 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0327 23:56:07.234651 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0327 23:56:07.234692 1086621 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0327 23:56:07.234718 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> /usr/share/ca-certificates/10765222.pem
	I0327 23:56:07.234732 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:56:07.234745 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem -> /usr/share/ca-certificates/1076522.pem
	I0327 23:56:07.234780 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:56:07.238242 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:56:07.238664 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:56:07.238695 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:56:07.238893 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:56:07.239133 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:56:07.239313 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:56:07.239475 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0327 23:56:07.310644 1086621 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0327 23:56:07.316568 1086621 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0327 23:56:07.334530 1086621 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0327 23:56:07.341938 1086621 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0327 23:56:07.357980 1086621 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0327 23:56:07.366300 1086621 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0327 23:56:07.380609 1086621 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0327 23:56:07.386934 1086621 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0327 23:56:07.399464 1086621 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0327 23:56:07.404668 1086621 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0327 23:56:07.417488 1086621 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0327 23:56:07.421948 1086621 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0327 23:56:07.433016 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0327 23:56:07.458895 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0327 23:56:07.484924 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0327 23:56:07.511942 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0327 23:56:07.538914 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0327 23:56:07.565987 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0327 23:56:07.595806 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0327 23:56:07.621826 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0327 23:56:07.648539 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0327 23:56:07.674897 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0327 23:56:07.700547 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0327 23:56:07.728082 1086621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0327 23:56:07.746342 1086621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0327 23:56:07.765427 1086621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0327 23:56:07.786146 1086621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0327 23:56:07.805152 1086621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0327 23:56:07.823638 1086621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0327 23:56:07.842581 1086621 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0327 23:56:07.861267 1086621 ssh_runner.go:195] Run: openssl version
	I0327 23:56:07.867702 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0327 23:56:07.879425 1086621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0327 23:56:07.884319 1086621 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0327 23:56:07.884381 1086621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0327 23:56:07.890545 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0327 23:56:07.903427 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0327 23:56:07.915107 1086621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:56:07.921670 1086621 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:56:07.921740 1086621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:56:07.928119 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0327 23:56:07.940843 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0327 23:56:07.952114 1086621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0327 23:56:07.957534 1086621 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0327 23:56:07.957630 1086621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0327 23:56:07.964211 1086621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0327 23:56:07.976813 1086621 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0327 23:56:07.981522 1086621 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0327 23:56:07.981586 1086621 kubeadm.go:928] updating node {m03 192.168.39.101 8443 v1.29.3 crio true true} ...
	I0327 23:56:07.981675 1086621 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-377576-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-377576 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0327 23:56:07.981701 1086621 kube-vip.go:111] generating kube-vip config ...
	I0327 23:56:07.981734 1086621 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0327 23:56:07.998559 1086621 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0327 23:56:07.998658 1086621 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0327 23:56:07.998727 1086621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0327 23:56:08.010506 1086621 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0327 23:56:08.010577 1086621 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0327 23:56:08.022366 1086621 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0327 23:56:08.022394 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/linux/amd64/v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0327 23:56:08.022399 1086621 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256
	I0327 23:56:08.022405 1086621 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256
	I0327 23:56:08.022422 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/linux/amd64/v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0327 23:56:08.022451 1086621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 23:56:08.022468 1086621 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0327 23:56:08.022490 1086621 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0327 23:56:08.038196 1086621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0327 23:56:08.038222 1086621 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/linux/amd64/v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0327 23:56:08.038254 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/linux/amd64/v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0327 23:56:08.038278 1086621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0327 23:56:08.038310 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/linux/amd64/v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0327 23:56:08.038319 1086621 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0327 23:56:08.068862 1086621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0327 23:56:08.068906 1086621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/linux/amd64/v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
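	(Editor's note: the kubectl, kubeadm and kubelet binaries come from the dl.k8s.io URLs above, with sha256 checksum files alongside. Reproducing the download and verification by hand would look roughly like the following sketch; the version string is taken from this log:)
	    KVER=v1.29.3
	    for b in kubectl kubeadm kubelet; do
	      curl -LO "https://dl.k8s.io/release/${KVER}/bin/linux/amd64/${b}"
	      curl -LO "https://dl.k8s.io/release/${KVER}/bin/linux/amd64/${b}.sha256"
	      echo "$(cat ${b}.sha256)  ${b}" | sha256sum --check
	    done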
	I0327 23:56:09.084569 1086621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0327 23:56:09.095546 1086621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0327 23:56:09.113313 1086621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0327 23:56:09.130871 1086621 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0327 23:56:09.148175 1086621 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0327 23:56:09.152365 1086621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
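	(Editor's note: the one-liner above pins control-plane.minikube.internal to the HA VIP in /etc/hosts so the kubeadm join below can reach the cluster endpoint. An illustrative verification, values taken from the log:)
	    getent hosts control-plane.minikube.internal
	    # expected: 192.168.39.254  control-plane.minikube.internal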
	I0327 23:56:09.165319 1086621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:56:09.303285 1086621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 23:56:09.321947 1086621 host.go:66] Checking if "ha-377576" exists ...
	I0327 23:56:09.322376 1086621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:56:09.322428 1086621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:56:09.337982 1086621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36531
	I0327 23:56:09.338573 1086621 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:56:09.339164 1086621 main.go:141] libmachine: Using API Version  1
	I0327 23:56:09.339191 1086621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:56:09.339526 1086621 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:56:09.339731 1086621 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0327 23:56:09.339909 1086621 start.go:316] joinCluster: &{Name:ha-377576 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-377576 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 23:56:09.340107 1086621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0327 23:56:09.340138 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0327 23:56:09.343370 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:56:09.343988 1086621 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0327 23:56:09.344019 1086621 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0327 23:56:09.344167 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0327 23:56:09.344343 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0327 23:56:09.344535 1086621 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0327 23:56:09.344696 1086621 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0327 23:56:09.522187 1086621 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0327 23:56:09.522260 1086621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token no2fhc.v651hn034bq9oi06 --discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-377576-m03 --control-plane --apiserver-advertise-address=192.168.39.101 --apiserver-bind-port=8443"
	I0327 23:56:35.655702 1086621 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token no2fhc.v651hn034bq9oi06 --discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-377576-m03 --control-plane --apiserver-advertise-address=192.168.39.101 --apiserver-bind-port=8443": (26.133413587s)
	I0327 23:56:35.655756 1086621 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0327 23:56:36.165787 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-377576-m03 minikube.k8s.io/updated_at=2024_03_27T23_56_36_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=ha-377576 minikube.k8s.io/primary=false
	I0327 23:56:36.332599 1086621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-377576-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0327 23:56:36.442969 1086621 start.go:318] duration metric: took 27.103052401s to joinCluster
	I0327 23:56:36.443060 1086621 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0327 23:56:36.444602 1086621 out.go:177] * Verifying Kubernetes components...
	I0327 23:56:36.443567 1086621 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 23:56:36.446447 1086621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:56:36.654784 1086621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 23:56:36.681979 1086621 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0327 23:56:36.682394 1086621 kapi.go:59] client config for ha-377576: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/client.crt", KeyFile:"/home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/client.key", CAFile:"/home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c58000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0327 23:56:36.682490 1086621 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.47:8443
	I0327 23:56:36.682806 1086621 node_ready.go:35] waiting up to 6m0s for node "ha-377576-m03" to be "Ready" ...
	I0327 23:56:36.682902 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:36.682915 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:36.682926 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:36.682932 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:36.686557 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:37.183707 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:37.183729 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:37.183737 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:37.183740 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:37.189182 1086621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:56:37.684008 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:37.684034 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:37.684045 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:37.684052 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:37.688293 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:38.183310 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:38.183341 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:38.183353 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:38.183363 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:38.187056 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:38.683957 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:38.683996 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:38.684008 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:38.684017 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:38.688792 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:38.690433 1086621 node_ready.go:53] node "ha-377576-m03" has status "Ready":"False"
	I0327 23:56:39.183582 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:39.183615 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:39.183628 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:39.183634 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:39.188139 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:39.683378 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:39.683405 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:39.683413 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:39.683416 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:39.688006 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:40.183174 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:40.183209 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:40.183221 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:40.183226 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:40.186909 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:40.684015 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:40.684046 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:40.684058 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:40.684063 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:40.694808 1086621 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0327 23:56:40.695512 1086621 node_ready.go:53] node "ha-377576-m03" has status "Ready":"False"
	I0327 23:56:41.183500 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:41.183525 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:41.183532 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:41.183537 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:41.187765 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:41.683582 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:41.683620 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:41.683630 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:41.683635 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:41.687754 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:42.183372 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:42.183403 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:42.183416 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:42.183420 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:42.187955 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:42.683349 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:42.683374 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:42.683383 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:42.683387 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:42.687396 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:42.688590 1086621 node_ready.go:49] node "ha-377576-m03" has status "Ready":"True"
	I0327 23:56:42.688611 1086621 node_ready.go:38] duration metric: took 6.005786492s for node "ha-377576-m03" to be "Ready" ...
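	(Editor's note: the polling loop above repeatedly GETs the Node object until its Ready condition is True. A rough kubectl equivalent of that wait, assuming the kubeconfig context created for this profile, would be:)
	    kubectl --context ha-377576 wait --for=condition=Ready node/ha-377576-m03 --timeout=6m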
	I0327 23:56:42.688621 1086621 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0327 23:56:42.688679 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods
	I0327 23:56:42.688688 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:42.688695 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:42.688702 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:42.698024 1086621 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0327 23:56:42.705114 1086621 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-47npx" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:42.705197 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-47npx
	I0327 23:56:42.705206 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:42.705213 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:42.705218 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:42.708131 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:56:42.709135 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:56:42.709153 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:42.709162 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:42.709168 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:42.712094 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:56:42.712871 1086621 pod_ready.go:92] pod "coredns-76f75df574-47npx" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:42.712889 1086621 pod_ready.go:81] duration metric: took 7.750876ms for pod "coredns-76f75df574-47npx" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:42.712898 1086621 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-msv9s" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:42.712950 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-msv9s
	I0327 23:56:42.712958 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:42.712965 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:42.712969 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:42.715709 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:56:42.716446 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:56:42.716465 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:42.716473 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:42.716478 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:42.719444 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:56:42.720077 1086621 pod_ready.go:92] pod "coredns-76f75df574-msv9s" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:42.720100 1086621 pod_ready.go:81] duration metric: took 7.195082ms for pod "coredns-76f75df574-msv9s" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:42.720113 1086621 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:42.720181 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576
	I0327 23:56:42.720193 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:42.720202 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:42.720208 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:42.723109 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:56:42.723873 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:56:42.723889 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:42.723898 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:42.723905 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:42.726829 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:56:42.727423 1086621 pod_ready.go:92] pod "etcd-ha-377576" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:42.727443 1086621 pod_ready.go:81] duration metric: took 7.323127ms for pod "etcd-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:42.727453 1086621 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:42.727510 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m02
	I0327 23:56:42.727522 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:42.727531 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:42.727536 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:42.730683 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:42.731252 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:56:42.731265 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:42.731274 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:42.731282 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:42.734162 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:56:42.734803 1086621 pod_ready.go:92] pod "etcd-ha-377576-m02" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:42.734821 1086621 pod_ready.go:81] duration metric: took 7.362639ms for pod "etcd-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:42.734832 1086621 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-377576-m03" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:42.884173 1086621 request.go:629] Waited for 149.266045ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:42.884264 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:42.884269 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:42.884277 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:42.884283 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:42.887842 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:43.083824 1086621 request.go:629] Waited for 195.361102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:43.083927 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:43.083936 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:43.083950 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:43.083958 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:43.087289 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:43.284015 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:43.284043 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:43.284055 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:43.284060 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:43.288108 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:43.484181 1086621 request.go:629] Waited for 195.302432ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:43.484244 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:43.484251 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:43.484261 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:43.484270 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:43.488262 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:43.735353 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:43.735377 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:43.735387 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:43.735393 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:43.739458 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:43.883694 1086621 request.go:629] Waited for 143.301815ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:43.883772 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:43.883789 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:43.883801 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:43.883812 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:43.887912 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:44.235735 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:44.235761 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:44.235770 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:44.235775 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:44.240631 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:44.284049 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:44.284076 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:44.284085 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:44.284089 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:44.287503 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:44.735290 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:44.735319 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:44.735328 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:44.735335 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:44.741307 1086621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:56:44.742133 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:44.742149 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:44.742156 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:44.742160 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:44.745268 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:44.745776 1086621 pod_ready.go:102] pod "etcd-ha-377576-m03" in "kube-system" namespace has status "Ready":"False"
	I0327 23:56:45.235186 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:45.235212 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:45.235220 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:45.235227 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:45.238958 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:45.239781 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:45.239799 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:45.239810 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:45.239814 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:45.242801 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:56:45.735193 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:45.735222 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:45.735230 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:45.735234 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:45.739378 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:45.740482 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:45.740499 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:45.740508 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:45.740512 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:45.743836 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:46.235737 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:46.235768 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:46.235781 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:46.235787 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:46.240205 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:46.241588 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:46.241604 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:46.241611 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:46.241617 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:46.244827 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:46.735709 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:46.735745 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:46.735755 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:46.735763 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:46.739633 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:46.740501 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:46.740521 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:46.740531 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:46.740536 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:46.744441 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:47.235859 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:47.235886 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:47.235894 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:47.235898 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:47.239551 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:47.240375 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:47.240394 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:47.240403 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:47.240409 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:47.245574 1086621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:56:47.246083 1086621 pod_ready.go:102] pod "etcd-ha-377576-m03" in "kube-system" namespace has status "Ready":"False"
	I0327 23:56:47.735884 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:47.735911 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:47.735920 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:47.735923 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:47.740832 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:47.741463 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:47.741479 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:47.741487 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:47.741492 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:47.744811 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:48.235433 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:48.235462 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:48.235473 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:48.235479 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:48.239406 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:48.240442 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:48.240459 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:48.240466 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:48.240471 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:48.244012 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:48.736034 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:48.736064 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:48.736076 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:48.736083 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:48.740226 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:48.741114 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:48.741134 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:48.741141 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:48.741147 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:48.744670 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:49.235562 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:49.235591 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:49.235603 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:49.235607 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:49.239989 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:49.240783 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:49.240804 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:49.240815 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:49.240823 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:49.244325 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:49.735747 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:49.735776 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:49.735787 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:49.735791 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:49.739978 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:49.741049 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:49.741066 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:49.741073 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:49.741076 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:49.743872 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:56:49.744665 1086621 pod_ready.go:102] pod "etcd-ha-377576-m03" in "kube-system" namespace has status "Ready":"False"
	I0327 23:56:50.235994 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/etcd-ha-377576-m03
	I0327 23:56:50.236021 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:50.236029 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:50.236034 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:50.240362 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:50.241161 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:50.241178 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:50.241185 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:50.241191 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:50.245427 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:50.246393 1086621 pod_ready.go:92] pod "etcd-ha-377576-m03" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:50.246419 1086621 pod_ready.go:81] duration metric: took 7.511577614s for pod "etcd-ha-377576-m03" in "kube-system" namespace to be "Ready" ...
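	(Editor's note: unlike the first two etcd pods, the newly joined member's pod took ~7.5s above to report Ready. An illustrative way to query the same condition directly, using names from this run:)
	    kubectl --context ha-377576 -n kube-system get pod etcd-ha-377576-m03 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'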
	I0327 23:56:50.246446 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:50.246532 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-377576
	I0327 23:56:50.246541 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:50.246550 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:50.246554 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:50.250397 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:50.251162 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:56:50.251178 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:50.251186 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:50.251192 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:50.255238 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:50.255873 1086621 pod_ready.go:92] pod "kube-apiserver-ha-377576" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:50.255897 1086621 pod_ready.go:81] duration metric: took 9.436535ms for pod "kube-apiserver-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:50.255911 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:50.255993 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-377576-m02
	I0327 23:56:50.256009 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:50.256021 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:50.256030 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:50.259572 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:50.260136 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:56:50.260151 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:50.260161 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:50.260165 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:50.264120 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:50.264638 1086621 pod_ready.go:92] pod "kube-apiserver-ha-377576-m02" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:50.264660 1086621 pod_ready.go:81] duration metric: took 8.741632ms for pod "kube-apiserver-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:50.264673 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-377576-m03" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:50.264742 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-377576-m03
	I0327 23:56:50.264751 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:50.264759 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:50.264766 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:50.270019 1086621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:56:50.283571 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:50.283594 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:50.283605 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:50.283610 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:50.288806 1086621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:56:50.289465 1086621 pod_ready.go:92] pod "kube-apiserver-ha-377576-m03" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:50.289487 1086621 pod_ready.go:81] duration metric: took 24.804888ms for pod "kube-apiserver-ha-377576-m03" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:50.289503 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:50.483990 1086621 request.go:629] Waited for 194.372281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-377576
	I0327 23:56:50.484093 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-377576
	I0327 23:56:50.484106 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:50.484115 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:50.484125 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:50.488203 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:50.683373 1086621 request.go:629] Waited for 194.304643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:56:50.683448 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:56:50.683455 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:50.683465 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:50.683473 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:50.687353 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:50.688064 1086621 pod_ready.go:92] pod "kube-controller-manager-ha-377576" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:50.688098 1086621 pod_ready.go:81] duration metric: took 398.584298ms for pod "kube-controller-manager-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:50.688115 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:50.884180 1086621 request.go:629] Waited for 195.982065ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-377576-m02
	I0327 23:56:50.884255 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-377576-m02
	I0327 23:56:50.884261 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:50.884269 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:50.884272 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:50.887827 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:51.083494 1086621 request.go:629] Waited for 194.90192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:56:51.083988 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:56:51.084003 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:51.084186 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:51.084198 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:51.088830 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:51.089375 1086621 pod_ready.go:92] pod "kube-controller-manager-ha-377576-m02" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:51.089398 1086621 pod_ready.go:81] duration metric: took 401.273088ms for pod "kube-controller-manager-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:51.089408 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-377576-m03" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:51.283667 1086621 request.go:629] Waited for 194.168427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-377576-m03
	I0327 23:56:51.283749 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-377576-m03
	I0327 23:56:51.283756 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:51.283765 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:51.283774 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:51.286638 1086621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:56:51.483792 1086621 request.go:629] Waited for 196.379227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:51.483859 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:51.483864 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:51.483871 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:51.483874 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:51.487811 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:51.488590 1086621 pod_ready.go:92] pod "kube-controller-manager-ha-377576-m03" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:51.488611 1086621 pod_ready.go:81] duration metric: took 399.195466ms for pod "kube-controller-manager-ha-377576-m03" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:51.488622 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4t77p" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:51.683627 1086621 request.go:629] Waited for 194.930641ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4t77p
	I0327 23:56:51.683690 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4t77p
	I0327 23:56:51.683695 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:51.683703 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:51.683708 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:51.687572 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:51.884242 1086621 request.go:629] Waited for 195.42626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:56:51.884322 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:56:51.884330 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:51.884341 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:51.884346 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:51.888227 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:51.888882 1086621 pod_ready.go:92] pod "kube-proxy-4t77p" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:51.888901 1086621 pod_ready.go:81] duration metric: took 400.273136ms for pod "kube-proxy-4t77p" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:51.888911 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5plfq" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:52.084424 1086621 request.go:629] Waited for 195.429144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5plfq
	I0327 23:56:52.084497 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5plfq
	I0327 23:56:52.084505 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:52.084515 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:52.084525 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:52.088288 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:52.284352 1086621 request.go:629] Waited for 195.327151ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:52.284437 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:52.284445 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:52.284456 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:52.284463 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:52.288568 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:52.289385 1086621 pod_ready.go:92] pod "kube-proxy-5plfq" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:52.289410 1086621 pod_ready.go:81] duration metric: took 400.492143ms for pod "kube-proxy-5plfq" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:52.289424 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k9dcr" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:52.483441 1086621 request.go:629] Waited for 193.93715ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k9dcr
	I0327 23:56:52.483505 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k9dcr
	I0327 23:56:52.483510 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:52.483518 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:52.483523 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:52.487367 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:52.684256 1086621 request.go:629] Waited for 196.267273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:56:52.684340 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:56:52.684348 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:52.684360 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:52.684370 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:52.694392 1086621 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0327 23:56:52.694936 1086621 pod_ready.go:92] pod "kube-proxy-k9dcr" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:52.694955 1086621 pod_ready.go:81] duration metric: took 405.5237ms for pod "kube-proxy-k9dcr" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:52.694964 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:52.884113 1086621 request.go:629] Waited for 189.030906ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-377576
	I0327 23:56:52.884204 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-377576
	I0327 23:56:52.884216 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:52.884232 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:52.884242 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:52.888368 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:53.083460 1086621 request.go:629] Waited for 194.309059ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:56:53.083544 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576
	I0327 23:56:53.083554 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:53.083564 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:53.083590 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:53.088058 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:53.088783 1086621 pod_ready.go:92] pod "kube-scheduler-ha-377576" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:53.088810 1086621 pod_ready.go:81] duration metric: took 393.835456ms for pod "kube-scheduler-ha-377576" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:53.088823 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:53.284265 1086621 request.go:629] Waited for 195.327926ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-377576-m02
	I0327 23:56:53.284398 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-377576-m02
	I0327 23:56:53.284410 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:53.284418 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:53.284422 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:53.288213 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:53.483411 1086621 request.go:629] Waited for 194.300711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:56:53.483498 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m02
	I0327 23:56:53.483508 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:53.483518 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:53.483524 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:53.487515 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:53.488080 1086621 pod_ready.go:92] pod "kube-scheduler-ha-377576-m02" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:53.488101 1086621 pod_ready.go:81] duration metric: took 399.271123ms for pod "kube-scheduler-ha-377576-m02" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:53.488111 1086621 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-377576-m03" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:53.684190 1086621 request.go:629] Waited for 195.974352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-377576-m03
	I0327 23:56:53.684277 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-377576-m03
	I0327 23:56:53.684286 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:53.684299 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:53.684313 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:53.690015 1086621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:56:53.884143 1086621 request.go:629] Waited for 193.097337ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:53.884219 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes/ha-377576-m03
	I0327 23:56:53.884228 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:53.884241 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:53.884251 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:53.888422 1086621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:56:53.889239 1086621 pod_ready.go:92] pod "kube-scheduler-ha-377576-m03" in "kube-system" namespace has status "Ready":"True"
	I0327 23:56:53.889264 1086621 pod_ready.go:81] duration metric: took 401.14261ms for pod "kube-scheduler-ha-377576-m03" in "kube-system" namespace to be "Ready" ...
	I0327 23:56:53.889275 1086621 pod_ready.go:38] duration metric: took 11.200644288s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0327 23:56:53.889292 1086621 api_server.go:52] waiting for apiserver process to appear ...
	I0327 23:56:53.889346 1086621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 23:56:53.907707 1086621 api_server.go:72] duration metric: took 17.464605805s to wait for apiserver process to appear ...
	I0327 23:56:53.907737 1086621 api_server.go:88] waiting for apiserver healthz status ...
	I0327 23:56:53.907801 1086621 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I0327 23:56:53.914314 1086621 api_server.go:279] https://192.168.39.47:8443/healthz returned 200:
	ok
	I0327 23:56:53.914425 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/version
	I0327 23:56:53.914436 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:53.914446 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:53.914452 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:53.915710 1086621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0327 23:56:53.915790 1086621 api_server.go:141] control plane version: v1.29.3
	I0327 23:56:53.915808 1086621 api_server.go:131] duration metric: took 8.063524ms to wait for apiserver health ...
	I0327 23:56:53.915819 1086621 system_pods.go:43] waiting for kube-system pods to appear ...
	I0327 23:56:54.084201 1086621 request.go:629] Waited for 168.294038ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods
	I0327 23:56:54.084292 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods
	I0327 23:56:54.084304 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:54.084316 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:54.084337 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:54.092921 1086621 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0327 23:56:54.099815 1086621 system_pods.go:59] 24 kube-system pods found
	I0327 23:56:54.099850 1086621 system_pods.go:61] "coredns-76f75df574-47npx" [968d63e4-f44a-4e52-b6c0-04e0ed1a068e] Running
	I0327 23:56:54.099856 1086621 system_pods.go:61] "coredns-76f75df574-msv9s" [7c549358-2f35-4345-aa7a-8bbbcfc4ef01] Running
	I0327 23:56:54.099862 1086621 system_pods.go:61] "etcd-ha-377576" [885cacaa-1b61-4f8b-90b5-3f7dbc9df4ad] Running
	I0327 23:56:54.099868 1086621 system_pods.go:61] "etcd-ha-377576-m02" [c3fa0266-db99-4bf1-a3b4-2f050d69e2ff] Running
	I0327 23:56:54.099873 1086621 system_pods.go:61] "etcd-ha-377576-m03" [57afa52e-1e76-4e4d-8398-ef919c6e4905] Running
	I0327 23:56:54.099878 1086621 system_pods.go:61] "kindnet-5zmtk" [4e75cdc5-22da-47f2-9833-b2f4eaa9caac] Running
	I0327 23:56:54.099886 1086621 system_pods.go:61] "kindnet-6wmmc" [ef36a453-2352-47f7-8a75-72abc4004e82] Running
	I0327 23:56:54.099894 1086621 system_pods.go:61] "kindnet-n8fpn" [223f6537-8296-4147-b72e-da25c00ce693] Running
	I0327 23:56:54.099900 1086621 system_pods.go:61] "kube-apiserver-ha-377576" [a1a979ea-0199-4e24-af63-c79b32a66c0e] Running
	I0327 23:56:54.099909 1086621 system_pods.go:61] "kube-apiserver-ha-377576-m02" [516bd332-2602-4380-aac0-3fd71f0834cb] Running
	I0327 23:56:54.099914 1086621 system_pods.go:61] "kube-apiserver-ha-377576-m03" [a0cf529d-7e29-4df8-9d57-7fa331f256aa] Running
	I0327 23:56:54.099921 1086621 system_pods.go:61] "kube-controller-manager-ha-377576" [f72d4847-2902-4e1f-8852-bdcc020a6099] Running
	I0327 23:56:54.099930 1086621 system_pods.go:61] "kube-controller-manager-ha-377576-m02" [a3e945b8-d18c-434c-b8a7-70510fbce333] Running
	I0327 23:56:54.099935 1086621 system_pods.go:61] "kube-controller-manager-ha-377576-m03" [3d21c9a3-5ed2-4d74-8979-05be2cd7957c] Running
	I0327 23:56:54.099941 1086621 system_pods.go:61] "kube-proxy-4t77p" [27eff0c9-9b45-4530-aba9-1a5e0ca60802] Running
	I0327 23:56:54.099949 1086621 system_pods.go:61] "kube-proxy-5plfq" [7598b740-38ad-4c94-a1e2-0420818e60d1] Running
	I0327 23:56:54.099955 1086621 system_pods.go:61] "kube-proxy-k9dcr" [07c785f3-3b08-4f43-b957-5f4092f757ea] Running
	I0327 23:56:54.099964 1086621 system_pods.go:61] "kube-scheduler-ha-377576" [6b97a544-a0e8-4c35-b93c-197f200da53b] Running
	I0327 23:56:54.099973 1086621 system_pods.go:61] "kube-scheduler-ha-377576-m02" [91c25780-d677-4394-9624-31dfaec279c3] Running
	I0327 23:56:54.099979 1086621 system_pods.go:61] "kube-scheduler-ha-377576-m03" [dbbf81ca-9fea-410e-bbf2-c7e4eecb043d] Running
	I0327 23:56:54.099986 1086621 system_pods.go:61] "kube-vip-ha-377576" [2d4dd5f7-c798-4a52-97f5-4bc068603373] Running
	I0327 23:56:54.099992 1086621 system_pods.go:61] "kube-vip-ha-377576-m02" [dde68b43-553a-4d1b-ad7f-5284653080e4] Running
	I0327 23:56:54.099997 1086621 system_pods.go:61] "kube-vip-ha-377576-m03" [e03923bf-eed7-4645-8673-e81441d197dd] Running
	I0327 23:56:54.100003 1086621 system_pods.go:61] "storage-provisioner" [9000645c-8323-43af-bd87-011d1574493c] Running
	I0327 23:56:54.100015 1086621 system_pods.go:74] duration metric: took 184.185451ms to wait for pod list to return data ...
	I0327 23:56:54.100029 1086621 default_sa.go:34] waiting for default service account to be created ...
	I0327 23:56:54.283420 1086621 request.go:629] Waited for 183.266157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/default/serviceaccounts
	I0327 23:56:54.283515 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/default/serviceaccounts
	I0327 23:56:54.283523 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:54.283531 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:54.283536 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:54.287299 1086621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:56:54.287441 1086621 default_sa.go:45] found service account: "default"
	I0327 23:56:54.287461 1086621 default_sa.go:55] duration metric: took 187.419615ms for default service account to be created ...
	I0327 23:56:54.287474 1086621 system_pods.go:116] waiting for k8s-apps to be running ...
	I0327 23:56:54.483401 1086621 request.go:629] Waited for 195.840614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods
	I0327 23:56:54.483479 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/namespaces/kube-system/pods
	I0327 23:56:54.483484 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:54.483493 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:54.483497 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:54.490936 1086621 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0327 23:56:54.498994 1086621 system_pods.go:86] 24 kube-system pods found
	I0327 23:56:54.499067 1086621 system_pods.go:89] "coredns-76f75df574-47npx" [968d63e4-f44a-4e52-b6c0-04e0ed1a068e] Running
	I0327 23:56:54.499081 1086621 system_pods.go:89] "coredns-76f75df574-msv9s" [7c549358-2f35-4345-aa7a-8bbbcfc4ef01] Running
	I0327 23:56:54.499092 1086621 system_pods.go:89] "etcd-ha-377576" [885cacaa-1b61-4f8b-90b5-3f7dbc9df4ad] Running
	I0327 23:56:54.499098 1086621 system_pods.go:89] "etcd-ha-377576-m02" [c3fa0266-db99-4bf1-a3b4-2f050d69e2ff] Running
	I0327 23:56:54.499108 1086621 system_pods.go:89] "etcd-ha-377576-m03" [57afa52e-1e76-4e4d-8398-ef919c6e4905] Running
	I0327 23:56:54.499116 1086621 system_pods.go:89] "kindnet-5zmtk" [4e75cdc5-22da-47f2-9833-b2f4eaa9caac] Running
	I0327 23:56:54.499121 1086621 system_pods.go:89] "kindnet-6wmmc" [ef36a453-2352-47f7-8a75-72abc4004e82] Running
	I0327 23:56:54.499131 1086621 system_pods.go:89] "kindnet-n8fpn" [223f6537-8296-4147-b72e-da25c00ce693] Running
	I0327 23:56:54.499140 1086621 system_pods.go:89] "kube-apiserver-ha-377576" [a1a979ea-0199-4e24-af63-c79b32a66c0e] Running
	I0327 23:56:54.499148 1086621 system_pods.go:89] "kube-apiserver-ha-377576-m02" [516bd332-2602-4380-aac0-3fd71f0834cb] Running
	I0327 23:56:54.499152 1086621 system_pods.go:89] "kube-apiserver-ha-377576-m03" [a0cf529d-7e29-4df8-9d57-7fa331f256aa] Running
	I0327 23:56:54.499159 1086621 system_pods.go:89] "kube-controller-manager-ha-377576" [f72d4847-2902-4e1f-8852-bdcc020a6099] Running
	I0327 23:56:54.499163 1086621 system_pods.go:89] "kube-controller-manager-ha-377576-m02" [a3e945b8-d18c-434c-b8a7-70510fbce333] Running
	I0327 23:56:54.499175 1086621 system_pods.go:89] "kube-controller-manager-ha-377576-m03" [3d21c9a3-5ed2-4d74-8979-05be2cd7957c] Running
	I0327 23:56:54.499186 1086621 system_pods.go:89] "kube-proxy-4t77p" [27eff0c9-9b45-4530-aba9-1a5e0ca60802] Running
	I0327 23:56:54.499199 1086621 system_pods.go:89] "kube-proxy-5plfq" [7598b740-38ad-4c94-a1e2-0420818e60d1] Running
	I0327 23:56:54.499208 1086621 system_pods.go:89] "kube-proxy-k9dcr" [07c785f3-3b08-4f43-b957-5f4092f757ea] Running
	I0327 23:56:54.499218 1086621 system_pods.go:89] "kube-scheduler-ha-377576" [6b97a544-a0e8-4c35-b93c-197f200da53b] Running
	I0327 23:56:54.499227 1086621 system_pods.go:89] "kube-scheduler-ha-377576-m02" [91c25780-d677-4394-9624-31dfaec279c3] Running
	I0327 23:56:54.499234 1086621 system_pods.go:89] "kube-scheduler-ha-377576-m03" [dbbf81ca-9fea-410e-bbf2-c7e4eecb043d] Running
	I0327 23:56:54.499238 1086621 system_pods.go:89] "kube-vip-ha-377576" [2d4dd5f7-c798-4a52-97f5-4bc068603373] Running
	I0327 23:56:54.499244 1086621 system_pods.go:89] "kube-vip-ha-377576-m02" [dde68b43-553a-4d1b-ad7f-5284653080e4] Running
	I0327 23:56:54.499248 1086621 system_pods.go:89] "kube-vip-ha-377576-m03" [e03923bf-eed7-4645-8673-e81441d197dd] Running
	I0327 23:56:54.499254 1086621 system_pods.go:89] "storage-provisioner" [9000645c-8323-43af-bd87-011d1574493c] Running
	I0327 23:56:54.499261 1086621 system_pods.go:126] duration metric: took 211.778744ms to wait for k8s-apps to be running ...
	I0327 23:56:54.499271 1086621 system_svc.go:44] waiting for kubelet service to be running ....
	I0327 23:56:54.499332 1086621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 23:56:54.516360 1086621 system_svc.go:56] duration metric: took 17.077136ms WaitForService to wait for kubelet
	I0327 23:56:54.516404 1086621 kubeadm.go:576] duration metric: took 18.073307914s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 23:56:54.516437 1086621 node_conditions.go:102] verifying NodePressure condition ...
	I0327 23:56:54.683906 1086621 request.go:629] Waited for 167.365703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.47:8443/api/v1/nodes
	I0327 23:56:54.683980 1086621 round_trippers.go:463] GET https://192.168.39.47:8443/api/v1/nodes
	I0327 23:56:54.683985 1086621 round_trippers.go:469] Request Headers:
	I0327 23:56:54.683993 1086621 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:56:54.683997 1086621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0327 23:56:54.691589 1086621 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0327 23:56:54.692913 1086621 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0327 23:56:54.692939 1086621 node_conditions.go:123] node cpu capacity is 2
	I0327 23:56:54.692953 1086621 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0327 23:56:54.692958 1086621 node_conditions.go:123] node cpu capacity is 2
	I0327 23:56:54.692964 1086621 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0327 23:56:54.692975 1086621 node_conditions.go:123] node cpu capacity is 2
	I0327 23:56:54.692988 1086621 node_conditions.go:105] duration metric: took 176.544348ms to run NodePressure ...
	I0327 23:56:54.693004 1086621 start.go:240] waiting for startup goroutines ...
	I0327 23:56:54.693033 1086621 start.go:254] writing updated cluster config ...
	I0327 23:56:54.693376 1086621 ssh_runner.go:195] Run: rm -f paused
	I0327 23:56:54.755056 1086621 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0327 23:56:54.757341 1086621 out.go:177] * Done! kubectl is now configured to use "ha-377576" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 28 00:01:25 ha-377576 crio[682]: time="2024-03-28 00:01:25.206407236Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711584085206384108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e370e126-9ae4-462b-9b7f-cfbe916e3736 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:01:25 ha-377576 crio[682]: time="2024-03-28 00:01:25.206962643Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=05d7f948-d210-4f10-b435-86b66b472d9d name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:01:25 ha-377576 crio[682]: time="2024-03-28 00:01:25.207036431Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05d7f948-d210-4f10-b435-86b66b472d9d name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:01:25 ha-377576 crio[682]: time="2024-03-28 00:01:25.207261400Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc41f34db32bf53a65c6af8f9f9eae933fb349e5555060707ec7131ab3a7b835,PodSandboxId:d8bf33d99bda1de3168ad1efb04ca03a74a918d2ca74b585a7fcdb3a108d229e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711583818896676098,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-78c89,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3272474d-5490-4c7c-9dfe-ded8488ec32f,},Annotations:map[string]string{io.kubernetes.container.hash: f4bc3217,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5198968b76968a091c33806ec424495c03395675b20e7eb35e330d16217211,PodSandboxId:78b0408435c31eb2c100690d28760dd80d92d11a1204f91418812c69660e79a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711583597995379318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-47npx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968d63e4-f44a-4e52-b6c0-04e0ed1a068e,},Annotations:map[string]string{io.kubernetes.container.hash: d58e1b37,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed9a38e9f6cd98054d1e672c652e36e6262ceb6eb2a3e14911a5178d222135e7,PodSandboxId:906a95ca7b9309653761922034b8a2dfca231a15f3b955bb1c166ebd569149c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711583597982373668,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-msv9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 7c549358-2f35-4345-aa7a-8bbbcfc4ef01,},Annotations:map[string]string{io.kubernetes.container.hash: 76b7b3af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381348b1458cea236fc315e0a9a42d269c69969b162efaa25de894ac4284ba88,PodSandboxId:bfc67c80fc55899fa456134d4af2ac6fa90fd9b5f87f5a582d5200171283b55c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1711583597583161047,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9000645c-8323-43af-bd87-011d1574493c,},Annotations:map[string]string{io.kubernetes.container.hash: 43103468,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:196de4c982b9c19fd66bac5f3fa839489745805e12542f317a366989b520706b,PodSandboxId:cea371a7b82b947e5fb342214a35fe253cd152ca1e8ed5c6cc068b4d719ce55e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711583
595748903770,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zmtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e75cdc5-22da-47f2-9833-b2f4eaa9caac,},Annotations:map[string]string{io.kubernetes.container.hash: 6486d0c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a226f01452a72cf6e9a608450715a12f6663ccf2697e8259f809e966809978ce,PodSandboxId:3f1239e30a953ae16c917e002c235d64ba2b2b2f9165e939c99bda7578eae785,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711583595702313227,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t77p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27eff0c9-9b45-4530-aba9-1a5e0ca60802,},Annotations:map[string]string{io.kubernetes.container.hash: d0985273,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f28af42c6db4a0efe0547d4442478084f4054bb5d4d47038a8a7f727ec1044df,PodSandboxId:893f7358a6722bb051ca8e38bb9af692a62e5ff985eb3863a033431715b43128,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711583579138103960,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b447b16a29b2ea987c7683714523f85a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22d460b8d6582d93d5633e1e1af46683647a5632f6b9153f61a6c374dca4f34c,PodSandboxId:97aabc5fbaef976f88ad3764ab080be7aeab5127cc15d872fe7107cf3126e072,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711583575918296538,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d18f050adf42c0d971a9903270a7b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f113e7564c47f0e2aa57dbb0702acbf86b1d75d00e9210d72d606a1b0505e5b,PodSandboxId:ac57491c8945575ea35326a6b572c93ff05b65a9f7c2a1f53e465d0b97a5fe09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711583575841481160,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6490cbc40210ad634becf13ac3a1705,},Annotations:map[string]string{io.kubernetes.container.hash: bc2c0f48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0128cd878ebdae2c6e217413183a103417714ffe00d55c1c92d1353e38238fa,PodSandboxId:bbb9d168e952fe414d641f2ec6a7e1e63a04205d4e2aae1c6ad9a2756c6443ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711583575849937975,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name
: etcd-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab33d6840338638cbdcd9ebe5fdd4d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7359b186,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afbf14c176818ba796eacd41f1600b5f09075c6a2b4f1edcad903530221f76ff,PodSandboxId:b75106f2dccc70e077dfb09e12c05d0942a02fa2c4894a1bd68ec76ca144eed7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711583575809836577,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-377576,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9eaf884653411ba1f22eb4cdbdfa748,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=05d7f948-d210-4f10-b435-86b66b472d9d name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:01:25 ha-377576 crio[682]: time="2024-03-28 00:01:25.257017808Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=31cc2251-dc9d-4891-8f17-8edf36aa854c name=/runtime.v1.RuntimeService/Version
	Mar 28 00:01:25 ha-377576 crio[682]: time="2024-03-28 00:01:25.257463437Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=31cc2251-dc9d-4891-8f17-8edf36aa854c name=/runtime.v1.RuntimeService/Version
	Mar 28 00:01:25 ha-377576 crio[682]: time="2024-03-28 00:01:25.262286070Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8eccdce1-ad97-4ae5-a7ea-654f884c289e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:01:25 ha-377576 crio[682]: time="2024-03-28 00:01:25.262781952Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711584085262755837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8eccdce1-ad97-4ae5-a7ea-654f884c289e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:01:25 ha-377576 crio[682]: time="2024-03-28 00:01:25.263429912Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d4052520-a64c-419a-8315-04d617ecb8ec name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:01:25 ha-377576 crio[682]: time="2024-03-28 00:01:25.263536562Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d4052520-a64c-419a-8315-04d617ecb8ec name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:01:25 ha-377576 crio[682]: time="2024-03-28 00:01:25.263938915Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc41f34db32bf53a65c6af8f9f9eae933fb349e5555060707ec7131ab3a7b835,PodSandboxId:d8bf33d99bda1de3168ad1efb04ca03a74a918d2ca74b585a7fcdb3a108d229e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711583818896676098,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-78c89,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3272474d-5490-4c7c-9dfe-ded8488ec32f,},Annotations:map[string]string{io.kubernetes.container.hash: f4bc3217,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5198968b76968a091c33806ec424495c03395675b20e7eb35e330d16217211,PodSandboxId:78b0408435c31eb2c100690d28760dd80d92d11a1204f91418812c69660e79a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711583597995379318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-47npx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968d63e4-f44a-4e52-b6c0-04e0ed1a068e,},Annotations:map[string]string{io.kubernetes.container.hash: d58e1b37,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed9a38e9f6cd98054d1e672c652e36e6262ceb6eb2a3e14911a5178d222135e7,PodSandboxId:906a95ca7b9309653761922034b8a2dfca231a15f3b955bb1c166ebd569149c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711583597982373668,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-msv9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 7c549358-2f35-4345-aa7a-8bbbcfc4ef01,},Annotations:map[string]string{io.kubernetes.container.hash: 76b7b3af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381348b1458cea236fc315e0a9a42d269c69969b162efaa25de894ac4284ba88,PodSandboxId:bfc67c80fc55899fa456134d4af2ac6fa90fd9b5f87f5a582d5200171283b55c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1711583597583161047,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9000645c-8323-43af-bd87-011d1574493c,},Annotations:map[string]string{io.kubernetes.container.hash: 43103468,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:196de4c982b9c19fd66bac5f3fa839489745805e12542f317a366989b520706b,PodSandboxId:cea371a7b82b947e5fb342214a35fe253cd152ca1e8ed5c6cc068b4d719ce55e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711583
595748903770,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zmtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e75cdc5-22da-47f2-9833-b2f4eaa9caac,},Annotations:map[string]string{io.kubernetes.container.hash: 6486d0c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a226f01452a72cf6e9a608450715a12f6663ccf2697e8259f809e966809978ce,PodSandboxId:3f1239e30a953ae16c917e002c235d64ba2b2b2f9165e939c99bda7578eae785,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711583595702313227,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t77p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27eff0c9-9b45-4530-aba9-1a5e0ca60802,},Annotations:map[string]string{io.kubernetes.container.hash: d0985273,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f28af42c6db4a0efe0547d4442478084f4054bb5d4d47038a8a7f727ec1044df,PodSandboxId:893f7358a6722bb051ca8e38bb9af692a62e5ff985eb3863a033431715b43128,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711583579138103960,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b447b16a29b2ea987c7683714523f85a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22d460b8d6582d93d5633e1e1af46683647a5632f6b9153f61a6c374dca4f34c,PodSandboxId:97aabc5fbaef976f88ad3764ab080be7aeab5127cc15d872fe7107cf3126e072,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711583575918296538,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d18f050adf42c0d971a9903270a7b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f113e7564c47f0e2aa57dbb0702acbf86b1d75d00e9210d72d606a1b0505e5b,PodSandboxId:ac57491c8945575ea35326a6b572c93ff05b65a9f7c2a1f53e465d0b97a5fe09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711583575841481160,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6490cbc40210ad634becf13ac3a1705,},Annotations:map[string]string{io.kubernetes.container.hash: bc2c0f48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0128cd878ebdae2c6e217413183a103417714ffe00d55c1c92d1353e38238fa,PodSandboxId:bbb9d168e952fe414d641f2ec6a7e1e63a04205d4e2aae1c6ad9a2756c6443ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711583575849937975,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name
: etcd-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab33d6840338638cbdcd9ebe5fdd4d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7359b186,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afbf14c176818ba796eacd41f1600b5f09075c6a2b4f1edcad903530221f76ff,PodSandboxId:b75106f2dccc70e077dfb09e12c05d0942a02fa2c4894a1bd68ec76ca144eed7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711583575809836577,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-377576,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9eaf884653411ba1f22eb4cdbdfa748,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d4052520-a64c-419a-8315-04d617ecb8ec name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:01:25 ha-377576 crio[682]: time="2024-03-28 00:01:25.306768585Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b75217b6-af5e-4c70-b24a-6baf7f6b3645 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:01:25 ha-377576 crio[682]: time="2024-03-28 00:01:25.306844759Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b75217b6-af5e-4c70-b24a-6baf7f6b3645 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:01:25 ha-377576 crio[682]: time="2024-03-28 00:01:25.307858103Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=21457977-636a-495b-9bfe-bea597987354 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:01:25 ha-377576 crio[682]: time="2024-03-28 00:01:25.308772636Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711584085308743161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=21457977-636a-495b-9bfe-bea597987354 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:01:25 ha-377576 crio[682]: time="2024-03-28 00:01:25.309327528Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=77f6268d-011e-4868-bced-74a99374c5eb name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:01:25 ha-377576 crio[682]: time="2024-03-28 00:01:25.309388116Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=77f6268d-011e-4868-bced-74a99374c5eb name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:01:25 ha-377576 crio[682]: time="2024-03-28 00:01:25.309700949Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc41f34db32bf53a65c6af8f9f9eae933fb349e5555060707ec7131ab3a7b835,PodSandboxId:d8bf33d99bda1de3168ad1efb04ca03a74a918d2ca74b585a7fcdb3a108d229e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711583818896676098,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-78c89,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3272474d-5490-4c7c-9dfe-ded8488ec32f,},Annotations:map[string]string{io.kubernetes.container.hash: f4bc3217,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5198968b76968a091c33806ec424495c03395675b20e7eb35e330d16217211,PodSandboxId:78b0408435c31eb2c100690d28760dd80d92d11a1204f91418812c69660e79a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711583597995379318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-47npx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968d63e4-f44a-4e52-b6c0-04e0ed1a068e,},Annotations:map[string]string{io.kubernetes.container.hash: d58e1b37,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed9a38e9f6cd98054d1e672c652e36e6262ceb6eb2a3e14911a5178d222135e7,PodSandboxId:906a95ca7b9309653761922034b8a2dfca231a15f3b955bb1c166ebd569149c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711583597982373668,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-msv9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 7c549358-2f35-4345-aa7a-8bbbcfc4ef01,},Annotations:map[string]string{io.kubernetes.container.hash: 76b7b3af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381348b1458cea236fc315e0a9a42d269c69969b162efaa25de894ac4284ba88,PodSandboxId:bfc67c80fc55899fa456134d4af2ac6fa90fd9b5f87f5a582d5200171283b55c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1711583597583161047,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9000645c-8323-43af-bd87-011d1574493c,},Annotations:map[string]string{io.kubernetes.container.hash: 43103468,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:196de4c982b9c19fd66bac5f3fa839489745805e12542f317a366989b520706b,PodSandboxId:cea371a7b82b947e5fb342214a35fe253cd152ca1e8ed5c6cc068b4d719ce55e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711583
595748903770,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zmtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e75cdc5-22da-47f2-9833-b2f4eaa9caac,},Annotations:map[string]string{io.kubernetes.container.hash: 6486d0c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a226f01452a72cf6e9a608450715a12f6663ccf2697e8259f809e966809978ce,PodSandboxId:3f1239e30a953ae16c917e002c235d64ba2b2b2f9165e939c99bda7578eae785,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711583595702313227,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t77p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27eff0c9-9b45-4530-aba9-1a5e0ca60802,},Annotations:map[string]string{io.kubernetes.container.hash: d0985273,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f28af42c6db4a0efe0547d4442478084f4054bb5d4d47038a8a7f727ec1044df,PodSandboxId:893f7358a6722bb051ca8e38bb9af692a62e5ff985eb3863a033431715b43128,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711583579138103960,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b447b16a29b2ea987c7683714523f85a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22d460b8d6582d93d5633e1e1af46683647a5632f6b9153f61a6c374dca4f34c,PodSandboxId:97aabc5fbaef976f88ad3764ab080be7aeab5127cc15d872fe7107cf3126e072,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711583575918296538,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d18f050adf42c0d971a9903270a7b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f113e7564c47f0e2aa57dbb0702acbf86b1d75d00e9210d72d606a1b0505e5b,PodSandboxId:ac57491c8945575ea35326a6b572c93ff05b65a9f7c2a1f53e465d0b97a5fe09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711583575841481160,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6490cbc40210ad634becf13ac3a1705,},Annotations:map[string]string{io.kubernetes.container.hash: bc2c0f48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0128cd878ebdae2c6e217413183a103417714ffe00d55c1c92d1353e38238fa,PodSandboxId:bbb9d168e952fe414d641f2ec6a7e1e63a04205d4e2aae1c6ad9a2756c6443ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711583575849937975,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name
: etcd-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab33d6840338638cbdcd9ebe5fdd4d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7359b186,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afbf14c176818ba796eacd41f1600b5f09075c6a2b4f1edcad903530221f76ff,PodSandboxId:b75106f2dccc70e077dfb09e12c05d0942a02fa2c4894a1bd68ec76ca144eed7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711583575809836577,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-377576,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9eaf884653411ba1f22eb4cdbdfa748,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=77f6268d-011e-4868-bced-74a99374c5eb name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:01:25 ha-377576 crio[682]: time="2024-03-28 00:01:25.358090142Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d15bbb5f-4ced-42ab-abef-2274e6e3b37b name=/runtime.v1.RuntimeService/Version
	Mar 28 00:01:25 ha-377576 crio[682]: time="2024-03-28 00:01:25.358183445Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d15bbb5f-4ced-42ab-abef-2274e6e3b37b name=/runtime.v1.RuntimeService/Version
	Mar 28 00:01:25 ha-377576 crio[682]: time="2024-03-28 00:01:25.359426624Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2bf8e990-9f9e-421d-a282-7191bbc74e22 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:01:25 ha-377576 crio[682]: time="2024-03-28 00:01:25.359964513Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711584085359941014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2bf8e990-9f9e-421d-a282-7191bbc74e22 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:01:25 ha-377576 crio[682]: time="2024-03-28 00:01:25.360819191Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=55ddd686-5450-4823-bd1e-47760a394c73 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:01:25 ha-377576 crio[682]: time="2024-03-28 00:01:25.360894409Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=55ddd686-5450-4823-bd1e-47760a394c73 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:01:25 ha-377576 crio[682]: time="2024-03-28 00:01:25.361116712Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc41f34db32bf53a65c6af8f9f9eae933fb349e5555060707ec7131ab3a7b835,PodSandboxId:d8bf33d99bda1de3168ad1efb04ca03a74a918d2ca74b585a7fcdb3a108d229e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711583818896676098,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-78c89,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3272474d-5490-4c7c-9dfe-ded8488ec32f,},Annotations:map[string]string{io.kubernetes.container.hash: f4bc3217,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5198968b76968a091c33806ec424495c03395675b20e7eb35e330d16217211,PodSandboxId:78b0408435c31eb2c100690d28760dd80d92d11a1204f91418812c69660e79a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711583597995379318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-47npx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968d63e4-f44a-4e52-b6c0-04e0ed1a068e,},Annotations:map[string]string{io.kubernetes.container.hash: d58e1b37,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed9a38e9f6cd98054d1e672c652e36e6262ceb6eb2a3e14911a5178d222135e7,PodSandboxId:906a95ca7b9309653761922034b8a2dfca231a15f3b955bb1c166ebd569149c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711583597982373668,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-msv9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 7c549358-2f35-4345-aa7a-8bbbcfc4ef01,},Annotations:map[string]string{io.kubernetes.container.hash: 76b7b3af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:381348b1458cea236fc315e0a9a42d269c69969b162efaa25de894ac4284ba88,PodSandboxId:bfc67c80fc55899fa456134d4af2ac6fa90fd9b5f87f5a582d5200171283b55c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1711583597583161047,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9000645c-8323-43af-bd87-011d1574493c,},Annotations:map[string]string{io.kubernetes.container.hash: 43103468,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:196de4c982b9c19fd66bac5f3fa839489745805e12542f317a366989b520706b,PodSandboxId:cea371a7b82b947e5fb342214a35fe253cd152ca1e8ed5c6cc068b4d719ce55e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711583
595748903770,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zmtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e75cdc5-22da-47f2-9833-b2f4eaa9caac,},Annotations:map[string]string{io.kubernetes.container.hash: 6486d0c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a226f01452a72cf6e9a608450715a12f6663ccf2697e8259f809e966809978ce,PodSandboxId:3f1239e30a953ae16c917e002c235d64ba2b2b2f9165e939c99bda7578eae785,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711583595702313227,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t77p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27eff0c9-9b45-4530-aba9-1a5e0ca60802,},Annotations:map[string]string{io.kubernetes.container.hash: d0985273,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f28af42c6db4a0efe0547d4442478084f4054bb5d4d47038a8a7f727ec1044df,PodSandboxId:893f7358a6722bb051ca8e38bb9af692a62e5ff985eb3863a033431715b43128,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711583579138103960,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b447b16a29b2ea987c7683714523f85a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22d460b8d6582d93d5633e1e1af46683647a5632f6b9153f61a6c374dca4f34c,PodSandboxId:97aabc5fbaef976f88ad3764ab080be7aeab5127cc15d872fe7107cf3126e072,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711583575918296538,Labels:map[string]string{io.kubernetes.container
.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d18f050adf42c0d971a9903270a7b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f113e7564c47f0e2aa57dbb0702acbf86b1d75d00e9210d72d606a1b0505e5b,PodSandboxId:ac57491c8945575ea35326a6b572c93ff05b65a9f7c2a1f53e465d0b97a5fe09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711583575841481160,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6490cbc40210ad634becf13ac3a1705,},Annotations:map[string]string{io.kubernetes.container.hash: bc2c0f48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0128cd878ebdae2c6e217413183a103417714ffe00d55c1c92d1353e38238fa,PodSandboxId:bbb9d168e952fe414d641f2ec6a7e1e63a04205d4e2aae1c6ad9a2756c6443ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711583575849937975,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name
: etcd-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab33d6840338638cbdcd9ebe5fdd4d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7359b186,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afbf14c176818ba796eacd41f1600b5f09075c6a2b4f1edcad903530221f76ff,PodSandboxId:b75106f2dccc70e077dfb09e12c05d0942a02fa2c4894a1bd68ec76ca144eed7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711583575809836577,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-377576,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9eaf884653411ba1f22eb4cdbdfa748,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=55ddd686-5450-4823-bd1e-47760a394c73 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fc41f34db32bf       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   d8bf33d99bda1       busybox-7fdf7869d9-78c89
	1d5198968b769       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago       Running             coredns                   0                   78b0408435c31       coredns-76f75df574-47npx
	ed9a38e9f6cd9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago       Running             coredns                   0                   906a95ca7b930       coredns-76f75df574-msv9s
	381348b1458ce       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Running             storage-provisioner       0                   bfc67c80fc558       storage-provisioner
	196de4c982b9c       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      8 minutes ago       Running             kindnet-cni               0                   cea371a7b82b9       kindnet-5zmtk
	a226f01452a72       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      8 minutes ago       Running             kube-proxy                0                   3f1239e30a953       kube-proxy-4t77p
	f28af42c6db4a       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     8 minutes ago       Running             kube-vip                  0                   893f7358a6722       kube-vip-ha-377576
	22d460b8d6582       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      8 minutes ago       Running             kube-controller-manager   0                   97aabc5fbaef9       kube-controller-manager-ha-377576
	a0128cd878ebd       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago       Running             etcd                      0                   bbb9d168e952f       etcd-ha-377576
	5f113e7564c47       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      8 minutes ago       Running             kube-apiserver            0                   ac57491c89455       kube-apiserver-ha-377576
	afbf14c176818       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      8 minutes ago       Running             kube-scheduler            0                   b75106f2dccc7       kube-scheduler-ha-377576
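	The table above is CRI-O's own view of the containers running on ha-377576. As a hypothetical way to reproduce it when debugging this run locally, the report's existing CLI convention can be reused (crictl and its flags below are standard cri-tools usage; this is a sketch, not part of the captured run):
	
	    out/minikube-linux-amd64 -p ha-377576 ssh "sudo crictl ps -a"           # table form, as printed above
	    out/minikube-linux-amd64 -p ha-377576 ssh "sudo crictl ps -a -o json"   # same data as the ListContainers responses in the crio log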
	
	
	==> coredns [1d5198968b76968a091c33806ec424495c03395675b20e7eb35e330d16217211] <==
	[INFO] 10.244.0.4:39660 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002146785s
	[INFO] 10.244.0.4:50403 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000179658s
	[INFO] 10.244.0.4:56935 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000188852s
	[INFO] 10.244.0.4:48453 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000160917s
	[INFO] 10.244.0.4:36560 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067457s
	[INFO] 10.244.2.2:60611 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00032109s
	[INFO] 10.244.2.2:33575 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000137606s
	[INFO] 10.244.2.2:52980 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106821s
	[INFO] 10.244.2.2:50141 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114136s
	[INFO] 10.244.1.2:48883 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154613s
	[INFO] 10.244.1.2:60634 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000118063s
	[INFO] 10.244.1.2:39068 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170354s
	[INFO] 10.244.0.4:42784 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130962s
	[INFO] 10.244.0.4:58150 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087285s
	[INFO] 10.244.0.4:44129 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081095s
	[INFO] 10.244.0.4:44169 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047878s
	[INFO] 10.244.2.2:38674 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113751s
	[INFO] 10.244.1.2:52689 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000279728s
	[INFO] 10.244.0.4:54702 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138182s
	[INFO] 10.244.0.4:33994 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000143246s
	[INFO] 10.244.0.4:59928 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000149415s
	[INFO] 10.244.0.4:48254 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000119791s
	[INFO] 10.244.2.2:38914 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113463s
	[INFO] 10.244.2.2:45000 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084412s
	[INFO] 10.244.2.2:45899 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000082622s
	
	
	==> coredns [ed9a38e9f6cd98054d1e672c652e36e6262ceb6eb2a3e14911a5178d222135e7] <==
	[INFO] 10.244.1.2:48521 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.010184678s
	[INFO] 10.244.0.4:54036 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157882s
	[INFO] 10.244.0.4:33757 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000062762s
	[INFO] 10.244.1.2:42842 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000182746s
	[INFO] 10.244.1.2:38978 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003075134s
	[INFO] 10.244.1.2:39882 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000227134s
	[INFO] 10.244.1.2:36591 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000230513s
	[INFO] 10.244.1.2:39147 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002683128s
	[INFO] 10.244.1.2:57485 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000145666s
	[INFO] 10.244.1.2:50733 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000171259s
	[INFO] 10.244.0.4:38643 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147285s
	[INFO] 10.244.0.4:54253 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00151748s
	[INFO] 10.244.0.4:55400 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105715s
	[INFO] 10.244.2.2:37662 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00219357s
	[INFO] 10.244.2.2:39646 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000125023s
	[INFO] 10.244.2.2:33350 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001640561s
	[INFO] 10.244.2.2:40494 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076386s
	[INFO] 10.244.1.2:45207 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000150664s
	[INFO] 10.244.2.2:56881 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000230324s
	[INFO] 10.244.2.2:46450 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102951s
	[INFO] 10.244.2.2:49186 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107347s
	[INFO] 10.244.1.2:32923 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00033097s
	[INFO] 10.244.1.2:38607 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000207486s
	[INFO] 10.244.1.2:54186 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000187929s
	[INFO] 10.244.2.2:59559 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147121s
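	The CoreDNS query logs above can be exercised directly from the busybox pod already running in this cluster; a minimal sketch, with the context and pod name taken from the logs rather than from the captured run itself:
	
	    kubectl --context ha-377576 exec busybox-7fdf7869d9-78c89 -- nslookup kubernetes.default
	    kubectl --context ha-377576 exec busybox-7fdf7869d9-78c89 -- nslookup host.minikube.internal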
	
	
	==> describe nodes <==
	Name:               ha-377576
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-377576
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=ha-377576
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_27T23_53_03_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 23:53:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-377576
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 00:01:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 27 Mar 2024 23:57:08 +0000   Wed, 27 Mar 2024 23:53:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 27 Mar 2024 23:57:08 +0000   Wed, 27 Mar 2024 23:53:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 27 Mar 2024 23:57:08 +0000   Wed, 27 Mar 2024 23:53:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 27 Mar 2024 23:57:08 +0000   Wed, 27 Mar 2024 23:53:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.47
	  Hostname:    ha-377576
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 548afee7a42c42209042fc22e933a640
	  System UUID:                548afee7-a42c-4220-9042-fc22e933a640
	  Boot ID:                    446624d0-3e4c-494a-bf42-903d59e41c0c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-78c89             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 coredns-76f75df574-47npx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m10s
	  kube-system                 coredns-76f75df574-msv9s             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m10s
	  kube-system                 etcd-ha-377576                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m23s
	  kube-system                 kindnet-5zmtk                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m11s
	  kube-system                 kube-apiserver-ha-377576             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-controller-manager-ha-377576    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-proxy-4t77p                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 kube-scheduler-ha-377576             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-vip-ha-377576                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m9s   kube-proxy       
	  Normal  Starting                 8m23s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m23s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m23s  kubelet          Node ha-377576 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m23s  kubelet          Node ha-377576 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m23s  kubelet          Node ha-377576 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m11s  node-controller  Node ha-377576 event: Registered Node ha-377576 in Controller
	  Normal  NodeReady                8m8s   kubelet          Node ha-377576 status is now: NodeReady
	  Normal  RegisteredNode           5m47s  node-controller  Node ha-377576 event: Registered Node ha-377576 in Controller
	  Normal  RegisteredNode           4m35s  node-controller  Node ha-377576 event: Registered Node ha-377576 in Controller
	
	
	Name:               ha-377576-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-377576-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=ha-377576
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_27T23_55_23_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 23:55:20 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-377576-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 27 Mar 2024 23:58:03 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 27 Mar 2024 23:57:23 +0000   Wed, 27 Mar 2024 23:58:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 27 Mar 2024 23:57:23 +0000   Wed, 27 Mar 2024 23:58:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 27 Mar 2024 23:57:23 +0000   Wed, 27 Mar 2024 23:58:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 27 Mar 2024 23:57:23 +0000   Wed, 27 Mar 2024 23:58:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.117
	  Hostname:    ha-377576-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e8bdd7497a164e8f88f2bc1a3706be52
	  System UUID:                e8bdd749-7a16-4e8f-88f2-bc1a3706be52
	  Boot ID:                    9b021c57-de29-4df2-84eb-a4b0b13be45a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-2dqtf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 etcd-ha-377576-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m3s
	  kube-system                 kindnet-6wmmc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m5s
	  kube-system                 kube-apiserver-ha-377576-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-controller-manager-ha-377576-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-proxy-k9dcr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-scheduler-ha-377576-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-vip-ha-377576-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 6m1s                 kube-proxy       
	  Normal  Starting                 6m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m5s (x8 over 6m5s)  kubelet          Node ha-377576-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m5s (x8 over 6m5s)  kubelet          Node ha-377576-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m5s (x7 over 6m5s)  kubelet          Node ha-377576-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m1s                 node-controller  Node ha-377576-m02 event: Registered Node ha-377576-m02 in Controller
	  Normal  RegisteredNode           5m47s                node-controller  Node ha-377576-m02 event: Registered Node ha-377576-m02 in Controller
	  Normal  RegisteredNode           4m35s                node-controller  Node ha-377576-m02 event: Registered Node ha-377576-m02 in Controller
	  Normal  NodeNotReady             2m41s                node-controller  Node ha-377576-m02 status is now: NodeNotReady
	
	
	Name:               ha-377576-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-377576-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=ha-377576
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_27T23_56_36_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 23:56:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-377576-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 00:01:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 27 Mar 2024 23:57:02 +0000   Wed, 27 Mar 2024 23:56:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 27 Mar 2024 23:57:02 +0000   Wed, 27 Mar 2024 23:56:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 27 Mar 2024 23:57:02 +0000   Wed, 27 Mar 2024 23:56:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 27 Mar 2024 23:57:02 +0000   Wed, 27 Mar 2024 23:56:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    ha-377576-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 71074434be55477c85d1de1bbea96887
	  System UUID:                71074434-be55-477c-85d1-de1bbea96887
	  Boot ID:                    772f8d7c-e549-4957-ae7c-91dfd2921db0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-jrh7n                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 etcd-ha-377576-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m53s
	  kube-system                 kindnet-n8fpn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m54s
	  kube-system                 kube-apiserver-ha-377576-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-controller-manager-ha-377576-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-proxy-5plfq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-scheduler-ha-377576-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-vip-ha-377576-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m54s (x8 over 4m54s)  kubelet          Node ha-377576-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m54s (x8 over 4m54s)  kubelet          Node ha-377576-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m54s (x7 over 4m54s)  kubelet          Node ha-377576-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m52s                  node-controller  Node ha-377576-m03 event: Registered Node ha-377576-m03 in Controller
	  Normal  RegisteredNode           4m51s                  node-controller  Node ha-377576-m03 event: Registered Node ha-377576-m03 in Controller
	  Normal  RegisteredNode           4m35s                  node-controller  Node ha-377576-m03 event: Registered Node ha-377576-m03 in Controller
	
	
	Name:               ha-377576-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-377576-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=ha-377576
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_27T23_57_34_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 23:57:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-377576-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 00:01:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 27 Mar 2024 23:58:04 +0000   Wed, 27 Mar 2024 23:57:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 27 Mar 2024 23:58:04 +0000   Wed, 27 Mar 2024 23:57:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 27 Mar 2024 23:58:04 +0000   Wed, 27 Mar 2024 23:57:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 27 Mar 2024 23:58:04 +0000   Wed, 27 Mar 2024 23:57:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.93
	  Hostname:    ha-377576-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9888e36a359a48f1aa6b97712e7f2662
	  System UUID:                9888e36a-359a-48f1-aa6b-97712e7f2662
	  Boot ID:                    952cc36b-038c-4c06-a7c6-406fd5b9d995
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-57xkj       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m52s
	  kube-system                 kube-proxy-nsmbj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m52s (x3 over 3m52s)  kubelet          Node ha-377576-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m52s (x3 over 3m52s)  kubelet          Node ha-377576-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m52s (x3 over 3m52s)  kubelet          Node ha-377576-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-377576-m04 event: Registered Node ha-377576-m04 in Controller
	  Normal  RegisteredNode           3m50s                  node-controller  Node ha-377576-m04 event: Registered Node ha-377576-m04 in Controller
	  Normal  RegisteredNode           3m47s                  node-controller  Node ha-377576-m04 event: Registered Node ha-377576-m04 in Controller
	  Normal  NodeReady                3m43s                  kubelet          Node ha-377576-m04 status is now: NodeReady
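	Of the four nodes described above, only ha-377576-m02 is unhealthy: its conditions flipped to Unknown once the kubelet stopped posting status. A compact way to pull just the Ready condition for every node with standard kubectl jsonpath (a sketch, not part of the captured run):
	
	    kubectl --context ha-377576 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'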
	
	
	==> dmesg <==
	[Mar27 23:52] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052795] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040776] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.536087] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.734753] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.644341] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.445381] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.055911] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058244] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.192360] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.112715] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.267509] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.568474] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.064108] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.418967] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +1.239042] kauditd_printk_skb: 57 callbacks suppressed
	[Mar27 23:53] kauditd_printk_skb: 40 callbacks suppressed
	[  +0.989248] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	[ +13.165128] kauditd_printk_skb: 15 callbacks suppressed
	[Mar27 23:55] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [a0128cd878ebdae2c6e217413183a103417714ffe00d55c1c92d1353e38238fa] <==
	{"level":"warn","ts":"2024-03-28T00:01:25.624071Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:01:25.658989Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:01:25.667592Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:01:25.672689Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:01:25.689364Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:01:25.698897Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:01:25.706228Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:01:25.712085Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:01:25.71664Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:01:25.723998Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:01:25.729089Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:01:25.736445Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:01:25.746918Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:01:25.752861Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:01:25.757226Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:01:25.765927Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:01:25.774565Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:01:25.781898Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:01:25.786772Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:01:25.787972Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:01:25.791784Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:01:25.799595Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:01:25.806208Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:01:25.820797Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:01:25.824585Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:01:25 up 9 min,  0 users,  load average: 0.96, 0.77, 0.35
	Linux ha-377576 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [196de4c982b9c19fd66bac5f3fa839489745805e12542f317a366989b520706b] <==
	I0328 00:00:47.397722       1 main.go:250] Node ha-377576-m04 has CIDR [10.244.3.0/24] 
	I0328 00:00:57.412807       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0328 00:00:57.412904       1 main.go:227] handling current node
	I0328 00:00:57.412927       1 main.go:223] Handling node with IPs: map[192.168.39.117:{}]
	I0328 00:00:57.412944       1 main.go:250] Node ha-377576-m02 has CIDR [10.244.1.0/24] 
	I0328 00:00:57.413078       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0328 00:00:57.413098       1 main.go:250] Node ha-377576-m03 has CIDR [10.244.2.0/24] 
	I0328 00:00:57.413166       1 main.go:223] Handling node with IPs: map[192.168.39.93:{}]
	I0328 00:00:57.413185       1 main.go:250] Node ha-377576-m04 has CIDR [10.244.3.0/24] 
	I0328 00:01:07.427376       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0328 00:01:07.427424       1 main.go:227] handling current node
	I0328 00:01:07.427435       1 main.go:223] Handling node with IPs: map[192.168.39.117:{}]
	I0328 00:01:07.427441       1 main.go:250] Node ha-377576-m02 has CIDR [10.244.1.0/24] 
	I0328 00:01:07.427609       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0328 00:01:07.427636       1 main.go:250] Node ha-377576-m03 has CIDR [10.244.2.0/24] 
	I0328 00:01:07.427684       1 main.go:223] Handling node with IPs: map[192.168.39.93:{}]
	I0328 00:01:07.427689       1 main.go:250] Node ha-377576-m04 has CIDR [10.244.3.0/24] 
	I0328 00:01:17.443555       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0328 00:01:17.443644       1 main.go:227] handling current node
	I0328 00:01:17.443660       1 main.go:223] Handling node with IPs: map[192.168.39.117:{}]
	I0328 00:01:17.443666       1 main.go:250] Node ha-377576-m02 has CIDR [10.244.1.0/24] 
	I0328 00:01:17.443783       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0328 00:01:17.443788       1 main.go:250] Node ha-377576-m03 has CIDR [10.244.2.0/24] 
	I0328 00:01:17.443833       1 main.go:223] Handling node with IPs: map[192.168.39.93:{}]
	I0328 00:01:17.443861       1 main.go:250] Node ha-377576-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [5f113e7564c47f0e2aa57dbb0702acbf86b1d75d00e9210d72d606a1b0505e5b] <==
	I0327 23:52:58.602484       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0327 23:52:58.602589       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0327 23:52:58.602912       1 aggregator.go:165] initial CRD sync complete...
	I0327 23:52:58.602950       1 autoregister_controller.go:141] Starting autoregister controller
	I0327 23:52:58.602957       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0327 23:52:58.602962       1 cache.go:39] Caches are synced for autoregister controller
	I0327 23:52:58.656155       1 controller.go:624] quota admission added evaluator for: namespaces
	I0327 23:52:58.669066       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0327 23:52:58.685463       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0327 23:52:58.763771       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0327 23:52:59.502968       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0327 23:52:59.508615       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0327 23:52:59.508680       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0327 23:53:00.133191       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0327 23:53:00.184659       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0327 23:53:00.331947       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0327 23:53:00.346384       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.47]
	I0327 23:53:00.347287       1 controller.go:624] quota admission added evaluator for: endpoints
	I0327 23:53:00.351876       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0327 23:53:00.528981       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0327 23:53:02.496870       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0327 23:53:02.517479       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0327 23:53:02.530303       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0327 23:53:14.644261       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0327 23:53:15.098771       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [22d460b8d6582d93d5633e1e1af46683647a5632f6b9153f61a6c374dca4f34c] <==
	I0327 23:56:59.802638       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="139.102µs"
	E0327 23:57:33.565264       1 certificate_controller.go:146] Sync csr-c72s4 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-c72s4": the object has been modified; please apply your changes to the latest version and try again
	I0327 23:57:33.600483       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-377576-m04\" does not exist"
	I0327 23:57:33.652219       1 range_allocator.go:380] "Set node PodCIDR" node="ha-377576-m04" podCIDRs=["10.244.3.0/24"]
	I0327 23:57:33.660867       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-tzvbj"
	I0327 23:57:33.661119       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nsmbj"
	I0327 23:57:33.750910       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-t6nx6"
	I0327 23:57:33.772460       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-tzvbj"
	I0327 23:57:33.856750       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-prljv"
	I0327 23:57:33.871118       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-pn9dw"
	I0327 23:57:34.129097       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-377576-m04"
	I0327 23:57:34.129327       1 event.go:376] "Event occurred" object="ha-377576-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-377576-m04 event: Registered Node ha-377576-m04 in Controller"
	I0327 23:57:43.008283       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-377576-m04"
	I0327 23:58:44.158629       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-377576-m04"
	I0327 23:58:44.159072       1 event.go:376] "Event occurred" object="ha-377576-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-377576-m02 status is now: NodeNotReady"
	I0327 23:58:44.179770       1 event.go:376] "Event occurred" object="kube-system/etcd-ha-377576-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0327 23:58:44.203197       1 event.go:376] "Event occurred" object="kube-system/kube-vip-ha-377576-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0327 23:58:44.213596       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-2dqtf" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0327 23:58:44.230027       1 event.go:376] "Event occurred" object="kube-system/kindnet-6wmmc" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0327 23:58:44.256763       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-k9dcr" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0327 23:58:44.285708       1 event.go:376] "Event occurred" object="kube-system/kube-controller-manager-ha-377576-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0327 23:58:44.289987       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="75.746521ms"
	I0327 23:58:44.290123       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="45.009µs"
	I0327 23:58:44.307661       1 event.go:376] "Event occurred" object="kube-system/kube-apiserver-ha-377576-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0327 23:58:44.343032       1 event.go:376] "Event occurred" object="kube-system/kube-scheduler-ha-377576-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-proxy [a226f01452a72cf6e9a608450715a12f6663ccf2697e8259f809e966809978ce] <==
	I0327 23:53:15.959892       1 server_others.go:72] "Using iptables proxy"
	I0327 23:53:15.983320       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.47"]
	I0327 23:53:16.055266       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0327 23:53:16.055358       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0327 23:53:16.055446       1 server_others.go:168] "Using iptables Proxier"
	I0327 23:53:16.064618       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0327 23:53:16.065411       1 server.go:865] "Version info" version="v1.29.3"
	I0327 23:53:16.065456       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0327 23:53:16.072197       1 config.go:188] "Starting service config controller"
	I0327 23:53:16.072660       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0327 23:53:16.072718       1 config.go:97] "Starting endpoint slice config controller"
	I0327 23:53:16.072726       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0327 23:53:16.074737       1 config.go:315] "Starting node config controller"
	I0327 23:53:16.074765       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0327 23:53:16.172890       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0327 23:53:16.172897       1 shared_informer.go:318] Caches are synced for service config
	I0327 23:53:16.175683       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [afbf14c176818ba796eacd41f1600b5f09075c6a2b4f1edcad903530221f76ff] <==
	W0327 23:52:58.721691       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0327 23:52:58.721703       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0327 23:52:58.721890       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0327 23:52:58.721901       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0327 23:52:58.726852       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0327 23:52:58.726931       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0327 23:52:58.727134       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0327 23:52:58.727145       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0327 23:52:58.727180       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0327 23:52:58.727191       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0327 23:52:58.727312       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0327 23:52:58.728036       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0327 23:52:59.556968       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0327 23:52:59.557037       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0327 23:52:59.658438       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0327 23:52:59.658484       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0327 23:52:59.748727       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0327 23:52:59.748764       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0327 23:52:59.830540       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0327 23:52:59.830590       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0327 23:52:59.911289       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0327 23:52:59.911418       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0327 23:53:00.016951       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0327 23:53:00.017332       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0327 23:53:03.289815       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 27 23:57:02 ha-377576 kubelet[1383]: E0327 23:57:02.708996    1383 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 27 23:57:02 ha-377576 kubelet[1383]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 27 23:57:02 ha-377576 kubelet[1383]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 27 23:57:02 ha-377576 kubelet[1383]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 27 23:57:02 ha-377576 kubelet[1383]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 27 23:58:02 ha-377576 kubelet[1383]: E0327 23:58:02.709132    1383 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 27 23:58:02 ha-377576 kubelet[1383]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 27 23:58:02 ha-377576 kubelet[1383]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 27 23:58:02 ha-377576 kubelet[1383]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 27 23:58:02 ha-377576 kubelet[1383]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 27 23:59:02 ha-377576 kubelet[1383]: E0327 23:59:02.709132    1383 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 27 23:59:02 ha-377576 kubelet[1383]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 27 23:59:02 ha-377576 kubelet[1383]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 27 23:59:02 ha-377576 kubelet[1383]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 27 23:59:02 ha-377576 kubelet[1383]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 00:00:02 ha-377576 kubelet[1383]: E0328 00:00:02.709291    1383 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 00:00:02 ha-377576 kubelet[1383]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 00:00:02 ha-377576 kubelet[1383]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 00:00:02 ha-377576 kubelet[1383]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 00:00:02 ha-377576 kubelet[1383]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 00:01:02 ha-377576 kubelet[1383]: E0328 00:01:02.709883    1383 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 00:01:02 ha-377576 kubelet[1383]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 00:01:02 ha-377576 kubelet[1383]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 00:01:02 ha-377576 kubelet[1383]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 00:01:02 ha-377576 kubelet[1383]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-377576 -n ha-377576
helpers_test.go:261: (dbg) Run:  kubectl --context ha-377576 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (56.40s)
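
Note: the kubelet log above fails its periodic iptables canary because the ip6tables `nat` table is not available in the guest kernel of the minikube Buildroot image. A minimal triage sketch, assuming shell access to the node through the minikube CLI; the module name ip6table_nat is the usual one for a stock kernel and is an assumption, not something this report confirms:

	# open a shell on the control-plane node of the ha-377576 profile
	minikube ssh -p ha-377576
	# check whether the IPv6 nat table exists (this is what the canary probes)
	sudo ip6tables -t nat -L -n
	# if it reports "Table does not exist", see whether the module is present or can be loaded
	lsmod | grep ip6table_nat
	sudo modprobe ip6table_nat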

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (397.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-377576 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-377576 -v=7 --alsologtostderr
E0328 00:01:48.893858 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
E0328 00:02:37.403101 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-377576 -v=7 --alsologtostderr: exit status 82 (2m2.04509457s)

                                                
                                                
-- stdout --
	* Stopping node "ha-377576-m04"  ...
	* Stopping node "ha-377576-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 00:01:27.493066 1092156 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:01:27.493593 1092156 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:01:27.493646 1092156 out.go:304] Setting ErrFile to fd 2...
	I0328 00:01:27.493663 1092156 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:01:27.494141 1092156 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 00:01:27.494969 1092156 out.go:298] Setting JSON to false
	I0328 00:01:27.495084 1092156 mustload.go:65] Loading cluster: ha-377576
	I0328 00:01:27.495527 1092156 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:01:27.495644 1092156 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/config.json ...
	I0328 00:01:27.495843 1092156 mustload.go:65] Loading cluster: ha-377576
	I0328 00:01:27.496132 1092156 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:01:27.496192 1092156 stop.go:39] StopHost: ha-377576-m04
	I0328 00:01:27.496680 1092156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:27.496740 1092156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:27.512296 1092156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41615
	I0328 00:01:27.512936 1092156 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:27.513743 1092156 main.go:141] libmachine: Using API Version  1
	I0328 00:01:27.513779 1092156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:27.514182 1092156 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:27.516740 1092156 out.go:177] * Stopping node "ha-377576-m04"  ...
	I0328 00:01:27.518407 1092156 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0328 00:01:27.518441 1092156 main.go:141] libmachine: (ha-377576-m04) Calling .DriverName
	I0328 00:01:27.518730 1092156 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0328 00:01:27.518770 1092156 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHHostname
	I0328 00:01:27.521653 1092156 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:01:27.522071 1092156 main.go:141] libmachine: (ha-377576-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:c2:81", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:57:19 +0000 UTC Type:0 Mac:52:54:00:6a:c2:81 Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:ha-377576-m04 Clientid:01:52:54:00:6a:c2:81}
	I0328 00:01:27.522102 1092156 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined IP address 192.168.39.93 and MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:01:27.522258 1092156 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHPort
	I0328 00:01:27.522449 1092156 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHKeyPath
	I0328 00:01:27.522615 1092156 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHUsername
	I0328 00:01:27.522728 1092156 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m04/id_rsa Username:docker}
	I0328 00:01:27.609844 1092156 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0328 00:01:27.665254 1092156 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0328 00:01:27.720624 1092156 main.go:141] libmachine: Stopping "ha-377576-m04"...
	I0328 00:01:27.720667 1092156 main.go:141] libmachine: (ha-377576-m04) Calling .GetState
	I0328 00:01:27.722529 1092156 main.go:141] libmachine: (ha-377576-m04) Calling .Stop
	I0328 00:01:27.726665 1092156 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 0/120
	I0328 00:01:29.023620 1092156 main.go:141] libmachine: (ha-377576-m04) Calling .GetState
	I0328 00:01:29.025152 1092156 main.go:141] libmachine: Machine "ha-377576-m04" was stopped.
	I0328 00:01:29.025177 1092156 stop.go:75] duration metric: took 1.506774913s to stop
	I0328 00:01:29.025204 1092156 stop.go:39] StopHost: ha-377576-m03
	I0328 00:01:29.025662 1092156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:01:29.025722 1092156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:01:29.041403 1092156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45433
	I0328 00:01:29.042018 1092156 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:01:29.042628 1092156 main.go:141] libmachine: Using API Version  1
	I0328 00:01:29.042654 1092156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:01:29.043024 1092156 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:01:29.045379 1092156 out.go:177] * Stopping node "ha-377576-m03"  ...
	I0328 00:01:29.046752 1092156 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0328 00:01:29.046777 1092156 main.go:141] libmachine: (ha-377576-m03) Calling .DriverName
	I0328 00:01:29.047009 1092156 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0328 00:01:29.047034 1092156 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHHostname
	I0328 00:01:29.050508 1092156 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:01:29.050997 1092156 main.go:141] libmachine: (ha-377576-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:c1:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:55:52 +0000 UTC Type:0 Mac:52:54:00:f5:c1:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-377576-m03 Clientid:01:52:54:00:f5:c1:99}
	I0328 00:01:29.051018 1092156 main.go:141] libmachine: (ha-377576-m03) DBG | domain ha-377576-m03 has defined IP address 192.168.39.101 and MAC address 52:54:00:f5:c1:99 in network mk-ha-377576
	I0328 00:01:29.051261 1092156 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHPort
	I0328 00:01:29.051435 1092156 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHKeyPath
	I0328 00:01:29.051656 1092156 main.go:141] libmachine: (ha-377576-m03) Calling .GetSSHUsername
	I0328 00:01:29.051822 1092156 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m03/id_rsa Username:docker}
	I0328 00:01:29.142612 1092156 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0328 00:01:29.198669 1092156 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0328 00:01:29.254733 1092156 main.go:141] libmachine: Stopping "ha-377576-m03"...
	I0328 00:01:29.254784 1092156 main.go:141] libmachine: (ha-377576-m03) Calling .GetState
	I0328 00:01:29.256690 1092156 main.go:141] libmachine: (ha-377576-m03) Calling .Stop
	I0328 00:01:29.260846 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 0/120
	I0328 00:01:30.262407 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 1/120
	I0328 00:01:31.264733 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 2/120
	I0328 00:01:32.266206 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 3/120
	I0328 00:01:33.267685 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 4/120
	I0328 00:01:34.269821 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 5/120
	I0328 00:01:35.271485 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 6/120
	I0328 00:01:36.273634 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 7/120
	I0328 00:01:37.275379 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 8/120
	I0328 00:01:38.277173 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 9/120
	I0328 00:01:39.279224 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 10/120
	I0328 00:01:40.281055 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 11/120
	I0328 00:01:41.282991 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 12/120
	I0328 00:01:42.284643 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 13/120
	I0328 00:01:43.286534 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 14/120
	I0328 00:01:44.289261 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 15/120
	I0328 00:01:45.290829 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 16/120
	I0328 00:01:46.292621 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 17/120
	I0328 00:01:47.294604 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 18/120
	I0328 00:01:48.296225 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 19/120
	I0328 00:01:49.297693 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 20/120
	I0328 00:01:50.299374 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 21/120
	I0328 00:01:51.300853 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 22/120
	I0328 00:01:52.302534 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 23/120
	I0328 00:01:53.304119 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 24/120
	I0328 00:01:54.306126 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 25/120
	I0328 00:01:55.307779 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 26/120
	I0328 00:01:56.309263 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 27/120
	I0328 00:01:57.310835 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 28/120
	I0328 00:01:58.312256 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 29/120
	I0328 00:01:59.314365 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 30/120
	I0328 00:02:00.316082 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 31/120
	I0328 00:02:01.318664 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 32/120
	I0328 00:02:02.321124 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 33/120
	I0328 00:02:03.322639 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 34/120
	I0328 00:02:04.324650 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 35/120
	I0328 00:02:05.326158 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 36/120
	I0328 00:02:06.327943 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 37/120
	I0328 00:02:07.329442 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 38/120
	I0328 00:02:08.330958 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 39/120
	I0328 00:02:09.333337 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 40/120
	I0328 00:02:10.335002 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 41/120
	I0328 00:02:11.336524 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 42/120
	I0328 00:02:12.337895 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 43/120
	I0328 00:02:13.339344 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 44/120
	I0328 00:02:14.340897 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 45/120
	I0328 00:02:15.342806 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 46/120
	I0328 00:02:16.344840 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 47/120
	I0328 00:02:17.346379 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 48/120
	I0328 00:02:18.347805 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 49/120
	I0328 00:02:19.349777 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 50/120
	I0328 00:02:20.351351 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 51/120
	I0328 00:02:21.352785 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 52/120
	I0328 00:02:22.354990 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 53/120
	I0328 00:02:23.356457 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 54/120
	I0328 00:02:24.358603 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 55/120
	I0328 00:02:25.359874 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 56/120
	I0328 00:02:26.361460 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 57/120
	I0328 00:02:27.363227 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 58/120
	I0328 00:02:28.364921 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 59/120
	I0328 00:02:29.366862 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 60/120
	I0328 00:02:30.368412 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 61/120
	I0328 00:02:31.369949 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 62/120
	I0328 00:02:32.371394 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 63/120
	I0328 00:02:33.373812 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 64/120
	I0328 00:02:34.375792 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 65/120
	I0328 00:02:35.377128 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 66/120
	I0328 00:02:36.378777 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 67/120
	I0328 00:02:37.380716 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 68/120
	I0328 00:02:38.382330 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 69/120
	I0328 00:02:39.384068 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 70/120
	I0328 00:02:40.385310 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 71/120
	I0328 00:02:41.386908 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 72/120
	I0328 00:02:42.388256 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 73/120
	I0328 00:02:43.389665 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 74/120
	I0328 00:02:44.391427 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 75/120
	I0328 00:02:45.392948 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 76/120
	I0328 00:02:46.394278 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 77/120
	I0328 00:02:47.395633 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 78/120
	I0328 00:02:48.397008 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 79/120
	I0328 00:02:49.399212 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 80/120
	I0328 00:02:50.400815 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 81/120
	I0328 00:02:51.402416 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 82/120
	I0328 00:02:52.403944 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 83/120
	I0328 00:02:53.405496 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 84/120
	I0328 00:02:54.407308 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 85/120
	I0328 00:02:55.408716 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 86/120
	I0328 00:02:56.410095 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 87/120
	I0328 00:02:57.411509 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 88/120
	I0328 00:02:58.412874 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 89/120
	I0328 00:02:59.414816 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 90/120
	I0328 00:03:00.416307 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 91/120
	I0328 00:03:01.418411 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 92/120
	I0328 00:03:02.419811 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 93/120
	I0328 00:03:03.421323 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 94/120
	I0328 00:03:04.422798 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 95/120
	I0328 00:03:05.424317 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 96/120
	I0328 00:03:06.426127 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 97/120
	I0328 00:03:07.427715 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 98/120
	I0328 00:03:08.429086 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 99/120
	I0328 00:03:09.431071 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 100/120
	I0328 00:03:10.432648 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 101/120
	I0328 00:03:11.434131 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 102/120
	I0328 00:03:12.435631 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 103/120
	I0328 00:03:13.437287 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 104/120
	I0328 00:03:14.439449 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 105/120
	I0328 00:03:15.441203 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 106/120
	I0328 00:03:16.442692 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 107/120
	I0328 00:03:17.444337 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 108/120
	I0328 00:03:18.445718 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 109/120
	I0328 00:03:19.447775 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 110/120
	I0328 00:03:20.449219 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 111/120
	I0328 00:03:21.450983 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 112/120
	I0328 00:03:22.452462 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 113/120
	I0328 00:03:23.454271 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 114/120
	I0328 00:03:24.456408 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 115/120
	I0328 00:03:25.458035 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 116/120
	I0328 00:03:26.460599 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 117/120
	I0328 00:03:27.462279 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 118/120
	I0328 00:03:28.463891 1092156 main.go:141] libmachine: (ha-377576-m03) Waiting for machine to stop 119/120
	I0328 00:03:29.464872 1092156 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0328 00:03:29.464983 1092156 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0328 00:03:29.467197 1092156 out.go:177] 
	W0328 00:03:29.468873 1092156 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0328 00:03:29.468895 1092156 out.go:239] * 
	* 
	W0328 00:03:29.474149 1092156 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 00:03:29.475587 1092156 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-377576 -v=7 --alsologtostderr" : exit status 82
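The stop exits with status 82 (GUEST_STOP_TIMEOUT) because libmachine polled the m03 guest for roughly two minutes (the "Waiting for machine to stop 0/120" ... "119/120" loop above) without it ever leaving the Running state. A minimal sketch of how the stuck KVM domain could be inspected and force-stopped from the host, assuming virsh is available there and the libvirt domain name matches the node name in the log; this is a manual workaround, not part of the test harness:

	# list libvirt domains and confirm ha-377576-m03 is still running
	virsh list --all
	# ask the guest to shut down cleanly
	virsh shutdown ha-377576-m03
	# if it is still listed as running after a while, force it off
	virsh destroy ha-377576-m03
	# then retry the stop that timed out
	out/minikube-linux-amd64 stop -p ha-377576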
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-377576 --wait=true -v=7 --alsologtostderr
E0328 00:06:14.356127 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
E0328 00:06:21.207700 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-377576 --wait=true -v=7 --alsologtostderr: (4m32.330193749s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-377576
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-377576 -n ha-377576
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-377576 logs -n 25: (2.276109338s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| cp      | ha-377576 cp ha-377576-m03:/home/docker/cp-test.txt                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:57 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m02:/home/docker/cp-test_ha-377576-m03_ha-377576-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n ha-377576-m02 sudo cat                                          | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | /home/docker/cp-test_ha-377576-m03_ha-377576-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-377576 cp ha-377576-m03:/home/docker/cp-test.txt                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m04:/home/docker/cp-test_ha-377576-m03_ha-377576-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n ha-377576-m04 sudo cat                                          | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | /home/docker/cp-test_ha-377576-m03_ha-377576-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-377576 cp testdata/cp-test.txt                                                | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-377576 cp ha-377576-m04:/home/docker/cp-test.txt                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3418864072/001/cp-test_ha-377576-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-377576 cp ha-377576-m04:/home/docker/cp-test.txt                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576:/home/docker/cp-test_ha-377576-m04_ha-377576.txt                       |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n ha-377576 sudo cat                                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | /home/docker/cp-test_ha-377576-m04_ha-377576.txt                                 |           |         |                |                     |                     |
	| cp      | ha-377576 cp ha-377576-m04:/home/docker/cp-test.txt                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m02:/home/docker/cp-test_ha-377576-m04_ha-377576-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n ha-377576-m02 sudo cat                                          | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | /home/docker/cp-test_ha-377576-m04_ha-377576-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-377576 cp ha-377576-m04:/home/docker/cp-test.txt                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m03:/home/docker/cp-test_ha-377576-m04_ha-377576-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n ha-377576-m03 sudo cat                                          | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | /home/docker/cp-test_ha-377576-m04_ha-377576-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-377576 node stop m02 -v=7                                                     | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | ha-377576 node start m02 -v=7                                                    | ha-377576 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:00 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-377576 -v=7                                                           | ha-377576 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| stop    | -p ha-377576 -v=7                                                                | ha-377576 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| start   | -p ha-377576 --wait=true -v=7                                                    | ha-377576 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:03 UTC | 28 Mar 24 00:08 UTC |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-377576                                                                | ha-377576 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:08 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 00:03:29
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 00:03:29.540124 1092522 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:03:29.540407 1092522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:03:29.540418 1092522 out.go:304] Setting ErrFile to fd 2...
	I0328 00:03:29.540423 1092522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:03:29.540622 1092522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 00:03:29.541195 1092522 out.go:298] Setting JSON to false
	I0328 00:03:29.542305 1092522 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":27907,"bootTime":1711556303,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0328 00:03:29.542377 1092522 start.go:139] virtualization: kvm guest
	I0328 00:03:29.544850 1092522 out.go:177] * [ha-377576] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0328 00:03:29.546258 1092522 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 00:03:29.546312 1092522 notify.go:220] Checking for updates...
	I0328 00:03:29.547676 1092522 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 00:03:29.549267 1092522 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 00:03:29.550538 1092522 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	I0328 00:03:29.551754 1092522 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0328 00:03:29.552995 1092522 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 00:03:29.554672 1092522 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:03:29.554776 1092522 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 00:03:29.555211 1092522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:03:29.555263 1092522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:03:29.571350 1092522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42217
	I0328 00:03:29.571943 1092522 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:03:29.572571 1092522 main.go:141] libmachine: Using API Version  1
	I0328 00:03:29.572605 1092522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:03:29.573054 1092522 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:03:29.573280 1092522 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0328 00:03:29.611251 1092522 out.go:177] * Using the kvm2 driver based on existing profile
	I0328 00:03:29.612724 1092522 start.go:297] selected driver: kvm2
	I0328 00:03:29.612744 1092522 start.go:901] validating driver "kvm2" against &{Name:ha-377576 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.29.3 ClusterName:ha-377576 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.93 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:03:29.612865 1092522 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 00:03:29.613348 1092522 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 00:03:29.613473 1092522 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18485-1069254/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0328 00:03:29.629496 1092522 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0328 00:03:29.630732 1092522 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 00:03:29.630808 1092522 cni.go:84] Creating CNI manager for ""
	I0328 00:03:29.630820 1092522 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0328 00:03:29.630893 1092522 start.go:340] cluster config:
	{Name:ha-377576 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-377576 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.93 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:03:29.631069 1092522 iso.go:125] acquiring lock: {Name:mk3da1fa7d63a581c817f327d86a827697458cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 00:03:29.632958 1092522 out.go:177] * Starting "ha-377576" primary control-plane node in "ha-377576" cluster
	I0328 00:03:29.634055 1092522 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 00:03:29.634103 1092522 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0328 00:03:29.634119 1092522 cache.go:56] Caching tarball of preloaded images
	I0328 00:03:29.634264 1092522 preload.go:173] Found /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0328 00:03:29.634281 1092522 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0328 00:03:29.634481 1092522 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/config.json ...
	I0328 00:03:29.634741 1092522 start.go:360] acquireMachinesLock for ha-377576: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 00:03:29.634805 1092522 start.go:364] duration metric: took 36.089µs to acquireMachinesLock for "ha-377576"
	I0328 00:03:29.634825 1092522 start.go:96] Skipping create...Using existing machine configuration
	I0328 00:03:29.634866 1092522 fix.go:54] fixHost starting: 
	I0328 00:03:29.635311 1092522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:03:29.635367 1092522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:03:29.649991 1092522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37761
	I0328 00:03:29.650531 1092522 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:03:29.651161 1092522 main.go:141] libmachine: Using API Version  1
	I0328 00:03:29.651188 1092522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:03:29.651623 1092522 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:03:29.651835 1092522 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0328 00:03:29.652028 1092522 main.go:141] libmachine: (ha-377576) Calling .GetState
	I0328 00:03:29.653705 1092522 fix.go:112] recreateIfNeeded on ha-377576: state=Running err=<nil>
	W0328 00:03:29.653741 1092522 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 00:03:29.655744 1092522 out.go:177] * Updating the running kvm2 "ha-377576" VM ...
	I0328 00:03:29.657234 1092522 machine.go:94] provisionDockerMachine start ...
	I0328 00:03:29.657260 1092522 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0328 00:03:29.657497 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:03:29.660426 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:03:29.661011 1092522 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:03:29.661042 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:03:29.661194 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0328 00:03:29.661407 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:03:29.661585 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:03:29.661767 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0328 00:03:29.661959 1092522 main.go:141] libmachine: Using SSH client type: native
	I0328 00:03:29.662147 1092522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0328 00:03:29.662159 1092522 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 00:03:29.767912 1092522 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-377576
	
	I0328 00:03:29.767947 1092522 main.go:141] libmachine: (ha-377576) Calling .GetMachineName
	I0328 00:03:29.768203 1092522 buildroot.go:166] provisioning hostname "ha-377576"
	I0328 00:03:29.768240 1092522 main.go:141] libmachine: (ha-377576) Calling .GetMachineName
	I0328 00:03:29.768460 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:03:29.771493 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:03:29.771896 1092522 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:03:29.771928 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:03:29.772097 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0328 00:03:29.772348 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:03:29.772535 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:03:29.772716 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0328 00:03:29.772849 1092522 main.go:141] libmachine: Using SSH client type: native
	I0328 00:03:29.773063 1092522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0328 00:03:29.773092 1092522 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-377576 && echo "ha-377576" | sudo tee /etc/hostname
	I0328 00:03:29.896811 1092522 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-377576
	
	I0328 00:03:29.896842 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:03:29.900106 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:03:29.900519 1092522 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:03:29.900554 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:03:29.900749 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0328 00:03:29.900988 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:03:29.901189 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:03:29.901367 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0328 00:03:29.901557 1092522 main.go:141] libmachine: Using SSH client type: native
	I0328 00:03:29.901726 1092522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0328 00:03:29.901742 1092522 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-377576' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-377576/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-377576' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 00:03:30.007248 1092522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 00:03:30.007296 1092522 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 00:03:30.007324 1092522 buildroot.go:174] setting up certificates
	I0328 00:03:30.007340 1092522 provision.go:84] configureAuth start
	I0328 00:03:30.007358 1092522 main.go:141] libmachine: (ha-377576) Calling .GetMachineName
	I0328 00:03:30.007653 1092522 main.go:141] libmachine: (ha-377576) Calling .GetIP
	I0328 00:03:30.010626 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:03:30.011115 1092522 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:03:30.011147 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:03:30.011322 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:03:30.013700 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:03:30.014155 1092522 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:03:30.014184 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:03:30.014413 1092522 provision.go:143] copyHostCerts
	I0328 00:03:30.014450 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 00:03:30.014494 1092522 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 00:03:30.014504 1092522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 00:03:30.014581 1092522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 00:03:30.014711 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 00:03:30.014733 1092522 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 00:03:30.014740 1092522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 00:03:30.014766 1092522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 00:03:30.014809 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 00:03:30.014826 1092522 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 00:03:30.014832 1092522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 00:03:30.014851 1092522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 00:03:30.014896 1092522 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.ha-377576 san=[127.0.0.1 192.168.39.47 ha-377576 localhost minikube]
	I0328 00:03:30.299041 1092522 provision.go:177] copyRemoteCerts
	I0328 00:03:30.299122 1092522 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 00:03:30.299214 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:03:30.302018 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:03:30.302379 1092522 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:03:30.302417 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:03:30.302645 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0328 00:03:30.302879 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:03:30.303022 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0328 00:03:30.303159 1092522 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0328 00:03:30.389123 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0328 00:03:30.389203 1092522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0328 00:03:30.420003 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0328 00:03:30.420080 1092522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 00:03:30.447779 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0328 00:03:30.447867 1092522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0328 00:03:30.476299 1092522 provision.go:87] duration metric: took 468.938476ms to configureAuth
	I0328 00:03:30.476332 1092522 buildroot.go:189] setting minikube options for container-runtime
	I0328 00:03:30.476641 1092522 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:03:30.476742 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:03:30.479515 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:03:30.479929 1092522 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:03:30.479954 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:03:30.480139 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0328 00:03:30.480352 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:03:30.480560 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:03:30.480719 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0328 00:03:30.480890 1092522 main.go:141] libmachine: Using SSH client type: native
	I0328 00:03:30.481052 1092522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0328 00:03:30.481066 1092522 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 00:05:01.308940 1092522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 00:05:01.308990 1092522 machine.go:97] duration metric: took 1m31.651733633s to provisionDockerMachine
	I0328 00:05:01.309005 1092522 start.go:293] postStartSetup for "ha-377576" (driver="kvm2")
	I0328 00:05:01.309018 1092522 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 00:05:01.309038 1092522 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0328 00:05:01.309445 1092522 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 00:05:01.309488 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:05:01.312671 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:05:01.313091 1092522 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:05:01.313118 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:05:01.313315 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0328 00:05:01.313571 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:05:01.313758 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0328 00:05:01.313905 1092522 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0328 00:05:01.399853 1092522 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 00:05:01.404548 1092522 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 00:05:01.404587 1092522 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 00:05:01.404679 1092522 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 00:05:01.404764 1092522 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 00:05:01.404777 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> /etc/ssl/certs/10765222.pem
	I0328 00:05:01.404856 1092522 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 00:05:01.415592 1092522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 00:05:01.441491 1092522 start.go:296] duration metric: took 132.470927ms for postStartSetup
	I0328 00:05:01.441543 1092522 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0328 00:05:01.441893 1092522 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0328 00:05:01.441919 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:05:01.444822 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:05:01.445155 1092522 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:05:01.445182 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:05:01.445365 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0328 00:05:01.445590 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:05:01.445768 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0328 00:05:01.445957 1092522 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	W0328 00:05:01.525601 1092522 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0328 00:05:01.525631 1092522 fix.go:56] duration metric: took 1m31.890767476s for fixHost
	I0328 00:05:01.525656 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:05:01.528474 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:05:01.529013 1092522 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:05:01.529042 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:05:01.529223 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0328 00:05:01.529480 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:05:01.529692 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:05:01.529831 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0328 00:05:01.530081 1092522 main.go:141] libmachine: Using SSH client type: native
	I0328 00:05:01.530345 1092522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0328 00:05:01.530361 1092522 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 00:05:01.631506 1092522 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711584301.600399857
	
	I0328 00:05:01.631541 1092522 fix.go:216] guest clock: 1711584301.600399857
	I0328 00:05:01.631550 1092522 fix.go:229] Guest: 2024-03-28 00:05:01.600399857 +0000 UTC Remote: 2024-03-28 00:05:01.52563955 +0000 UTC m=+92.037580048 (delta=74.760307ms)
	I0328 00:05:01.631571 1092522 fix.go:200] guest clock delta is within tolerance: 74.760307ms
	I0328 00:05:01.631577 1092522 start.go:83] releasing machines lock for "ha-377576", held for 1m31.996760278s
	I0328 00:05:01.631596 1092522 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0328 00:05:01.631879 1092522 main.go:141] libmachine: (ha-377576) Calling .GetIP
	I0328 00:05:01.634584 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:05:01.634936 1092522 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:05:01.634981 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:05:01.635132 1092522 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0328 00:05:01.635765 1092522 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0328 00:05:01.635948 1092522 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0328 00:05:01.636028 1092522 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 00:05:01.636087 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:05:01.636210 1092522 ssh_runner.go:195] Run: cat /version.json
	I0328 00:05:01.636240 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:05:01.639083 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:05:01.639282 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:05:01.639540 1092522 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:05:01.639570 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:05:01.639688 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0328 00:05:01.639759 1092522 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:05:01.639788 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:05:01.639882 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:05:01.639971 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0328 00:05:01.640049 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0328 00:05:01.640119 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:05:01.640181 1092522 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0328 00:05:01.640285 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0328 00:05:01.640432 1092522 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0328 00:05:01.745883 1092522 ssh_runner.go:195] Run: systemctl --version
	I0328 00:05:01.752699 1092522 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 00:05:01.921338 1092522 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 00:05:01.931719 1092522 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 00:05:01.931798 1092522 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 00:05:01.942440 1092522 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0328 00:05:01.942475 1092522 start.go:494] detecting cgroup driver to use...
	I0328 00:05:01.942575 1092522 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 00:05:01.959925 1092522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 00:05:01.974713 1092522 docker.go:217] disabling cri-docker service (if available) ...
	I0328 00:05:01.974787 1092522 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 00:05:01.989216 1092522 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 00:05:02.003588 1092522 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 00:05:02.150998 1092522 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 00:05:02.304022 1092522 docker.go:233] disabling docker service ...
	I0328 00:05:02.304116 1092522 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 00:05:02.322375 1092522 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 00:05:02.336513 1092522 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 00:05:02.491935 1092522 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 00:05:02.643787 1092522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 00:05:02.660880 1092522 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 00:05:02.684057 1092522 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 00:05:02.684141 1092522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:05:02.695423 1092522 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 00:05:02.695510 1092522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:05:02.706655 1092522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:05:02.718648 1092522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:05:02.729718 1092522 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 00:05:02.742748 1092522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:05:02.755294 1092522 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:05:02.769343 1092522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:05:02.781233 1092522 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 00:05:02.791796 1092522 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 00:05:02.801701 1092522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:05:02.954304 1092522 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 00:05:03.266296 1092522 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 00:05:03.266376 1092522 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 00:05:03.272620 1092522 start.go:562] Will wait 60s for crictl version
	I0328 00:05:03.272702 1092522 ssh_runner.go:195] Run: which crictl
	I0328 00:05:03.277046 1092522 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 00:05:03.323295 1092522 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 00:05:03.323376 1092522 ssh_runner.go:195] Run: crio --version
	I0328 00:05:03.355016 1092522 ssh_runner.go:195] Run: crio --version
	I0328 00:05:03.387296 1092522 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0328 00:05:03.388556 1092522 main.go:141] libmachine: (ha-377576) Calling .GetIP
	I0328 00:05:03.391204 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:05:03.391541 1092522 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:05:03.391567 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:05:03.391858 1092522 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0328 00:05:03.397332 1092522 kubeadm.go:877] updating cluster {Name:ha-377576 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:ha-377576 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.93 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 00:05:03.397492 1092522 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 00:05:03.397537 1092522 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 00:05:03.440642 1092522 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 00:05:03.440668 1092522 crio.go:433] Images already preloaded, skipping extraction
	I0328 00:05:03.440722 1092522 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 00:05:03.480578 1092522 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 00:05:03.480612 1092522 cache_images.go:84] Images are preloaded, skipping loading
	I0328 00:05:03.480623 1092522 kubeadm.go:928] updating node { 192.168.39.47 8443 v1.29.3 crio true true} ...
	I0328 00:05:03.480741 1092522 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-377576 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-377576 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 00:05:03.480827 1092522 ssh_runner.go:195] Run: crio config
	I0328 00:05:03.552853 1092522 cni.go:84] Creating CNI manager for ""
	I0328 00:05:03.552878 1092522 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0328 00:05:03.552887 1092522 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 00:05:03.552910 1092522 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.47 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-377576 NodeName:ha-377576 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 00:05:03.553113 1092522 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.47
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-377576"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.47
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.47"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
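	The block above is the complete kubeadm configuration minikube renders for this control plane (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a hedged sketch only, and not a step taken in this log, a rendered file like this can be checked with kubeadm's built-in validator before use; the binary path and file name below are assumptions pieced together from later lines in this log:
	  # Sketch, not from this run: validate the generated config with the bundled kubeadm binary.
	  sudo /var/lib/minikube/binaries/v1.29.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new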
	
	I0328 00:05:03.553139 1092522 kube-vip.go:111] generating kube-vip config ...
	I0328 00:05:03.553196 1092522 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0328 00:05:03.624367 1092522 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0328 00:05:03.624496 1092522 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
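	The manifest above is a static pod: kubelet runs anything placed in its staticPodPath (/etc/kubernetes/manifests per the KubeletConfiguration earlier) without going through the API server, which is what lets kube-vip bring up the 192.168.39.254 control-plane VIP before the apiserver is reachable. The scp a few lines below writes the manifest there; a minimal sketch of the same idea, with an assumed local file name:
	  # Sketch: dropping a manifest into the static pod path is enough for kubelet to start it.
	  sudo cp kube-vip.yaml /etc/kubernetes/manifests/kube-vip.yaml
	  sudo crictl pods --name kube-vip   # the pod sandbox appears without any API-server call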
	I0328 00:05:03.624566 1092522 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 00:05:03.644586 1092522 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 00:05:03.644669 1092522 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0328 00:05:03.675726 1092522 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0328 00:05:03.707948 1092522 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 00:05:03.760121 1092522 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0328 00:05:03.800002 1092522 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0328 00:05:03.844647 1092522 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0328 00:05:03.850773 1092522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:05:04.097656 1092522 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 00:05:04.142285 1092522 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576 for IP: 192.168.39.47
	I0328 00:05:04.142325 1092522 certs.go:194] generating shared ca certs ...
	I0328 00:05:04.142348 1092522 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:05:04.142607 1092522 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 00:05:04.142659 1092522 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 00:05:04.142671 1092522 certs.go:256] generating profile certs ...
	I0328 00:05:04.142749 1092522 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/client.key
	I0328 00:05:04.142785 1092522 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.d6fe7f95
	I0328 00:05:04.142809 1092522 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.d6fe7f95 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.47 192.168.39.117 192.168.39.101 192.168.39.254]
	I0328 00:05:04.273379 1092522 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.d6fe7f95 ...
	I0328 00:05:04.273417 1092522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.d6fe7f95: {Name:mkf04883c4cf2d81860f4e10e8346d686986085a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:05:04.273613 1092522 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.d6fe7f95 ...
	I0328 00:05:04.273632 1092522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.d6fe7f95: {Name:mkf90c22de3adc8e09b81aa5db0c365e0f956b11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:05:04.273700 1092522 certs.go:381] copying /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.d6fe7f95 -> /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt
	I0328 00:05:04.273841 1092522 certs.go:385] copying /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.d6fe7f95 -> /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key
	I0328 00:05:04.273970 1092522 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.key
	I0328 00:05:04.273989 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0328 00:05:04.274000 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0328 00:05:04.274013 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0328 00:05:04.274025 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0328 00:05:04.274038 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0328 00:05:04.274048 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0328 00:05:04.274057 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0328 00:05:04.274067 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0328 00:05:04.274117 1092522 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 00:05:04.274150 1092522 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 00:05:04.274159 1092522 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 00:05:04.274179 1092522 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 00:05:04.274201 1092522 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 00:05:04.274222 1092522 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 00:05:04.274272 1092522 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 00:05:04.274298 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:05:04.274319 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem -> /usr/share/ca-certificates/1076522.pem
	I0328 00:05:04.274332 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> /usr/share/ca-certificates/10765222.pem
	I0328 00:05:04.275055 1092522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 00:05:04.303647 1092522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 00:05:04.329152 1092522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 00:05:04.354122 1092522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 00:05:04.379663 1092522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0328 00:05:04.406009 1092522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 00:05:04.430786 1092522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 00:05:04.456007 1092522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 00:05:04.483635 1092522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 00:05:04.509694 1092522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 00:05:04.534853 1092522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 00:05:04.575696 1092522 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 00:05:04.593914 1092522 ssh_runner.go:195] Run: openssl version
	I0328 00:05:04.600325 1092522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 00:05:04.612381 1092522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:05:04.617241 1092522 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:05:04.617311 1092522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:05:04.623256 1092522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 00:05:04.634251 1092522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 00:05:04.647505 1092522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 00:05:04.652611 1092522 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 00:05:04.652690 1092522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 00:05:04.659140 1092522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 00:05:04.671609 1092522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 00:05:04.684796 1092522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 00:05:04.690021 1092522 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 00:05:04.690135 1092522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 00:05:04.696724 1092522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
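	The three ln -fs runs above follow the c_rehash convention: openssl x509 -hash prints the certificate's subject-name hash (b5213941, 51391683, 3ec20f2e in this run), and OpenSSL looks up trusted CAs under /etc/ssl/certs/<hash>.0. A minimal sketch of the same pattern for one certificate, using paths taken from this log:
	  # Sketch: link a CA into the OpenSSL trust directory under its subject hash.
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"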
	I0328 00:05:04.708828 1092522 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 00:05:04.714086 1092522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 00:05:04.720794 1092522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 00:05:04.727243 1092522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 00:05:04.733870 1092522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 00:05:04.741195 1092522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 00:05:04.747571 1092522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
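	Each openssl x509 -checkend 86400 call above exits non-zero if the named certificate expires within the next 86400 seconds (24 hours), so these runs act as a cheap freshness gate before the existing control-plane certs are reused. A one-line sketch using one path from this log:
	  # Sketch: exit status 0 means the cert is still valid for at least another day.
	  openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 && echo "ok for 24h"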
	I0328 00:05:04.753642 1092522 kubeadm.go:391] StartCluster: {Name:ha-377576 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clust
erName:ha-377576 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.93 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:05:04.753790 1092522 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 00:05:04.753855 1092522 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 00:05:04.799029 1092522 cri.go:89] found id: "e280dd2cc82d6fb9730125bb5cc514d7b33be2a3a03727e5adeb43a7f1399eb8"
	I0328 00:05:04.799062 1092522 cri.go:89] found id: "0a2fd3dc48780368d474066f78db57cea4362acc353f7604458e5ad6b3d1c6bb"
	I0328 00:05:04.799067 1092522 cri.go:89] found id: "153e7ff305a05cd7c6257c6dd77ef4d3cf09a9a1759ca7bbcd492a381e5fff6e"
	I0328 00:05:04.799071 1092522 cri.go:89] found id: "c1d4cd43b2dc79a102752a811349247b046bc6478161c113c0d2b9a9741e4aab"
	I0328 00:05:04.799074 1092522 cri.go:89] found id: "3bc1caf41cc2a4eece146f29899d95e195dd1cdeea37643ae3d3b2804d15af7e"
	I0328 00:05:04.799077 1092522 cri.go:89] found id: "1285bba92deaf6fc58b611f235178ae99f08f9474c30ca6b904d51aa1da9f40f"
	I0328 00:05:04.799080 1092522 cri.go:89] found id: "42dcabde2aec964660ef004661b1aca7c5fb8ef5bed0007775f67b975b44adfa"
	I0328 00:05:04.799082 1092522 cri.go:89] found id: "1d5198968b76968a091c33806ec424495c03395675b20e7eb35e330d16217211"
	I0328 00:05:04.799084 1092522 cri.go:89] found id: "ed9a38e9f6cd98054d1e672c652e36e6262ceb6eb2a3e14911a5178d222135e7"
	I0328 00:05:04.799091 1092522 cri.go:89] found id: "381348b1458cea236fc315e0a9a42d269c69969b162efaa25de894ac4284ba88"
	I0328 00:05:04.799094 1092522 cri.go:89] found id: "a226f01452a72cf6e9a608450715a12f6663ccf2697e8259f809e966809978ce"
	I0328 00:05:04.799098 1092522 cri.go:89] found id: "f28af42c6db4a0efe0547d4442478084f4054bb5d4d47038a8a7f727ec1044df"
	I0328 00:05:04.799103 1092522 cri.go:89] found id: "22d460b8d6582d93d5633e1e1af46683647a5632f6b9153f61a6c374dca4f34c"
	I0328 00:05:04.799107 1092522 cri.go:89] found id: "a0128cd878ebdae2c6e217413183a103417714ffe00d55c1c92d1353e38238fa"
	I0328 00:05:04.799113 1092522 cri.go:89] found id: "5f113e7564c47f0e2aa57dbb0702acbf86b1d75d00e9210d72d606a1b0505e5b"
	I0328 00:05:04.799117 1092522 cri.go:89] found id: "afbf14c176818ba796eacd41f1600b5f09075c6a2b4f1edcad903530221f76ff"
	I0328 00:05:04.799121 1092522 cri.go:89] found id: ""
	I0328 00:05:04.799182 1092522 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 28 00:08:02 ha-377576 crio[3893]: time="2024-03-28 00:08:02.823209655Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:446878900fc2ff3c1baac3e23199c7573f58770442bddf2548e3ccbfa9d3b300,PodSandboxId:d6ee7152bab39abb61e4c40513bcbf1c1719037be0b22e184bd532002df38257,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711584387693883636,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zmtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e75cdc5-22da-47f2-9833-b2f4eaa9caac,},Annotations:map[string]string{io.kubernetes.container.hash: 6486d0c8,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1639d05b99a2d4c643e2a8925ba47e02dea0aecef0e22871c3d3ef765cb08394,PodSandboxId:904097cb5f15281f081d3f374e7a82a4e22c19468f949bb1d8dd5110b05fbf0c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711584384689393202,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9000645c-8323-43af-bd87-011d1574493c,},Annotations:map[string]string{io.kubernetes.container.hash: 43103468,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7618cf90b394f83cf98ba25b6a878b77ac0c7bacb6fa65a7cf7ddaae3d859976,PodSandboxId:cb101e6a739d939c7192d22a06ebbaf222b63e1b6c3d7b3a9e82ddaa67bbffc1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711584350713245285,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d18f050adf42c0d971a9903270a7b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2930b0267019902b8b9ce7cd18b907f89d409676028d92de1dc551850d78f276,PodSandboxId:997d89c34a5f31417acabe4367ffd068159f7cd6447ab1e4c7b0c7caeb3d7a93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711584348688612607,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6490cbc40210ad634becf13ac3a1705,},Annotations:map[string]string{io.kubernetes.container.hash: bc2c0f48,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5298cacdf731fecbce32ad2d9f328670947c3e2bc41da8560a41746efa183376,PodSandboxId:6eafda94672d564fe253dfd43b5bde346da3dfa6d5efeb98a979e953c11b959d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711584343983600963,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-78c89,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3272474d-5490-4c7c-9dfe-ded8488ec32f,},Annotations:map[string]string{io.kubernetes.container.hash: f4bc3217,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d9ac0997d255850bd452ddf61795477c05b16d5a1c77900748c11ef0e86ad8a,PodSandboxId:904097cb5f15281f081d3f374e7a82a4e22c19468f949bb1d8dd5110b05fbf0c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711584336688854876,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9000645c-8323-43af-bd87-011d1574493c,},Annotations:map[string]string{io.kubernetes.container.hash: 43103468,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5116a84093b94656654ec8266161d1bc13b32d0e34905ba6962ac6769dc5c6da,PodSandboxId:0b075b5717795f3c07856e1151edd79045a7fe74683a224f07536c576926d280,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711584326540796553,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d35f096ae88a36bb3ae6fa7f31554e39,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:f043217131ac4124beae3e8ebcbab4be1eebd60b6b040475b8d40b6951c8837f,PodSandboxId:657c51c3b5e7d8e9d3e371ead69b9b6e2d781ebd27d42dfdb2641e1a6e236c06,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711584310671441561,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t77p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27eff0c9-9b45-4530-aba9-1a5e0ca60802,},Annotations:map[string]string{io.kubernetes.container.hash: d0985273,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb34785
03fe928ca51e348aec8e5aa7eebe7cfdc8b7226d85976590d73be90dd,PodSandboxId:d6ee7152bab39abb61e4c40513bcbf1c1719037be0b22e184bd532002df38257,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711584310871480282,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zmtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e75cdc5-22da-47f2-9833-b2f4eaa9caac,},Annotations:map[string]string{io.kubernetes.container.hash: 6486d0c8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d05a3cee8583ee0f268ce6a367e7e700f8f87f1d9
6c985228250245d5f5a258,PodSandboxId:abd9c1ff4a2863311efb200d0e3b601a5035722036d7c765e297bc887033d393,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711584310538629450,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab33d6840338638cbdcd9ebe5fdd4d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7359b186,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3bb93ac0ec38b966779f4260f9506e31f38cc6702ce1751f61b06271e40fcb3,PodSandboxId:cb101e6a739d939
c7192d22a06ebbaf222b63e1b6c3d7b3a9e82ddaa67bbffc1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711584310528973873,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d18f050adf42c0d971a9903270a7b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e7e0f2dabc114fd527652f912485cbd4476203fe6c63338ed46931f28df715,PodSandboxId:997d89c34a
5f31417acabe4367ffd068159f7cd6447ab1e4c7b0c7caeb3d7a93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711584310410197624,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6490cbc40210ad634becf13ac3a1705,},Annotations:map[string]string{io.kubernetes.container.hash: bc2c0f48,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5f23730141a94b0d7db08dffd1dd111243d4d4c6ee706bec92a8ea3c1872258,PodSandboxId:ee57ddbaf2fe4d35bb46bd35e54a00b9
4b130707708a49beace86386b10fe913,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711584310358266833,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9eaf884653411ba1f22eb4cdbdfa748,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a2fd3dc48780368d474066f78db57cea4362acc353f7604458e5ad6b3d1c6bb,PodSandboxId:8519de872cf9776787e79d23757b0be34bc8c680161a0665bf6b6
cb54b3bf07f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711584303839080092,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-msv9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c549358-2f35-4345-aa7a-8bbbcfc4ef01,},Annotations:map[string]string{io.kubernetes.container.hash: 76b7b3af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e280dd2cc82d6fb9730125bb5cc514d7b33be2a3a03727e5adeb43a7f1399eb8,PodSandboxId:e7de52be4110de636969e1b12c92959bebb4213ef1e41dcc9a17ddae078e2f6e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711584303845663995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-47npx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968d63e4-f44a-4e52-b6c0-04e0ed1a068e,},Annotations:map[string]string{io.kubernetes.container.hash: d58e1b37,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc41f34db32bf53a65c6af8f9f9eae933fb349e5555060707ec7131ab3a7b835,PodSandboxId:d8bf33d99bda1de3168ad1efb04ca03a74a918d2ca74b585a7fcdb3a108d229e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711583818896885728,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-78c89,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3272474d-5490-4c7c-9dfe-ded8488ec32f,},Annotations:map[string]string{io.kuber
netes.container.hash: f4bc3217,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5198968b76968a091c33806ec424495c03395675b20e7eb35e330d16217211,PodSandboxId:78b0408435c31eb2c100690d28760dd80d92d11a1204f91418812c69660e79a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711583597995455547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-47npx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968d63e4-f44a-4e52-b6c0-04e0ed1a068e,},Annotations:map[string]string{io.kubernetes.container.hash: d58e1b37,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed9a38e9f6cd98054d1e672c652e36e6262ceb6eb2a3e14911a5178d222135e7,PodSandboxId:906a95ca7b9309653761922034b8a2dfca231a15f3b955bb1c166ebd569149c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711583597982972322,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-msv9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c549358-2f35-4345-aa7a-8bbbcfc4ef01,},Annotations:map[string]string{io.kubernetes.container.hash: 76b7b3af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a226f01452a72cf6e9a608450715a12f6663ccf2697e8259f809e966809978ce,PodSandboxId:3f1239e30a953ae16c917e002c235d64ba2b2b2f9165e939c99bda7578eae785,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711583595702331437,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t77p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27eff0c9-9b45-4530-aba9-1a5e0ca60802,},Annotations:map[string]string{io.kubernetes.container.hash: d0985273,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0128cd878ebdae2c6e217413183a103417714ffe00d55c1c92d1353e38238fa,PodSandboxId:bbb9d168e952fe414d641f2ec6a7e1e63a04205d4e2aae1c6ad9a2756c6443ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83
d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711583575850415387,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab33d6840338638cbdcd9ebe5fdd4d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7359b186,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afbf14c176818ba796eacd41f1600b5f09075c6a2b4f1edcad903530221f76ff,PodSandboxId:b75106f2dccc70e077dfb09e12c05d0942a02fa2c4894a1bd68ec76ca144eed7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,Create
dAt:1711583575810399508,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9eaf884653411ba1f22eb4cdbdfa748,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=66f96db3-3926-4ff6-aea7-0e52aa54f2b4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:08:02 ha-377576 crio[3893]: time="2024-03-28 00:08:02.827208024Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5795ef16-fdad-45dc-b810-91080104d456 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:08:02 ha-377576 crio[3893]: time="2024-03-28 00:08:02.827276152Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5795ef16-fdad-45dc-b810-91080104d456 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:08:02 ha-377576 crio[3893]: time="2024-03-28 00:08:02.828353158Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:446878900fc2ff3c1baac3e23199c7573f58770442bddf2548e3ccbfa9d3b300,PodSandboxId:d6ee7152bab39abb61e4c40513bcbf1c1719037be0b22e184bd532002df38257,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711584387693883636,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zmtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e75cdc5-22da-47f2-9833-b2f4eaa9caac,},Annotations:map[string]string{io.kubernetes.container.hash: 6486d0c8,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1639d05b99a2d4c643e2a8925ba47e02dea0aecef0e22871c3d3ef765cb08394,PodSandboxId:904097cb5f15281f081d3f374e7a82a4e22c19468f949bb1d8dd5110b05fbf0c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711584384689393202,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9000645c-8323-43af-bd87-011d1574493c,},Annotations:map[string]string{io.kubernetes.container.hash: 43103468,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7618cf90b394f83cf98ba25b6a878b77ac0c7bacb6fa65a7cf7ddaae3d859976,PodSandboxId:cb101e6a739d939c7192d22a06ebbaf222b63e1b6c3d7b3a9e82ddaa67bbffc1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711584350713245285,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d18f050adf42c0d971a9903270a7b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2930b0267019902b8b9ce7cd18b907f89d409676028d92de1dc551850d78f276,PodSandboxId:997d89c34a5f31417acabe4367ffd068159f7cd6447ab1e4c7b0c7caeb3d7a93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711584348688612607,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6490cbc40210ad634becf13ac3a1705,},Annotations:map[string]string{io.kubernetes.container.hash: bc2c0f48,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5298cacdf731fecbce32ad2d9f328670947c3e2bc41da8560a41746efa183376,PodSandboxId:6eafda94672d564fe253dfd43b5bde346da3dfa6d5efeb98a979e953c11b959d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711584343983600963,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-78c89,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3272474d-5490-4c7c-9dfe-ded8488ec32f,},Annotations:map[string]string{io.kubernetes.container.hash: f4bc3217,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d9ac0997d255850bd452ddf61795477c05b16d5a1c77900748c11ef0e86ad8a,PodSandboxId:904097cb5f15281f081d3f374e7a82a4e22c19468f949bb1d8dd5110b05fbf0c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711584336688854876,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9000645c-8323-43af-bd87-011d1574493c,},Annotations:map[string]string{io.kubernetes.container.hash: 43103468,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5116a84093b94656654ec8266161d1bc13b32d0e34905ba6962ac6769dc5c6da,PodSandboxId:0b075b5717795f3c07856e1151edd79045a7fe74683a224f07536c576926d280,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711584326540796553,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d35f096ae88a36bb3ae6fa7f31554e39,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:f043217131ac4124beae3e8ebcbab4be1eebd60b6b040475b8d40b6951c8837f,PodSandboxId:657c51c3b5e7d8e9d3e371ead69b9b6e2d781ebd27d42dfdb2641e1a6e236c06,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711584310671441561,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t77p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27eff0c9-9b45-4530-aba9-1a5e0ca60802,},Annotations:map[string]string{io.kubernetes.container.hash: d0985273,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb34785
03fe928ca51e348aec8e5aa7eebe7cfdc8b7226d85976590d73be90dd,PodSandboxId:d6ee7152bab39abb61e4c40513bcbf1c1719037be0b22e184bd532002df38257,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711584310871480282,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zmtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e75cdc5-22da-47f2-9833-b2f4eaa9caac,},Annotations:map[string]string{io.kubernetes.container.hash: 6486d0c8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d05a3cee8583ee0f268ce6a367e7e700f8f87f1d9
6c985228250245d5f5a258,PodSandboxId:abd9c1ff4a2863311efb200d0e3b601a5035722036d7c765e297bc887033d393,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711584310538629450,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab33d6840338638cbdcd9ebe5fdd4d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7359b186,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3bb93ac0ec38b966779f4260f9506e31f38cc6702ce1751f61b06271e40fcb3,PodSandboxId:cb101e6a739d939
c7192d22a06ebbaf222b63e1b6c3d7b3a9e82ddaa67bbffc1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711584310528973873,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d18f050adf42c0d971a9903270a7b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e7e0f2dabc114fd527652f912485cbd4476203fe6c63338ed46931f28df715,PodSandboxId:997d89c34a
5f31417acabe4367ffd068159f7cd6447ab1e4c7b0c7caeb3d7a93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711584310410197624,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6490cbc40210ad634becf13ac3a1705,},Annotations:map[string]string{io.kubernetes.container.hash: bc2c0f48,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5f23730141a94b0d7db08dffd1dd111243d4d4c6ee706bec92a8ea3c1872258,PodSandboxId:ee57ddbaf2fe4d35bb46bd35e54a00b9
4b130707708a49beace86386b10fe913,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711584310358266833,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9eaf884653411ba1f22eb4cdbdfa748,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a2fd3dc48780368d474066f78db57cea4362acc353f7604458e5ad6b3d1c6bb,PodSandboxId:8519de872cf9776787e79d23757b0be34bc8c680161a0665bf6b6
cb54b3bf07f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711584303839080092,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-msv9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c549358-2f35-4345-aa7a-8bbbcfc4ef01,},Annotations:map[string]string{io.kubernetes.container.hash: 76b7b3af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e280dd2cc82d6fb9730125bb5cc514d7b33be2a3a03727e5adeb43a7f1399eb8,PodSandboxId:e7de52be4110de636969e1b12c92959bebb4213ef1e41dcc9a17ddae078e2f6e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711584303845663995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-47npx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968d63e4-f44a-4e52-b6c0-04e0ed1a068e,},Annotations:map[string]string{io.kubernetes.container.hash: d58e1b37,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc41f34db32bf53a65c6af8f9f9eae933fb349e5555060707ec7131ab3a7b835,PodSandboxId:d8bf33d99bda1de3168ad1efb04ca03a74a918d2ca74b585a7fcdb3a108d229e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711583818896885728,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-78c89,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3272474d-5490-4c7c-9dfe-ded8488ec32f,},Annotations:map[string]string{io.kuber
netes.container.hash: f4bc3217,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5198968b76968a091c33806ec424495c03395675b20e7eb35e330d16217211,PodSandboxId:78b0408435c31eb2c100690d28760dd80d92d11a1204f91418812c69660e79a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711583597995455547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-47npx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968d63e4-f44a-4e52-b6c0-04e0ed1a068e,},Annotations:map[string]string{io.kubernetes.container.hash: d58e1b37,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed9a38e9f6cd98054d1e672c652e36e6262ceb6eb2a3e14911a5178d222135e7,PodSandboxId:906a95ca7b9309653761922034b8a2dfca231a15f3b955bb1c166ebd569149c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711583597982972322,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-msv9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c549358-2f35-4345-aa7a-8bbbcfc4ef01,},Annotations:map[string]string{io.kubernetes.container.hash: 76b7b3af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a226f01452a72cf6e9a608450715a12f6663ccf2697e8259f809e966809978ce,PodSandboxId:3f1239e30a953ae16c917e002c235d64ba2b2b2f9165e939c99bda7578eae785,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711583595702331437,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t77p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27eff0c9-9b45-4530-aba9-1a5e0ca60802,},Annotations:map[string]string{io.kubernetes.container.hash: d0985273,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0128cd878ebdae2c6e217413183a103417714ffe00d55c1c92d1353e38238fa,PodSandboxId:bbb9d168e952fe414d641f2ec6a7e1e63a04205d4e2aae1c6ad9a2756c6443ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83
d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711583575850415387,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab33d6840338638cbdcd9ebe5fdd4d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7359b186,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afbf14c176818ba796eacd41f1600b5f09075c6a2b4f1edcad903530221f76ff,PodSandboxId:b75106f2dccc70e077dfb09e12c05d0942a02fa2c4894a1bd68ec76ca144eed7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,Create
dAt:1711583575810399508,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9eaf884653411ba1f22eb4cdbdfa748,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5795ef16-fdad-45dc-b810-91080104d456 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:08:02 ha-377576 crio[3893]: time="2024-03-28 00:08:02.829779339Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f64290a7-4118-4216-b8e6-1901a119cfa3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:08:02 ha-377576 crio[3893]: time="2024-03-28 00:08:02.829892913Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f64290a7-4118-4216-b8e6-1901a119cfa3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:08:02 ha-377576 crio[3893]: time="2024-03-28 00:08:02.830454885Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:446878900fc2ff3c1baac3e23199c7573f58770442bddf2548e3ccbfa9d3b300,PodSandboxId:d6ee7152bab39abb61e4c40513bcbf1c1719037be0b22e184bd532002df38257,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711584387693883636,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zmtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e75cdc5-22da-47f2-9833-b2f4eaa9caac,},Annotations:map[string]string{io.kubernetes.container.hash: 6486d0c8,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1639d05b99a2d4c643e2a8925ba47e02dea0aecef0e22871c3d3ef765cb08394,PodSandboxId:904097cb5f15281f081d3f374e7a82a4e22c19468f949bb1d8dd5110b05fbf0c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711584384689393202,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9000645c-8323-43af-bd87-011d1574493c,},Annotations:map[string]string{io.kubernetes.container.hash: 43103468,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7618cf90b394f83cf98ba25b6a878b77ac0c7bacb6fa65a7cf7ddaae3d859976,PodSandboxId:cb101e6a739d939c7192d22a06ebbaf222b63e1b6c3d7b3a9e82ddaa67bbffc1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711584350713245285,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d18f050adf42c0d971a9903270a7b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2930b0267019902b8b9ce7cd18b907f89d409676028d92de1dc551850d78f276,PodSandboxId:997d89c34a5f31417acabe4367ffd068159f7cd6447ab1e4c7b0c7caeb3d7a93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711584348688612607,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6490cbc40210ad634becf13ac3a1705,},Annotations:map[string]string{io.kubernetes.container.hash: bc2c0f48,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5298cacdf731fecbce32ad2d9f328670947c3e2bc41da8560a41746efa183376,PodSandboxId:6eafda94672d564fe253dfd43b5bde346da3dfa6d5efeb98a979e953c11b959d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711584343983600963,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-78c89,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3272474d-5490-4c7c-9dfe-ded8488ec32f,},Annotations:map[string]string{io.kubernetes.container.hash: f4bc3217,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d9ac0997d255850bd452ddf61795477c05b16d5a1c77900748c11ef0e86ad8a,PodSandboxId:904097cb5f15281f081d3f374e7a82a4e22c19468f949bb1d8dd5110b05fbf0c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711584336688854876,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9000645c-8323-43af-bd87-011d1574493c,},Annotations:map[string]string{io.kubernetes.container.hash: 43103468,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5116a84093b94656654ec8266161d1bc13b32d0e34905ba6962ac6769dc5c6da,PodSandboxId:0b075b5717795f3c07856e1151edd79045a7fe74683a224f07536c576926d280,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711584326540796553,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d35f096ae88a36bb3ae6fa7f31554e39,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:f043217131ac4124beae3e8ebcbab4be1eebd60b6b040475b8d40b6951c8837f,PodSandboxId:657c51c3b5e7d8e9d3e371ead69b9b6e2d781ebd27d42dfdb2641e1a6e236c06,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711584310671441561,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t77p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27eff0c9-9b45-4530-aba9-1a5e0ca60802,},Annotations:map[string]string{io.kubernetes.container.hash: d0985273,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb34785
03fe928ca51e348aec8e5aa7eebe7cfdc8b7226d85976590d73be90dd,PodSandboxId:d6ee7152bab39abb61e4c40513bcbf1c1719037be0b22e184bd532002df38257,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711584310871480282,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zmtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e75cdc5-22da-47f2-9833-b2f4eaa9caac,},Annotations:map[string]string{io.kubernetes.container.hash: 6486d0c8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d05a3cee8583ee0f268ce6a367e7e700f8f87f1d9
6c985228250245d5f5a258,PodSandboxId:abd9c1ff4a2863311efb200d0e3b601a5035722036d7c765e297bc887033d393,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711584310538629450,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab33d6840338638cbdcd9ebe5fdd4d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7359b186,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3bb93ac0ec38b966779f4260f9506e31f38cc6702ce1751f61b06271e40fcb3,PodSandboxId:cb101e6a739d939
c7192d22a06ebbaf222b63e1b6c3d7b3a9e82ddaa67bbffc1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711584310528973873,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d18f050adf42c0d971a9903270a7b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e7e0f2dabc114fd527652f912485cbd4476203fe6c63338ed46931f28df715,PodSandboxId:997d89c34a
5f31417acabe4367ffd068159f7cd6447ab1e4c7b0c7caeb3d7a93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711584310410197624,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6490cbc40210ad634becf13ac3a1705,},Annotations:map[string]string{io.kubernetes.container.hash: bc2c0f48,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5f23730141a94b0d7db08dffd1dd111243d4d4c6ee706bec92a8ea3c1872258,PodSandboxId:ee57ddbaf2fe4d35bb46bd35e54a00b9
4b130707708a49beace86386b10fe913,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711584310358266833,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9eaf884653411ba1f22eb4cdbdfa748,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a2fd3dc48780368d474066f78db57cea4362acc353f7604458e5ad6b3d1c6bb,PodSandboxId:8519de872cf9776787e79d23757b0be34bc8c680161a0665bf6b6
cb54b3bf07f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711584303839080092,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-msv9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c549358-2f35-4345-aa7a-8bbbcfc4ef01,},Annotations:map[string]string{io.kubernetes.container.hash: 76b7b3af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e280dd2cc82d6fb9730125bb5cc514d7b33be2a3a03727e5adeb43a7f1399eb8,PodSandboxId:e7de52be4110de636969e1b12c92959bebb4213ef1e41dcc9a17ddae078e2f6e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711584303845663995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-47npx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968d63e4-f44a-4e52-b6c0-04e0ed1a068e,},Annotations:map[string]string{io.kubernetes.container.hash: d58e1b37,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc41f34db32bf53a65c6af8f9f9eae933fb349e5555060707ec7131ab3a7b835,PodSandboxId:d8bf33d99bda1de3168ad1efb04ca03a74a918d2ca74b585a7fcdb3a108d229e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711583818896885728,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-78c89,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3272474d-5490-4c7c-9dfe-ded8488ec32f,},Annotations:map[string]string{io.kuber
netes.container.hash: f4bc3217,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5198968b76968a091c33806ec424495c03395675b20e7eb35e330d16217211,PodSandboxId:78b0408435c31eb2c100690d28760dd80d92d11a1204f91418812c69660e79a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711583597995455547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-47npx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968d63e4-f44a-4e52-b6c0-04e0ed1a068e,},Annotations:map[string]string{io.kubernetes.container.hash: d58e1b37,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed9a38e9f6cd98054d1e672c652e36e6262ceb6eb2a3e14911a5178d222135e7,PodSandboxId:906a95ca7b9309653761922034b8a2dfca231a15f3b955bb1c166ebd569149c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711583597982972322,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-msv9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c549358-2f35-4345-aa7a-8bbbcfc4ef01,},Annotations:map[string]string{io.kubernetes.container.hash: 76b7b3af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a226f01452a72cf6e9a608450715a12f6663ccf2697e8259f809e966809978ce,PodSandboxId:3f1239e30a953ae16c917e002c235d64ba2b2b2f9165e939c99bda7578eae785,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711583595702331437,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t77p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27eff0c9-9b45-4530-aba9-1a5e0ca60802,},Annotations:map[string]string{io.kubernetes.container.hash: d0985273,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0128cd878ebdae2c6e217413183a103417714ffe00d55c1c92d1353e38238fa,PodSandboxId:bbb9d168e952fe414d641f2ec6a7e1e63a04205d4e2aae1c6ad9a2756c6443ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83
d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711583575850415387,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab33d6840338638cbdcd9ebe5fdd4d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7359b186,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afbf14c176818ba796eacd41f1600b5f09075c6a2b4f1edcad903530221f76ff,PodSandboxId:b75106f2dccc70e077dfb09e12c05d0942a02fa2c4894a1bd68ec76ca144eed7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,Create
dAt:1711583575810399508,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9eaf884653411ba1f22eb4cdbdfa748,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f64290a7-4118-4216-b8e6-1901a119cfa3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:08:02 ha-377576 crio[3893]: time="2024-03-28 00:08:02.831868899Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=7c6efc1c-ea84-460a-a37c-25be011280bb name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 28 00:08:02 ha-377576 crio[3893]: time="2024-03-28 00:08:02.832192232Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6eafda94672d564fe253dfd43b5bde346da3dfa6d5efeb98a979e953c11b959d,Metadata:&PodSandboxMetadata{Name:busybox-7fdf7869d9-78c89,Uid:3272474d-5490-4c7c-9dfe-ded8488ec32f,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1711584343840642749,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7fdf7869d9-78c89,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3272474d-5490-4c7c-9dfe-ded8488ec32f,pod-template-hash: 7fdf7869d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-27T23:56:55.769204613Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0b075b5717795f3c07856e1151edd79045a7fe74683a224f07536c576926d280,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-377576,Uid:d35f096ae88a36bb3ae6fa7f31554e39,Namespace:kube-system,Attempt:0,},State:SANDBOX
_READY,CreatedAt:1711584326421950649,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d35f096ae88a36bb3ae6fa7f31554e39,},Annotations:map[string]string{kubernetes.io/config.hash: d35f096ae88a36bb3ae6fa7f31554e39,kubernetes.io/config.seen: 2024-03-28T00:05:03.811722807Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d6ee7152bab39abb61e4c40513bcbf1c1719037be0b22e184bd532002df38257,Metadata:&PodSandboxMetadata{Name:kindnet-5zmtk,Uid:4e75cdc5-22da-47f2-9833-b2f4eaa9caac,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1711584310090122597,Labels:map[string]string{app: kindnet,controller-revision-hash: bb65b84c4,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-5zmtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e75cdc5-22da-47f2-9833-b2f4eaa9caac,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]stri
ng{kubernetes.io/config.seen: 2024-03-27T23:53:14.742963562Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:abd9c1ff4a2863311efb200d0e3b601a5035722036d7c765e297bc887033d393,Metadata:&PodSandboxMetadata{Name:etcd-ha-377576,Uid:4ab33d6840338638cbdcd9ebe5fdd4d4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1711584310084756689,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab33d6840338638cbdcd9ebe5fdd4d4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.47:2379,kubernetes.io/config.hash: 4ab33d6840338638cbdcd9ebe5fdd4d4,kubernetes.io/config.seen: 2024-03-27T23:53:02.623885547Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cb101e6a739d939c7192d22a06ebbaf222b63e1b6c3d7b3a9e82ddaa67bbffc1,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-377576,
Uid:32d18f050adf42c0d971a9903270a7b6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1711584310080358271,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d18f050adf42c0d971a9903270a7b6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 32d18f050adf42c0d971a9903270a7b6,kubernetes.io/config.seen: 2024-03-27T23:53:02.623887137Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:657c51c3b5e7d8e9d3e371ead69b9b6e2d781ebd27d42dfdb2641e1a6e236c06,Metadata:&PodSandboxMetadata{Name:kube-proxy-4t77p,Uid:27eff0c9-9b45-4530-aba9-1a5e0ca60802,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1711584310077328211,Labels:map[string]string{controller-revision-hash: 7659797656,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4t77p,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 27eff0c9-9b45-4530-aba9-1a5e0ca60802,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-27T23:53:14.702934028Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:904097cb5f15281f081d3f374e7a82a4e22c19468f949bb1d8dd5110b05fbf0c,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:9000645c-8323-43af-bd87-011d1574493c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1711584310061574691,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9000645c-8323-43af-bd87-011d1574493c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integra
tion-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-27T23:53:17.152782440Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:997d89c34a5f31417acabe4367ffd068159f7cd6447ab1e4c7b0c7caeb3d7a93,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-377576,Uid:d6490cbc40210ad634becf13ac3a1705,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1711584310054948864,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-377576,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6490cbc40210ad634becf13ac3a1705,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.47:8443,kubernetes.io/config.hash: d6490cbc40210ad634becf13ac3a1705,kubernetes.io/config.seen: 2024-03-27T23:53:02.623886445Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ee57ddbaf2fe4d35bb46bd35e54a00b94b130707708a49beace86386b10fe913,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-377576,Uid:f9eaf884653411ba1f22eb4cdbdfa748,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1711584310046541244,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9eaf884653411ba1f22eb4cdbdfa748,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f9eaf884653411ba1f22eb4cdbdfa748,kubernetes.io/con
fig.seen: 2024-03-27T23:53:02.623874418Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8519de872cf9776787e79d23757b0be34bc8c680161a0665bf6b6cb54b3bf07f,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-msv9s,Uid:7c549358-2f35-4345-aa7a-8bbbcfc4ef01,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1711584303596766588,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-msv9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c549358-2f35-4345-aa7a-8bbbcfc4ef01,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-27T23:53:17.160150224Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e7de52be4110de636969e1b12c92959bebb4213ef1e41dcc9a17ddae078e2f6e,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-47npx,Uid:968d63e4-f44a-4e52-b6c0-04e0ed1a068e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1711584303574699830,L
abels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-47npx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968d63e4-f44a-4e52-b6c0-04e0ed1a068e,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-27T23:53:17.161820690Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d8bf33d99bda1de3168ad1efb04ca03a74a918d2ca74b585a7fcdb3a108d229e,Metadata:&PodSandboxMetadata{Name:busybox-7fdf7869d9-78c89,Uid:3272474d-5490-4c7c-9dfe-ded8488ec32f,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1711583816091168834,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7fdf7869d9-78c89,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3272474d-5490-4c7c-9dfe-ded8488ec32f,pod-template-hash: 7fdf7869d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-27T23:56:55.769204613Z,kubernetes.io/config.sou
rce: api,},RuntimeHandler:,},&PodSandbox{Id:906a95ca7b9309653761922034b8a2dfca231a15f3b955bb1c166ebd569149c8,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-msv9s,Uid:7c549358-2f35-4345-aa7a-8bbbcfc4ef01,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1711583597775450695,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-msv9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c549358-2f35-4345-aa7a-8bbbcfc4ef01,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-27T23:53:17.160150224Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:78b0408435c31eb2c100690d28760dd80d92d11a1204f91418812c69660e79a8,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-47npx,Uid:968d63e4-f44a-4e52-b6c0-04e0ed1a068e,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1711583597773976171,Labels:map[string]string{io.kubernetes.container.name: POD,io
.kubernetes.pod.name: coredns-76f75df574-47npx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968d63e4-f44a-4e52-b6c0-04e0ed1a068e,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-27T23:53:17.161820690Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3f1239e30a953ae16c917e002c235d64ba2b2b2f9165e939c99bda7578eae785,Metadata:&PodSandboxMetadata{Name:kube-proxy-4t77p,Uid:27eff0c9-9b45-4530-aba9-1a5e0ca60802,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1711583595612354224,Labels:map[string]string{controller-revision-hash: 7659797656,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4t77p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27eff0c9-9b45-4530-aba9-1a5e0ca60802,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-27T23:53:14.702934028Z,kubernetes.io/config.source: api,},RuntimeHandler:,
},&PodSandbox{Id:b75106f2dccc70e077dfb09e12c05d0942a02fa2c4894a1bd68ec76ca144eed7,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-377576,Uid:f9eaf884653411ba1f22eb4cdbdfa748,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1711583575608364306,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9eaf884653411ba1f22eb4cdbdfa748,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f9eaf884653411ba1f22eb4cdbdfa748,kubernetes.io/config.seen: 2024-03-27T23:52:54.937077592Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bbb9d168e952fe414d641f2ec6a7e1e63a04205d4e2aae1c6ad9a2756c6443ee,Metadata:&PodSandboxMetadata{Name:etcd-ha-377576,Uid:4ab33d6840338638cbdcd9ebe5fdd4d4,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1711583575593728215,Labels:map[string]string{component: etcd,io.kuberne
tes.container.name: POD,io.kubernetes.pod.name: etcd-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab33d6840338638cbdcd9ebe5fdd4d4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.47:2379,kubernetes.io/config.hash: 4ab33d6840338638cbdcd9ebe5fdd4d4,kubernetes.io/config.seen: 2024-03-27T23:52:54.937079080Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=7c6efc1c-ea84-460a-a37c-25be011280bb name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 28 00:08:02 ha-377576 crio[3893]: time="2024-03-28 00:08:02.887860723Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=da7ead3e-d451-4b13-ac8e-7f39bd1dad72 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:08:02 ha-377576 crio[3893]: time="2024-03-28 00:08:02.887937314Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=da7ead3e-d451-4b13-ac8e-7f39bd1dad72 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:08:02 ha-377576 crio[3893]: time="2024-03-28 00:08:02.889919584Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b741e50d-9803-4a23-87bd-b4c6d1decae3 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:08:02 ha-377576 crio[3893]: time="2024-03-28 00:08:02.890337578Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711584482890315116,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b741e50d-9803-4a23-87bd-b4c6d1decae3 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:08:02 ha-377576 crio[3893]: time="2024-03-28 00:08:02.891067295Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=79e417c5-8af0-4378-88a0-80363ac9f77d name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:08:02 ha-377576 crio[3893]: time="2024-03-28 00:08:02.891198040Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=79e417c5-8af0-4378-88a0-80363ac9f77d name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:08:02 ha-377576 crio[3893]: time="2024-03-28 00:08:02.891987949Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:446878900fc2ff3c1baac3e23199c7573f58770442bddf2548e3ccbfa9d3b300,PodSandboxId:d6ee7152bab39abb61e4c40513bcbf1c1719037be0b22e184bd532002df38257,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711584387693883636,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zmtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e75cdc5-22da-47f2-9833-b2f4eaa9caac,},Annotations:map[string]string{io.kubernetes.container.hash: 6486d0c8,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1639d05b99a2d4c643e2a8925ba47e02dea0aecef0e22871c3d3ef765cb08394,PodSandboxId:904097cb5f15281f081d3f374e7a82a4e22c19468f949bb1d8dd5110b05fbf0c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711584384689393202,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9000645c-8323-43af-bd87-011d1574493c,},Annotations:map[string]string{io.kubernetes.container.hash: 43103468,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7618cf90b394f83cf98ba25b6a878b77ac0c7bacb6fa65a7cf7ddaae3d859976,PodSandboxId:cb101e6a739d939c7192d22a06ebbaf222b63e1b6c3d7b3a9e82ddaa67bbffc1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711584350713245285,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d18f050adf42c0d971a9903270a7b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2930b0267019902b8b9ce7cd18b907f89d409676028d92de1dc551850d78f276,PodSandboxId:997d89c34a5f31417acabe4367ffd068159f7cd6447ab1e4c7b0c7caeb3d7a93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711584348688612607,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6490cbc40210ad634becf13ac3a1705,},Annotations:map[string]string{io.kubernetes.container.hash: bc2c0f48,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5298cacdf731fecbce32ad2d9f328670947c3e2bc41da8560a41746efa183376,PodSandboxId:6eafda94672d564fe253dfd43b5bde346da3dfa6d5efeb98a979e953c11b959d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711584343983600963,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-78c89,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3272474d-5490-4c7c-9dfe-ded8488ec32f,},Annotations:map[string]string{io.kubernetes.container.hash: f4bc3217,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d9ac0997d255850bd452ddf61795477c05b16d5a1c77900748c11ef0e86ad8a,PodSandboxId:904097cb5f15281f081d3f374e7a82a4e22c19468f949bb1d8dd5110b05fbf0c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711584336688854876,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9000645c-8323-43af-bd87-011d1574493c,},Annotations:map[string]string{io.kubernetes.container.hash: 43103468,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5116a84093b94656654ec8266161d1bc13b32d0e34905ba6962ac6769dc5c6da,PodSandboxId:0b075b5717795f3c07856e1151edd79045a7fe74683a224f07536c576926d280,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711584326540796553,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d35f096ae88a36bb3ae6fa7f31554e39,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:f043217131ac4124beae3e8ebcbab4be1eebd60b6b040475b8d40b6951c8837f,PodSandboxId:657c51c3b5e7d8e9d3e371ead69b9b6e2d781ebd27d42dfdb2641e1a6e236c06,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711584310671441561,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t77p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27eff0c9-9b45-4530-aba9-1a5e0ca60802,},Annotations:map[string]string{io.kubernetes.container.hash: d0985273,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb34785
03fe928ca51e348aec8e5aa7eebe7cfdc8b7226d85976590d73be90dd,PodSandboxId:d6ee7152bab39abb61e4c40513bcbf1c1719037be0b22e184bd532002df38257,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711584310871480282,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zmtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e75cdc5-22da-47f2-9833-b2f4eaa9caac,},Annotations:map[string]string{io.kubernetes.container.hash: 6486d0c8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d05a3cee8583ee0f268ce6a367e7e700f8f87f1d9
6c985228250245d5f5a258,PodSandboxId:abd9c1ff4a2863311efb200d0e3b601a5035722036d7c765e297bc887033d393,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711584310538629450,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab33d6840338638cbdcd9ebe5fdd4d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7359b186,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3bb93ac0ec38b966779f4260f9506e31f38cc6702ce1751f61b06271e40fcb3,PodSandboxId:cb101e6a739d939
c7192d22a06ebbaf222b63e1b6c3d7b3a9e82ddaa67bbffc1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711584310528973873,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d18f050adf42c0d971a9903270a7b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e7e0f2dabc114fd527652f912485cbd4476203fe6c63338ed46931f28df715,PodSandboxId:997d89c34a
5f31417acabe4367ffd068159f7cd6447ab1e4c7b0c7caeb3d7a93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711584310410197624,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6490cbc40210ad634becf13ac3a1705,},Annotations:map[string]string{io.kubernetes.container.hash: bc2c0f48,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5f23730141a94b0d7db08dffd1dd111243d4d4c6ee706bec92a8ea3c1872258,PodSandboxId:ee57ddbaf2fe4d35bb46bd35e54a00b9
4b130707708a49beace86386b10fe913,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711584310358266833,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9eaf884653411ba1f22eb4cdbdfa748,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a2fd3dc48780368d474066f78db57cea4362acc353f7604458e5ad6b3d1c6bb,PodSandboxId:8519de872cf9776787e79d23757b0be34bc8c680161a0665bf6b6
cb54b3bf07f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711584303839080092,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-msv9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c549358-2f35-4345-aa7a-8bbbcfc4ef01,},Annotations:map[string]string{io.kubernetes.container.hash: 76b7b3af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e280dd2cc82d6fb9730125bb5cc514d7b33be2a3a03727e5adeb43a7f1399eb8,PodSandboxId:e7de52be4110de636969e1b12c92959bebb4213ef1e41dcc9a17ddae078e2f6e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711584303845663995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-47npx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968d63e4-f44a-4e52-b6c0-04e0ed1a068e,},Annotations:map[string]string{io.kubernetes.container.hash: d58e1b37,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc41f34db32bf53a65c6af8f9f9eae933fb349e5555060707ec7131ab3a7b835,PodSandboxId:d8bf33d99bda1de3168ad1efb04ca03a74a918d2ca74b585a7fcdb3a108d229e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711583818896885728,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-78c89,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3272474d-5490-4c7c-9dfe-ded8488ec32f,},Annotations:map[string]string{io.kuber
netes.container.hash: f4bc3217,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5198968b76968a091c33806ec424495c03395675b20e7eb35e330d16217211,PodSandboxId:78b0408435c31eb2c100690d28760dd80d92d11a1204f91418812c69660e79a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711583597995455547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-47npx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968d63e4-f44a-4e52-b6c0-04e0ed1a068e,},Annotations:map[string]string{io.kubernetes.container.hash: d58e1b37,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed9a38e9f6cd98054d1e672c652e36e6262ceb6eb2a3e14911a5178d222135e7,PodSandboxId:906a95ca7b9309653761922034b8a2dfca231a15f3b955bb1c166ebd569149c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711583597982972322,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-msv9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c549358-2f35-4345-aa7a-8bbbcfc4ef01,},Annotations:map[string]string{io.kubernetes.container.hash: 76b7b3af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a226f01452a72cf6e9a608450715a12f6663ccf2697e8259f809e966809978ce,PodSandboxId:3f1239e30a953ae16c917e002c235d64ba2b2b2f9165e939c99bda7578eae785,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711583595702331437,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t77p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27eff0c9-9b45-4530-aba9-1a5e0ca60802,},Annotations:map[string]string{io.kubernetes.container.hash: d0985273,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0128cd878ebdae2c6e217413183a103417714ffe00d55c1c92d1353e38238fa,PodSandboxId:bbb9d168e952fe414d641f2ec6a7e1e63a04205d4e2aae1c6ad9a2756c6443ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83
d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711583575850415387,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab33d6840338638cbdcd9ebe5fdd4d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7359b186,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afbf14c176818ba796eacd41f1600b5f09075c6a2b4f1edcad903530221f76ff,PodSandboxId:b75106f2dccc70e077dfb09e12c05d0942a02fa2c4894a1bd68ec76ca144eed7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,Create
dAt:1711583575810399508,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9eaf884653411ba1f22eb4cdbdfa748,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=79e417c5-8af0-4378-88a0-80363ac9f77d name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:08:02 ha-377576 crio[3893]: time="2024-03-28 00:08:02.962788160Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f161bf22-d409-4e59-bd69-7e9f29ff536b name=/runtime.v1.RuntimeService/Version
	Mar 28 00:08:02 ha-377576 crio[3893]: time="2024-03-28 00:08:02.962918648Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f161bf22-d409-4e59-bd69-7e9f29ff536b name=/runtime.v1.RuntimeService/Version
	Mar 28 00:08:02 ha-377576 crio[3893]: time="2024-03-28 00:08:02.965469386Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fd62aaae-23e1-4ddd-b660-1fc150826a44 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:08:02 ha-377576 crio[3893]: time="2024-03-28 00:08:02.966032850Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711584482966001385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd62aaae-23e1-4ddd-b660-1fc150826a44 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:08:03 ha-377576 crio[3893]: time="2024-03-28 00:08:03.012912857Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=7dd8ce88-fbf3-4e5a-857c-85d8cdb9bf15 name=/runtime.v1.RuntimeService/Status
	Mar 28 00:08:03 ha-377576 crio[3893]: time="2024-03-28 00:08:03.013575064Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=7dd8ce88-fbf3-4e5a-857c-85d8cdb9bf15 name=/runtime.v1.RuntimeService/Status
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	446878900fc2f       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               4                   d6ee7152bab39       kindnet-5zmtk
	1639d05b99a2d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   904097cb5f152       storage-provisioner
	7618cf90b394f       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      2 minutes ago        Running             kube-controller-manager   2                   cb101e6a739d9       kube-controller-manager-ha-377576
	2930b02670199       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      2 minutes ago        Running             kube-apiserver            3                   997d89c34a5f3       kube-apiserver-ha-377576
	5298cacdf731f       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   6eafda94672d5       busybox-7fdf7869d9-78c89
	0d9ac0997d255       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   904097cb5f152       storage-provisioner
	5116a84093b94       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      2 minutes ago        Running             kube-vip                  0                   0b075b5717795       kube-vip-ha-377576
	bb3478503fe92       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago        Exited              kindnet-cni               3                   d6ee7152bab39       kindnet-5zmtk
	f043217131ac4       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      2 minutes ago        Running             kube-proxy                1                   657c51c3b5e7d       kube-proxy-4t77p
	3d05a3cee8583       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   abd9c1ff4a286       etcd-ha-377576
	f3bb93ac0ec38       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      2 minutes ago        Exited              kube-controller-manager   1                   cb101e6a739d9       kube-controller-manager-ha-377576
	20e7e0f2dabc1       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      2 minutes ago        Exited              kube-apiserver            2                   997d89c34a5f3       kube-apiserver-ha-377576
	f5f23730141a9       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      2 minutes ago        Running             kube-scheduler            1                   ee57ddbaf2fe4       kube-scheduler-ha-377576
	e280dd2cc82d6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   e7de52be4110d       coredns-76f75df574-47npx
	0a2fd3dc48780       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   8519de872cf97       coredns-76f75df574-msv9s
	fc41f34db32bf       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   d8bf33d99bda1       busybox-7fdf7869d9-78c89
	1d5198968b769       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   78b0408435c31       coredns-76f75df574-47npx
	ed9a38e9f6cd9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   906a95ca7b930       coredns-76f75df574-msv9s
	a226f01452a72       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      14 minutes ago       Exited              kube-proxy                0                   3f1239e30a953       kube-proxy-4t77p
	a0128cd878ebd       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      15 minutes ago       Exited              etcd                      0                   bbb9d168e952f       etcd-ha-377576
	afbf14c176818       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      15 minutes ago       Exited              kube-scheduler            0                   b75106f2dccc7       kube-scheduler-ha-377576
	
	
	==> coredns [0a2fd3dc48780368d474066f78db57cea4362acc353f7604458e5ad6b3d1c6bb] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[731237309]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (28-Mar-2024 00:05:15.180) (total time: 10000ms):
	Trace[731237309]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (00:05:25.181)
	Trace[731237309]: [10.000938818s] [10.000938818s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:43126->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:43126->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [1d5198968b76968a091c33806ec424495c03395675b20e7eb35e330d16217211] <==
	[INFO] 10.244.2.2:60611 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00032109s
	[INFO] 10.244.2.2:33575 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000137606s
	[INFO] 10.244.2.2:52980 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106821s
	[INFO] 10.244.2.2:50141 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114136s
	[INFO] 10.244.1.2:48883 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154613s
	[INFO] 10.244.1.2:60634 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000118063s
	[INFO] 10.244.1.2:39068 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170354s
	[INFO] 10.244.0.4:42784 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130962s
	[INFO] 10.244.0.4:58150 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087285s
	[INFO] 10.244.0.4:44129 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081095s
	[INFO] 10.244.0.4:44169 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047878s
	[INFO] 10.244.2.2:38674 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113751s
	[INFO] 10.244.1.2:52689 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000279728s
	[INFO] 10.244.0.4:54702 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138182s
	[INFO] 10.244.0.4:33994 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000143246s
	[INFO] 10.244.0.4:59928 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000149415s
	[INFO] 10.244.0.4:48254 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000119791s
	[INFO] 10.244.2.2:38914 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113463s
	[INFO] 10.244.2.2:45000 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084412s
	[INFO] 10.244.2.2:45899 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000082622s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e280dd2cc82d6fb9730125bb5cc514d7b33be2a3a03727e5adeb43a7f1399eb8] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[89180718]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (28-Mar-2024 00:05:11.125) (total time: 10001ms):
	Trace[89180718]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:05:21.126)
	Trace[89180718]: [10.001694349s] [10.001694349s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1688599920]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (28-Mar-2024 00:05:13.322) (total time: 10002ms):
	Trace[1688599920]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (00:05:23.324)
	Trace[1688599920]: [10.002575539s] [10.002575539s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43404->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43404->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [ed9a38e9f6cd98054d1e672c652e36e6262ceb6eb2a3e14911a5178d222135e7] <==
	[INFO] 10.244.1.2:39882 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000227134s
	[INFO] 10.244.1.2:36591 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000230513s
	[INFO] 10.244.1.2:39147 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002683128s
	[INFO] 10.244.1.2:57485 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000145666s
	[INFO] 10.244.1.2:50733 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000171259s
	[INFO] 10.244.0.4:38643 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147285s
	[INFO] 10.244.0.4:54253 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00151748s
	[INFO] 10.244.0.4:55400 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105715s
	[INFO] 10.244.2.2:37662 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00219357s
	[INFO] 10.244.2.2:39646 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000125023s
	[INFO] 10.244.2.2:33350 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001640561s
	[INFO] 10.244.2.2:40494 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076386s
	[INFO] 10.244.1.2:45207 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000150664s
	[INFO] 10.244.2.2:56881 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000230324s
	[INFO] 10.244.2.2:46450 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102951s
	[INFO] 10.244.2.2:49186 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107347s
	[INFO] 10.244.1.2:32923 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00033097s
	[INFO] 10.244.1.2:38607 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000207486s
	[INFO] 10.244.1.2:54186 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000187929s
	[INFO] 10.244.2.2:59559 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147121s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-377576
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-377576
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=ha-377576
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_27T23_53_03_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 23:53:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-377576
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 00:08:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 00:05:54 +0000   Wed, 27 Mar 2024 23:53:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 00:05:54 +0000   Wed, 27 Mar 2024 23:53:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 00:05:54 +0000   Wed, 27 Mar 2024 23:53:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 00:05:54 +0000   Wed, 27 Mar 2024 23:53:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.47
	  Hostname:    ha-377576
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 548afee7a42c42209042fc22e933a640
	  System UUID:                548afee7-a42c-4220-9042-fc22e933a640
	  Boot ID:                    446624d0-3e4c-494a-bf42-903d59e41c0c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace    Name                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------    ----                               ------------  ----------  ---------------  -------------  ---
	  default      busybox-7fdf7869d9-78c89           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system  coredns-76f75df574-47npx           100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system  coredns-76f75df574-msv9s           100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system  etcd-ha-377576                     100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system  kindnet-5zmtk                      100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system  kube-apiserver-ha-377576           250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system  kube-controller-manager-ha-377576  200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system  kube-proxy-4t77p                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system  kube-scheduler-ha-377576           100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system  kube-vip-ha-377576                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system  storage-provisioner                0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 14m                  kube-proxy       
	  Normal   Starting                 2m6s                 kube-proxy       
	  Normal   NodeHasNoDiskPressure    15m                  kubelet          Node ha-377576 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  15m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  15m                  kubelet          Node ha-377576 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     15m                  kubelet          Node ha-377576 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m                  node-controller  Node ha-377576 event: Registered Node ha-377576 in Controller
	  Normal   NodeReady                14m                  kubelet          Node ha-377576 status is now: NodeReady
	  Normal   RegisteredNode           12m                  node-controller  Node ha-377576 event: Registered Node ha-377576 in Controller
	  Normal   RegisteredNode           11m                  node-controller  Node ha-377576 event: Registered Node ha-377576 in Controller
	  Warning  ContainerGCFailed        3m1s (x2 over 4m1s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           2m                   node-controller  Node ha-377576 event: Registered Node ha-377576 in Controller
	  Normal   RegisteredNode           2m                   node-controller  Node ha-377576 event: Registered Node ha-377576 in Controller
	  Normal   RegisteredNode           36s                  node-controller  Node ha-377576 event: Registered Node ha-377576 in Controller
	
	
	Name:               ha-377576-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-377576-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=ha-377576
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_27T23_55_23_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 23:55:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-377576-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 00:07:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 00:06:36 +0000   Thu, 28 Mar 2024 00:05:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 00:06:36 +0000   Thu, 28 Mar 2024 00:05:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 00:06:36 +0000   Thu, 28 Mar 2024 00:05:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 00:06:36 +0000   Thu, 28 Mar 2024 00:05:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.117
	  Hostname:    ha-377576-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e8bdd7497a164e8f88f2bc1a3706be52
	  System UUID:                e8bdd749-7a16-4e8f-88f2-bc1a3706be52
	  Boot ID:                    aea9ba56-088a-4867-8d0a-150f94cf447e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace    Name                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------    ----                                   ------------  ----------  ---------------  -------------  ---
	  default      busybox-7fdf7869d9-2dqtf               0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system  etcd-ha-377576-m02                     100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system  kindnet-6wmmc                          100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system  kube-apiserver-ha-377576-m02           250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system  kube-controller-manager-ha-377576-m02  200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system  kube-proxy-k9dcr                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system  kube-scheduler-ha-377576-m02           100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system  kube-vip-ha-377576-m02                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  Starting                 109s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-377576-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-377576-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-377576-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 12m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           12m                    node-controller  Node ha-377576-m02 event: Registered Node ha-377576-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-377576-m02 event: Registered Node ha-377576-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-377576-m02 event: Registered Node ha-377576-m02 in Controller
	  Normal  NodeNotReady             9m19s                  node-controller  Node ha-377576-m02 status is now: NodeNotReady
	  Normal  Starting                 2m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m35s (x8 over 2m35s)  kubelet          Node ha-377576-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m35s (x8 over 2m35s)  kubelet          Node ha-377576-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m35s (x7 over 2m35s)  kubelet          Node ha-377576-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m                     node-controller  Node ha-377576-m02 event: Registered Node ha-377576-m02 in Controller
	  Normal  RegisteredNode           2m                     node-controller  Node ha-377576-m02 event: Registered Node ha-377576-m02 in Controller
	  Normal  RegisteredNode           36s                    node-controller  Node ha-377576-m02 event: Registered Node ha-377576-m02 in Controller
	
	
	Name:               ha-377576-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-377576-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=ha-377576
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_27T23_56_36_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 23:56:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-377576-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 00:07:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 00:07:33 +0000   Thu, 28 Mar 2024 00:07:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 00:07:33 +0000   Thu, 28 Mar 2024 00:07:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 00:07:33 +0000   Thu, 28 Mar 2024 00:07:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 00:07:33 +0000   Thu, 28 Mar 2024 00:07:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    ha-377576-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 71074434be55477c85d1de1bbea96887
	  System UUID:                71074434-be55-477c-85d1-de1bbea96887
	  Boot ID:                    837cf4ee-6f04-4ccf-909c-87a5449006cf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace    Name                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------    ----                                   ------------  ----------  ---------------  -------------  ---
	  default      busybox-7fdf7869d9-jrh7n               0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system  etcd-ha-377576-m03                     100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system  kindnet-n8fpn                          100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system  kube-apiserver-ha-377576-m03           250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system  kube-controller-manager-ha-377576-m03  200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system  kube-proxy-5plfq                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system  kube-scheduler-ha-377576-m03           100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system  kube-vip-ha-377576-m03                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 39s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-377576-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-377576-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-377576-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-377576-m03 event: Registered Node ha-377576-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-377576-m03 event: Registered Node ha-377576-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-377576-m03 event: Registered Node ha-377576-m03 in Controller
	  Normal   RegisteredNode           2m                 node-controller  Node ha-377576-m03 event: Registered Node ha-377576-m03 in Controller
	  Normal   RegisteredNode           2m                 node-controller  Node ha-377576-m03 event: Registered Node ha-377576-m03 in Controller
	  Normal   NodeNotReady             80s                node-controller  Node ha-377576-m03 status is now: NodeNotReady
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  61s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  61s (x2 over 61s)  kubelet          Node ha-377576-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x2 over 61s)  kubelet          Node ha-377576-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x2 over 61s)  kubelet          Node ha-377576-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 61s                kubelet          Node ha-377576-m03 has been rebooted, boot id: 837cf4ee-6f04-4ccf-909c-87a5449006cf
	  Normal   NodeReady                61s                kubelet          Node ha-377576-m03 status is now: NodeReady
	  Normal   RegisteredNode           37s                node-controller  Node ha-377576-m03 event: Registered Node ha-377576-m03 in Controller
	
	
	Name:               ha-377576-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-377576-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=ha-377576
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_27T23_57_34_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 23:57:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-377576-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 00:07:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 00:07:55 +0000   Thu, 28 Mar 2024 00:07:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 00:07:55 +0000   Thu, 28 Mar 2024 00:07:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 00:07:55 +0000   Thu, 28 Mar 2024 00:07:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 00:07:55 +0000   Thu, 28 Mar 2024 00:07:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.93
	  Hostname:    ha-377576-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9888e36a359a48f1aa6b97712e7f2662
	  System UUID:                9888e36a-359a-48f1-aa6b-97712e7f2662
	  Boot ID:                    2a204f54-9894-47ce-8cd2-4156d335ee08
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace    Name              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------    ----              ------------  ----------  ---------------  -------------  ---
	  kube-system  kindnet-57xkj     100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system  kube-proxy-nsmbj  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-377576-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-377576-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-377576-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-377576-m04 event: Registered Node ha-377576-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-377576-m04 event: Registered Node ha-377576-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-377576-m04 event: Registered Node ha-377576-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-377576-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m                 node-controller  Node ha-377576-m04 event: Registered Node ha-377576-m04 in Controller
	  Normal   RegisteredNode           2m                 node-controller  Node ha-377576-m04 event: Registered Node ha-377576-m04 in Controller
	  Normal   NodeNotReady             80s                node-controller  Node ha-377576-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           37s                node-controller  Node ha-377576-m04 event: Registered Node ha-377576-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x3 over 8s)    kubelet          Node ha-377576-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x3 over 8s)    kubelet          Node ha-377576-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x3 over 8s)    kubelet          Node ha-377576-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s (x2 over 8s)    kubelet          Node ha-377576-m04 has been rebooted, boot id: 2a204f54-9894-47ce-8cd2-4156d335ee08
	  Normal   NodeReady                8s (x2 over 8s)    kubelet          Node ha-377576-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.445381] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.055911] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058244] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.192360] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.112715] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.267509] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.568474] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.064108] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.418967] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +1.239042] kauditd_printk_skb: 57 callbacks suppressed
	[Mar27 23:53] kauditd_printk_skb: 40 callbacks suppressed
	[  +0.989248] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	[ +13.165128] kauditd_printk_skb: 15 callbacks suppressed
	[Mar27 23:55] kauditd_printk_skb: 74 callbacks suppressed
	[Mar28 00:05] systemd-fstab-generator[3806]: Ignoring "noauto" option for root device
	[  +0.158435] systemd-fstab-generator[3818]: Ignoring "noauto" option for root device
	[  +0.186777] systemd-fstab-generator[3832]: Ignoring "noauto" option for root device
	[  +0.155223] systemd-fstab-generator[3844]: Ignoring "noauto" option for root device
	[  +0.308136] systemd-fstab-generator[3877]: Ignoring "noauto" option for root device
	[  +1.053355] systemd-fstab-generator[4111]: Ignoring "noauto" option for root device
	[  +6.217474] kauditd_printk_skb: 142 callbacks suppressed
	[ +16.387174] kauditd_printk_skb: 67 callbacks suppressed
	[ +24.306679] kauditd_printk_skb: 5 callbacks suppressed
	[Mar28 00:07] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [3d05a3cee8583ee0f268ce6a367e7e700f8f87f1d96c985228250245d5f5a258] <==
	{"level":"warn","ts":"2024-03-28T00:06:56.350578Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"e5d33a179970ddaa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:06:56.450017Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"e5d33a179970ddaa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:06:56.456555Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e5d33a179970ddaa","rtt":"0s","error":"dial tcp 192.168.39.101:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-03-28T00:06:56.456668Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e5d33a179970ddaa","rtt":"0s","error":"dial tcp 192.168.39.101:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-03-28T00:06:56.550964Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"e5d33a179970ddaa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:06:56.594611Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"e5d33a179970ddaa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:06:56.597946Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"e5d33a179970ddaa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:06:56.650415Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dda2c3e6a900b50e","from":"dda2c3e6a900b50e","remote-peer-id":"e5d33a179970ddaa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-28T00:06:56.777737Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.101:2380/version","remote-member-id":"e5d33a179970ddaa","error":"Get \"https://192.168.39.101:2380/version\": dial tcp 192.168.39.101:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-28T00:06:56.777848Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e5d33a179970ddaa","error":"Get \"https://192.168.39.101:2380/version\": dial tcp 192.168.39.101:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-28T00:07:00.779776Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.101:2380/version","remote-member-id":"e5d33a179970ddaa","error":"Get \"https://192.168.39.101:2380/version\": dial tcp 192.168.39.101:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-28T00:07:00.779844Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e5d33a179970ddaa","error":"Get \"https://192.168.39.101:2380/version\": dial tcp 192.168.39.101:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-28T00:07:01.457077Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e5d33a179970ddaa","rtt":"0s","error":"dial tcp 192.168.39.101:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-28T00:07:01.457157Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e5d33a179970ddaa","rtt":"0s","error":"dial tcp 192.168.39.101:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-28T00:07:04.782384Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.101:2380/version","remote-member-id":"e5d33a179970ddaa","error":"Get \"https://192.168.39.101:2380/version\": dial tcp 192.168.39.101:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-28T00:07:04.782465Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e5d33a179970ddaa","error":"Get \"https://192.168.39.101:2380/version\": dial tcp 192.168.39.101:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-28T00:07:06.458196Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e5d33a179970ddaa","rtt":"0s","error":"dial tcp 192.168.39.101:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-28T00:07:06.45837Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e5d33a179970ddaa","rtt":"0s","error":"dial tcp 192.168.39.101:2380: connect: connection refused"}
	{"level":"info","ts":"2024-03-28T00:07:07.80021Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"e5d33a179970ddaa"}
	{"level":"info","ts":"2024-03-28T00:07:07.800308Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"dda2c3e6a900b50e","remote-peer-id":"e5d33a179970ddaa"}
	{"level":"info","ts":"2024-03-28T00:07:07.802264Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"dda2c3e6a900b50e","remote-peer-id":"e5d33a179970ddaa"}
	{"level":"info","ts":"2024-03-28T00:07:07.819768Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"dda2c3e6a900b50e","to":"e5d33a179970ddaa","stream-type":"stream Message"}
	{"level":"info","ts":"2024-03-28T00:07:07.821266Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"dda2c3e6a900b50e","remote-peer-id":"e5d33a179970ddaa"}
	{"level":"info","ts":"2024-03-28T00:07:07.823244Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"dda2c3e6a900b50e","to":"e5d33a179970ddaa","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-03-28T00:07:07.823312Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"dda2c3e6a900b50e","remote-peer-id":"e5d33a179970ddaa"}
	
	
	==> etcd [a0128cd878ebdae2c6e217413183a103417714ffe00d55c1c92d1353e38238fa] <==
	2024/03/28 00:03:30 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-28T00:03:30.633124Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"261.732399ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" limit:500 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-03-28T00:03:30.646767Z","caller":"traceutil/trace.go:171","msg":"trace[1164427338] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; }","duration":"275.357753ms","start":"2024-03-28T00:03:30.371359Z","end":"2024-03-28T00:03:30.646717Z","steps":["trace[1164427338] 'agreement among raft nodes before linearized reading'  (duration: 261.761077ms)"],"step_count":1}
	2024/03/28 00:03:30 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-28T00:03:30.633139Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.569415ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking\" limit:500 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-03-28T00:03:30.64699Z","caller":"traceutil/trace.go:171","msg":"trace[1926107880] range","detail":"{range_begin:/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking; range_end:; }","duration":"187.553492ms","start":"2024-03-28T00:03:30.459428Z","end":"2024-03-28T00:03:30.646982Z","steps":["trace[1926107880] 'agreement among raft nodes before linearized reading'  (duration: 173.707151ms)"],"step_count":1}
	2024/03/28 00:03:30 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-03-28T00:03:30.673598Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"dda2c3e6a900b50e","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-03-28T00:03:30.673921Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"fcedd589fb2f542e"}
	{"level":"info","ts":"2024-03-28T00:03:30.674017Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"fcedd589fb2f542e"}
	{"level":"info","ts":"2024-03-28T00:03:30.674138Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"fcedd589fb2f542e"}
	{"level":"info","ts":"2024-03-28T00:03:30.674274Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e"}
	{"level":"info","ts":"2024-03-28T00:03:30.674378Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e"}
	{"level":"info","ts":"2024-03-28T00:03:30.674464Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e"}
	{"level":"info","ts":"2024-03-28T00:03:30.674569Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"fcedd589fb2f542e"}
	{"level":"info","ts":"2024-03-28T00:03:30.675356Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e5d33a179970ddaa"}
	{"level":"info","ts":"2024-03-28T00:03:30.675396Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e5d33a179970ddaa"}
	{"level":"info","ts":"2024-03-28T00:03:30.675475Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e5d33a179970ddaa"}
	{"level":"info","ts":"2024-03-28T00:03:30.675715Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"dda2c3e6a900b50e","remote-peer-id":"e5d33a179970ddaa"}
	{"level":"info","ts":"2024-03-28T00:03:30.675769Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"dda2c3e6a900b50e","remote-peer-id":"e5d33a179970ddaa"}
	{"level":"info","ts":"2024-03-28T00:03:30.675978Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"dda2c3e6a900b50e","remote-peer-id":"e5d33a179970ddaa"}
	{"level":"info","ts":"2024-03-28T00:03:30.676449Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e5d33a179970ddaa"}
	{"level":"info","ts":"2024-03-28T00:03:30.67871Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.47:2380"}
	{"level":"info","ts":"2024-03-28T00:03:30.678823Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.47:2380"}
	{"level":"info","ts":"2024-03-28T00:03:30.678871Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-377576","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.47:2380"],"advertise-client-urls":["https://192.168.39.47:2379"]}
	
	
	==> kernel <==
	 00:08:03 up 15 min,  0 users,  load average: 0.94, 0.76, 0.49
	Linux ha-377576 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [446878900fc2ff3c1baac3e23199c7573f58770442bddf2548e3ccbfa9d3b300] <==
	I0328 00:07:28.776919       1 main.go:250] Node ha-377576-m04 has CIDR [10.244.3.0/24] 
	I0328 00:07:38.793170       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0328 00:07:38.793226       1 main.go:227] handling current node
	I0328 00:07:38.793255       1 main.go:223] Handling node with IPs: map[192.168.39.117:{}]
	I0328 00:07:38.793261       1 main.go:250] Node ha-377576-m02 has CIDR [10.244.1.0/24] 
	I0328 00:07:38.793445       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0328 00:07:38.793474       1 main.go:250] Node ha-377576-m03 has CIDR [10.244.2.0/24] 
	I0328 00:07:38.793590       1 main.go:223] Handling node with IPs: map[192.168.39.93:{}]
	I0328 00:07:38.793623       1 main.go:250] Node ha-377576-m04 has CIDR [10.244.3.0/24] 
	I0328 00:07:48.809804       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0328 00:07:48.809852       1 main.go:227] handling current node
	I0328 00:07:48.809882       1 main.go:223] Handling node with IPs: map[192.168.39.117:{}]
	I0328 00:07:48.809888       1 main.go:250] Node ha-377576-m02 has CIDR [10.244.1.0/24] 
	I0328 00:07:48.810026       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0328 00:07:48.810053       1 main.go:250] Node ha-377576-m03 has CIDR [10.244.2.0/24] 
	I0328 00:07:48.810108       1 main.go:223] Handling node with IPs: map[192.168.39.93:{}]
	I0328 00:07:48.810113       1 main.go:250] Node ha-377576-m04 has CIDR [10.244.3.0/24] 
	I0328 00:07:58.825187       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0328 00:07:58.825367       1 main.go:227] handling current node
	I0328 00:07:58.825411       1 main.go:223] Handling node with IPs: map[192.168.39.117:{}]
	I0328 00:07:58.825443       1 main.go:250] Node ha-377576-m02 has CIDR [10.244.1.0/24] 
	I0328 00:07:58.825722       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0328 00:07:58.825782       1 main.go:250] Node ha-377576-m03 has CIDR [10.244.2.0/24] 
	I0328 00:07:58.825943       1 main.go:223] Handling node with IPs: map[192.168.39.93:{}]
	I0328 00:07:58.825999       1 main.go:250] Node ha-377576-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [bb3478503fe928ca51e348aec8e5aa7eebe7cfdc8b7226d85976590d73be90dd] <==
	I0328 00:05:11.339932       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0328 00:05:11.340022       1 main.go:107] hostIP = 192.168.39.47
	podIP = 192.168.39.47
	I0328 00:05:11.340206       1 main.go:116] setting mtu 1500 for CNI 
	I0328 00:05:11.340251       1 main.go:146] kindnetd IP family: "ipv4"
	I0328 00:05:11.340297       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0328 00:05:13.352146       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0328 00:05:16.424310       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0328 00:05:19.496081       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0328 00:05:31.506727       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0328 00:05:34.856323       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kube-apiserver [20e7e0f2dabc114fd527652f912485cbd4476203fe6c63338ed46931f28df715] <==
	I0328 00:05:11.014936       1 options.go:222] external host was not specified, using 192.168.39.47
	I0328 00:05:11.016084       1 server.go:148] Version: v1.29.3
	I0328 00:05:11.016136       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 00:05:11.515055       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0328 00:05:11.515098       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0328 00:05:11.515351       1 instance.go:297] Using reconciler: lease
	I0328 00:05:11.515756       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	W0328 00:05:31.512573       1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0328 00:05:31.517319       1 instance.go:290] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [2930b0267019902b8b9ce7cd18b907f89d409676028d92de1dc551850d78f276] <==
	I0328 00:05:50.966157       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0328 00:05:50.968800       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0328 00:05:50.968834       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0328 00:05:50.969180       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 00:05:50.969331       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 00:05:51.150965       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0328 00:05:51.151010       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0328 00:05:51.151125       1 shared_informer.go:318] Caches are synced for configmaps
	I0328 00:05:51.151793       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0328 00:05:51.151848       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0328 00:05:51.152473       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	W0328 00:05:51.168055       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.101]
	I0328 00:05:51.170593       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0328 00:05:51.170668       1 aggregator.go:165] initial CRD sync complete...
	I0328 00:05:51.170705       1 autoregister_controller.go:141] Starting autoregister controller
	I0328 00:05:51.170727       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0328 00:05:51.170749       1 cache.go:39] Caches are synced for autoregister controller
	I0328 00:05:51.171970       1 controller.go:624] quota admission added evaluator for: endpoints
	I0328 00:05:51.180431       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0328 00:05:51.181904       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0328 00:05:51.183552       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0328 00:05:51.198089       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0328 00:05:51.963105       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0328 00:05:52.429099       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.101 192.168.39.47]
	W0328 00:06:12.428398       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.117 192.168.39.47]
	
	
	==> kube-controller-manager [7618cf90b394f83cf98ba25b6a878b77ac0c7bacb6fa65a7cf7ddaae3d859976] <==
	I0328 00:06:16.724808       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="14.799916ms"
	I0328 00:06:16.725602       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="120.418µs"
	I0328 00:06:27.812133       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="failed to update kube-dns-njkvc EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-njkvc\": the object has been modified; please apply your changes to the latest version and try again"
	I0328 00:06:27.812580       1 event.go:364] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"b68e3da5-21aa-4e8d-b648-2c736e5c7481", APIVersion:"v1", ResourceVersion:"234", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-njkvc EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-njkvc": the object has been modified; please apply your changes to the latest version and try again
	I0328 00:06:27.835999       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="49.691046ms"
	I0328 00:06:27.836163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="96.761µs"
	I0328 00:06:43.713294       1 event.go:376] "Event occurred" object="ha-377576-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-377576-m03 status is now: NodeNotReady"
	I0328 00:06:43.713357       1 event.go:376] "Event occurred" object="ha-377576-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-377576-m04 status is now: NodeNotReady"
	I0328 00:06:43.731727       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-5plfq" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 00:06:43.742481       1 event.go:376] "Event occurred" object="kube-system/kindnet-57xkj" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 00:06:43.755435       1 event.go:376] "Event occurred" object="kube-system/kube-scheduler-ha-377576-m03" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 00:06:43.777864       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-nsmbj" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 00:06:43.781597       1 event.go:376] "Event occurred" object="kube-system/kube-vip-ha-377576-m03" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 00:06:43.798487       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-jrh7n" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 00:06:43.811701       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="12.086073ms"
	I0328 00:06:43.812586       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="239.967µs"
	I0328 00:06:43.824010       1 event.go:376] "Event occurred" object="kube-system/etcd-ha-377576-m03" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 00:06:43.861892       1 event.go:376] "Event occurred" object="kube-system/kindnet-n8fpn" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 00:06:43.887225       1 event.go:376] "Event occurred" object="kube-system/kube-apiserver-ha-377576-m03" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 00:06:43.911923       1 event.go:376] "Event occurred" object="kube-system/kube-controller-manager-ha-377576-m03" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 00:07:03.815299       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="93.391µs"
	I0328 00:07:03.929780       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-jrh7n" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-jrh7n"
	I0328 00:07:21.890731       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.72339ms"
	I0328 00:07:21.890859       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="52.488µs"
	I0328 00:07:55.618372       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-377576-m04"
	
	
	==> kube-controller-manager [f3bb93ac0ec38b966779f4260f9506e31f38cc6702ce1751f61b06271e40fcb3] <==
	I0328 00:05:11.638848       1 serving.go:380] Generated self-signed cert in-memory
	I0328 00:05:12.135906       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0328 00:05:12.135991       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 00:05:12.138197       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 00:05:12.138346       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 00:05:12.139821       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0328 00:05:12.139886       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0328 00:05:32.524666       1 controllermanager.go:232] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.47:8443/healthz\": dial tcp 192.168.39.47:8443: connect: connection refused"
	
	
	==> kube-proxy [a226f01452a72cf6e9a608450715a12f6663ccf2697e8259f809e966809978ce] <==
	E0328 00:02:15.816214       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-377576&resourceVersion=1873": dial tcp 192.168.39.254:8443: connect: no route to host
	W0328 00:02:19.016287       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	E0328 00:02:19.016429       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	W0328 00:02:22.089780       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-377576&resourceVersion=1873": dial tcp 192.168.39.254:8443: connect: no route to host
	E0328 00:02:22.089912       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-377576&resourceVersion=1873": dial tcp 192.168.39.254:8443: connect: no route to host
	W0328 00:02:22.090115       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	E0328 00:02:22.090393       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	W0328 00:02:25.160770       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	E0328 00:02:25.160848       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	W0328 00:02:31.305093       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-377576&resourceVersion=1873": dial tcp 192.168.39.254:8443: connect: no route to host
	E0328 00:02:31.305342       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-377576&resourceVersion=1873": dial tcp 192.168.39.254:8443: connect: no route to host
	W0328 00:02:34.377949       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	E0328 00:02:34.378039       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	W0328 00:02:34.377969       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	E0328 00:02:34.378243       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	W0328 00:02:46.665101       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-377576&resourceVersion=1873": dial tcp 192.168.39.254:8443: connect: no route to host
	E0328 00:02:46.665321       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-377576&resourceVersion=1873": dial tcp 192.168.39.254:8443: connect: no route to host
	W0328 00:02:52.808607       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	E0328 00:02:52.809168       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	W0328 00:02:58.953990       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	E0328 00:02:58.954066       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	W0328 00:03:14.312439       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-377576&resourceVersion=1873": dial tcp 192.168.39.254:8443: connect: no route to host
	E0328 00:03:14.312582       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-377576&resourceVersion=1873": dial tcp 192.168.39.254:8443: connect: no route to host
	W0328 00:03:23.530218       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	E0328 00:03:23.530448       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [f043217131ac4124beae3e8ebcbab4be1eebd60b6b040475b8d40b6951c8837f] <==
	I0328 00:05:12.217150       1 server_others.go:72] "Using iptables proxy"
	E0328 00:05:14.121190       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-377576\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0328 00:05:17.192997       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-377576\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0328 00:05:20.264419       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-377576\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0328 00:05:26.409164       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-377576\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0328 00:05:38.697116       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-377576\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0328 00:05:57.074634       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.47"]
	I0328 00:05:57.123774       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 00:05:57.123800       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 00:05:57.123827       1 server_others.go:168] "Using iptables Proxier"
	I0328 00:05:57.127000       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 00:05:57.127433       1 server.go:865] "Version info" version="v1.29.3"
	I0328 00:05:57.127600       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 00:05:57.130080       1 config.go:188] "Starting service config controller"
	I0328 00:05:57.130182       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 00:05:57.130307       1 config.go:97] "Starting endpoint slice config controller"
	I0328 00:05:57.130409       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 00:05:57.131848       1 config.go:315] "Starting node config controller"
	I0328 00:05:57.131884       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 00:05:57.230864       1 shared_informer.go:318] Caches are synced for service config
	I0328 00:05:57.230864       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0328 00:05:57.232440       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [afbf14c176818ba796eacd41f1600b5f09075c6a2b4f1edcad903530221f76ff] <==
	W0328 00:03:27.278673       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0328 00:03:27.278773       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0328 00:03:27.389576       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0328 00:03:27.389673       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0328 00:03:27.776693       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0328 00:03:27.776725       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0328 00:03:27.804225       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0328 00:03:27.804331       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0328 00:03:27.979804       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0328 00:03:27.979924       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 00:03:28.069602       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0328 00:03:28.069699       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0328 00:03:28.654719       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0328 00:03:28.654917       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0328 00:03:29.449738       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0328 00:03:29.449844       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0328 00:03:29.513816       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0328 00:03:29.513950       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0328 00:03:29.569728       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0328 00:03:29.569823       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0328 00:03:29.900915       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0328 00:03:29.900968       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0328 00:03:30.623098       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0328 00:03:30.625331       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0328 00:03:30.627835       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [f5f23730141a94b0d7db08dffd1dd111243d4d4c6ee706bec92a8ea3c1872258] <==
	W0328 00:05:42.198211       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: Get "https://192.168.39.47:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.47:8443: connect: connection refused
	E0328 00:05:42.198310       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.47:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.47:8443: connect: connection refused
	W0328 00:05:42.793355       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.47:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.47:8443: connect: connection refused
	E0328 00:05:42.793596       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.47:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.47:8443: connect: connection refused
	W0328 00:05:42.957371       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.47:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.47:8443: connect: connection refused
	E0328 00:05:42.957637       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.47:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.47:8443: connect: connection refused
	W0328 00:05:47.469657       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: Get "https://192.168.39.47:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.47:8443: connect: connection refused
	E0328 00:05:47.469808       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.47:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.47:8443: connect: connection refused
	W0328 00:05:48.277229       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: Get "https://192.168.39.47:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.47:8443: connect: connection refused
	E0328 00:05:48.277364       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.47:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.47:8443: connect: connection refused
	W0328 00:05:48.346645       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: Get "https://192.168.39.47:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.47:8443: connect: connection refused
	E0328 00:05:48.346763       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.47:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.47:8443: connect: connection refused
	W0328 00:05:51.077568       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0328 00:05:51.077633       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0328 00:05:51.077728       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0328 00:05:51.077761       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0328 00:05:51.077834       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0328 00:05:51.077872       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0328 00:05:51.077934       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0328 00:05:51.077971       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0328 00:05:51.083546       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0328 00:05:51.083591       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 00:05:51.083686       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0328 00:05:51.083721       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0328 00:05:53.929851       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 28 00:06:01 ha-377576 kubelet[1383]: E0328 00:06:01.675936    1383 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-5zmtk_kube-system(4e75cdc5-22da-47f2-9833-b2f4eaa9caac)\"" pod="kube-system/kindnet-5zmtk" podUID="4e75cdc5-22da-47f2-9833-b2f4eaa9caac"
	Mar 28 00:06:02 ha-377576 kubelet[1383]: E0328 00:06:02.709791    1383 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 00:06:02 ha-377576 kubelet[1383]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 00:06:02 ha-377576 kubelet[1383]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 00:06:02 ha-377576 kubelet[1383]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 00:06:02 ha-377576 kubelet[1383]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 00:06:02 ha-377576 kubelet[1383]: I0328 00:06:02.719225    1383 scope.go:117] "RemoveContainer" containerID="42dcabde2aec964660ef004661b1aca7c5fb8ef5bed0007775f67b975b44adfa"
	Mar 28 00:06:11 ha-377576 kubelet[1383]: I0328 00:06:11.675720    1383 scope.go:117] "RemoveContainer" containerID="0d9ac0997d255850bd452ddf61795477c05b16d5a1c77900748c11ef0e86ad8a"
	Mar 28 00:06:11 ha-377576 kubelet[1383]: E0328 00:06:11.676590    1383 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9000645c-8323-43af-bd87-011d1574493c)\"" pod="kube-system/storage-provisioner" podUID="9000645c-8323-43af-bd87-011d1574493c"
	Mar 28 00:06:15 ha-377576 kubelet[1383]: I0328 00:06:15.674898    1383 scope.go:117] "RemoveContainer" containerID="bb3478503fe928ca51e348aec8e5aa7eebe7cfdc8b7226d85976590d73be90dd"
	Mar 28 00:06:15 ha-377576 kubelet[1383]: E0328 00:06:15.675594    1383 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-5zmtk_kube-system(4e75cdc5-22da-47f2-9833-b2f4eaa9caac)\"" pod="kube-system/kindnet-5zmtk" podUID="4e75cdc5-22da-47f2-9833-b2f4eaa9caac"
	Mar 28 00:06:24 ha-377576 kubelet[1383]: I0328 00:06:24.674814    1383 scope.go:117] "RemoveContainer" containerID="0d9ac0997d255850bd452ddf61795477c05b16d5a1c77900748c11ef0e86ad8a"
	Mar 28 00:06:27 ha-377576 kubelet[1383]: I0328 00:06:27.675329    1383 scope.go:117] "RemoveContainer" containerID="bb3478503fe928ca51e348aec8e5aa7eebe7cfdc8b7226d85976590d73be90dd"
	Mar 28 00:06:35 ha-377576 kubelet[1383]: I0328 00:06:35.674842    1383 kubelet.go:1903] "Trying to delete pod" pod="kube-system/kube-vip-ha-377576" podUID="2d4dd5f7-c798-4a52-97f5-4bc068603373"
	Mar 28 00:06:35 ha-377576 kubelet[1383]: I0328 00:06:35.698885    1383 kubelet.go:1908] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-377576"
	Mar 28 00:07:02 ha-377576 kubelet[1383]: E0328 00:07:02.713328    1383 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 00:07:02 ha-377576 kubelet[1383]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 00:07:02 ha-377576 kubelet[1383]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 00:07:02 ha-377576 kubelet[1383]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 00:07:02 ha-377576 kubelet[1383]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 00:08:02 ha-377576 kubelet[1383]: E0328 00:08:02.715553    1383 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 00:08:02 ha-377576 kubelet[1383]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 00:08:02 ha-377576 kubelet[1383]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 00:08:02 ha-377576 kubelet[1383]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 00:08:02 ha-377576 kubelet[1383]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0328 00:08:02.325991 1093648 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18485-1069254/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-377576 -n ha-377576
helpers_test.go:261: (dbg) Run:  kubectl --context ha-377576 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (397.58s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (142.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-377576 stop -v=7 --alsologtostderr: exit status 82 (2m0.507313233s)

                                                
                                                
-- stdout --
	* Stopping node "ha-377576-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 00:08:22.793485 1094042 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:08:22.793743 1094042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:08:22.793753 1094042 out.go:304] Setting ErrFile to fd 2...
	I0328 00:08:22.793757 1094042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:08:22.793957 1094042 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 00:08:22.794220 1094042 out.go:298] Setting JSON to false
	I0328 00:08:22.794328 1094042 mustload.go:65] Loading cluster: ha-377576
	I0328 00:08:22.794685 1094042 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:08:22.794767 1094042 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/config.json ...
	I0328 00:08:22.794950 1094042 mustload.go:65] Loading cluster: ha-377576
	I0328 00:08:22.795078 1094042 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:08:22.795121 1094042 stop.go:39] StopHost: ha-377576-m04
	I0328 00:08:22.795467 1094042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:08:22.795520 1094042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:08:22.810336 1094042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34549
	I0328 00:08:22.810875 1094042 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:08:22.811474 1094042 main.go:141] libmachine: Using API Version  1
	I0328 00:08:22.811499 1094042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:08:22.811932 1094042 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:08:22.814463 1094042 out.go:177] * Stopping node "ha-377576-m04"  ...
	I0328 00:08:22.815850 1094042 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0328 00:08:22.815893 1094042 main.go:141] libmachine: (ha-377576-m04) Calling .DriverName
	I0328 00:08:22.816139 1094042 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0328 00:08:22.816176 1094042 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHHostname
	I0328 00:08:22.819243 1094042 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:08:22.819694 1094042 main.go:141] libmachine: (ha-377576-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:c2:81", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 01:07:47 +0000 UTC Type:0 Mac:52:54:00:6a:c2:81 Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:ha-377576-m04 Clientid:01:52:54:00:6a:c2:81}
	I0328 00:08:22.819731 1094042 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined IP address 192.168.39.93 and MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:08:22.819895 1094042 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHPort
	I0328 00:08:22.820114 1094042 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHKeyPath
	I0328 00:08:22.820284 1094042 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHUsername
	I0328 00:08:22.820454 1094042 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m04/id_rsa Username:docker}
	I0328 00:08:22.906206 1094042 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0328 00:08:22.959913 1094042 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0328 00:08:23.014467 1094042 main.go:141] libmachine: Stopping "ha-377576-m04"...
	I0328 00:08:23.014505 1094042 main.go:141] libmachine: (ha-377576-m04) Calling .GetState
	I0328 00:08:23.016170 1094042 main.go:141] libmachine: (ha-377576-m04) Calling .Stop
	I0328 00:08:23.020934 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 0/120
	I0328 00:08:24.022591 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 1/120
	I0328 00:08:25.024164 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 2/120
	I0328 00:08:26.025668 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 3/120
	I0328 00:08:27.027639 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 4/120
	I0328 00:08:28.030210 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 5/120
	I0328 00:08:29.031778 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 6/120
	I0328 00:08:30.034110 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 7/120
	I0328 00:08:31.036356 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 8/120
	I0328 00:08:32.037831 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 9/120
	I0328 00:08:33.039530 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 10/120
	I0328 00:08:34.040911 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 11/120
	I0328 00:08:35.042283 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 12/120
	I0328 00:08:36.043724 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 13/120
	I0328 00:08:37.045116 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 14/120
	I0328 00:08:38.047091 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 15/120
	I0328 00:08:39.049418 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 16/120
	I0328 00:08:40.050979 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 17/120
	I0328 00:08:41.053024 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 18/120
	I0328 00:08:42.054438 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 19/120
	I0328 00:08:43.056252 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 20/120
	I0328 00:08:44.057895 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 21/120
	I0328 00:08:45.059349 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 22/120
	I0328 00:08:46.060796 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 23/120
	I0328 00:08:47.062150 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 24/120
	I0328 00:08:48.063704 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 25/120
	I0328 00:08:49.065296 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 26/120
	I0328 00:08:50.066911 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 27/120
	I0328 00:08:51.068818 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 28/120
	I0328 00:08:52.070435 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 29/120
	I0328 00:08:53.072759 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 30/120
	I0328 00:08:54.074057 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 31/120
	I0328 00:08:55.075420 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 32/120
	I0328 00:08:56.077166 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 33/120
	I0328 00:08:57.078622 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 34/120
	I0328 00:08:58.080625 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 35/120
	I0328 00:08:59.082274 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 36/120
	I0328 00:09:00.083660 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 37/120
	I0328 00:09:01.086151 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 38/120
	I0328 00:09:02.087425 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 39/120
	I0328 00:09:03.089288 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 40/120
	I0328 00:09:04.090837 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 41/120
	I0328 00:09:05.092703 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 42/120
	I0328 00:09:06.095147 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 43/120
	I0328 00:09:07.096604 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 44/120
	I0328 00:09:08.098280 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 45/120
	I0328 00:09:09.100524 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 46/120
	I0328 00:09:10.101828 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 47/120
	I0328 00:09:11.103374 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 48/120
	I0328 00:09:12.104765 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 49/120
	I0328 00:09:13.106503 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 50/120
	I0328 00:09:14.108953 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 51/120
	I0328 00:09:15.110301 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 52/120
	I0328 00:09:16.111654 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 53/120
	I0328 00:09:17.113405 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 54/120
	I0328 00:09:18.115713 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 55/120
	I0328 00:09:19.118187 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 56/120
	I0328 00:09:20.119594 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 57/120
	I0328 00:09:21.121073 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 58/120
	I0328 00:09:22.122772 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 59/120
	I0328 00:09:23.124935 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 60/120
	I0328 00:09:24.126453 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 61/120
	I0328 00:09:25.128012 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 62/120
	I0328 00:09:26.130050 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 63/120
	I0328 00:09:27.131563 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 64/120
	I0328 00:09:28.133142 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 65/120
	I0328 00:09:29.134883 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 66/120
	I0328 00:09:30.136312 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 67/120
	I0328 00:09:31.137943 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 68/120
	I0328 00:09:32.139350 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 69/120
	I0328 00:09:33.141539 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 70/120
	I0328 00:09:34.143013 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 71/120
	I0328 00:09:35.144484 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 72/120
	I0328 00:09:36.146069 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 73/120
	I0328 00:09:37.147544 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 74/120
	I0328 00:09:38.149518 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 75/120
	I0328 00:09:39.150933 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 76/120
	I0328 00:09:40.152902 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 77/120
	I0328 00:09:41.154334 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 78/120
	I0328 00:09:42.155995 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 79/120
	I0328 00:09:43.157918 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 80/120
	I0328 00:09:44.159514 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 81/120
	I0328 00:09:45.161023 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 82/120
	I0328 00:09:46.162594 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 83/120
	I0328 00:09:47.164085 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 84/120
	I0328 00:09:48.165557 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 85/120
	I0328 00:09:49.167077 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 86/120
	I0328 00:09:50.168387 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 87/120
	I0328 00:09:51.169956 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 88/120
	I0328 00:09:52.171313 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 89/120
	I0328 00:09:53.172749 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 90/120
	I0328 00:09:54.174836 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 91/120
	I0328 00:09:55.176488 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 92/120
	I0328 00:09:56.178185 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 93/120
	I0328 00:09:57.180598 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 94/120
	I0328 00:09:58.183190 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 95/120
	I0328 00:09:59.184637 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 96/120
	I0328 00:10:00.186400 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 97/120
	I0328 00:10:01.187746 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 98/120
	I0328 00:10:02.189340 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 99/120
	I0328 00:10:03.191739 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 100/120
	I0328 00:10:04.193119 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 101/120
	I0328 00:10:05.195331 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 102/120
	I0328 00:10:06.196879 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 103/120
	I0328 00:10:07.198772 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 104/120
	I0328 00:10:08.200937 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 105/120
	I0328 00:10:09.202672 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 106/120
	I0328 00:10:10.204725 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 107/120
	I0328 00:10:11.206151 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 108/120
	I0328 00:10:12.207608 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 109/120
	I0328 00:10:13.209913 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 110/120
	I0328 00:10:14.211607 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 111/120
	I0328 00:10:15.212967 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 112/120
	I0328 00:10:16.214615 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 113/120
	I0328 00:10:17.216901 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 114/120
	I0328 00:10:18.219182 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 115/120
	I0328 00:10:19.221475 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 116/120
	I0328 00:10:20.223116 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 117/120
	I0328 00:10:21.224507 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 118/120
	I0328 00:10:22.226129 1094042 main.go:141] libmachine: (ha-377576-m04) Waiting for machine to stop 119/120
	I0328 00:10:23.226740 1094042 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0328 00:10:23.226814 1094042 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0328 00:10:23.228973 1094042 out.go:177] 
	W0328 00:10:23.230542 1094042 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0328 00:10:23.230569 1094042 out.go:239] * 
	W0328 00:10:23.235231 1094042 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 00:10:23.236819 1094042 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-377576 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-377576 status -v=7 --alsologtostderr: exit status 3 (19.045819135s)

                                                
                                                
-- stdout --
	ha-377576
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-377576-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-377576-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 00:10:23.299805 1094357 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:10:23.300303 1094357 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:10:23.300319 1094357 out.go:304] Setting ErrFile to fd 2...
	I0328 00:10:23.300327 1094357 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:10:23.300835 1094357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 00:10:23.301174 1094357 out.go:298] Setting JSON to false
	I0328 00:10:23.301346 1094357 notify.go:220] Checking for updates...
	I0328 00:10:23.301386 1094357 mustload.go:65] Loading cluster: ha-377576
	I0328 00:10:23.301901 1094357 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:10:23.301924 1094357 status.go:255] checking status of ha-377576 ...
	I0328 00:10:23.302465 1094357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:10:23.302523 1094357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:10:23.324163 1094357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42653
	I0328 00:10:23.324604 1094357 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:10:23.325325 1094357 main.go:141] libmachine: Using API Version  1
	I0328 00:10:23.325356 1094357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:10:23.325771 1094357 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:10:23.326002 1094357 main.go:141] libmachine: (ha-377576) Calling .GetState
	I0328 00:10:23.327713 1094357 status.go:330] ha-377576 host status = "Running" (err=<nil>)
	I0328 00:10:23.327751 1094357 host.go:66] Checking if "ha-377576" exists ...
	I0328 00:10:23.328045 1094357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:10:23.328080 1094357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:10:23.342822 1094357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41475
	I0328 00:10:23.343292 1094357 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:10:23.343737 1094357 main.go:141] libmachine: Using API Version  1
	I0328 00:10:23.343761 1094357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:10:23.344149 1094357 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:10:23.344338 1094357 main.go:141] libmachine: (ha-377576) Calling .GetIP
	I0328 00:10:23.346824 1094357 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:10:23.347256 1094357 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:10:23.347297 1094357 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:10:23.347358 1094357 host.go:66] Checking if "ha-377576" exists ...
	I0328 00:10:23.347734 1094357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:10:23.347779 1094357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:10:23.362956 1094357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46089
	I0328 00:10:23.363440 1094357 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:10:23.364014 1094357 main.go:141] libmachine: Using API Version  1
	I0328 00:10:23.364040 1094357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:10:23.364393 1094357 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:10:23.364604 1094357 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0328 00:10:23.364821 1094357 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:10:23.364855 1094357 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:10:23.367801 1094357 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:10:23.368334 1094357 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:10:23.368367 1094357 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:10:23.368474 1094357 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0328 00:10:23.368649 1094357 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:10:23.368818 1094357 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0328 00:10:23.368949 1094357 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0328 00:10:23.452543 1094357 ssh_runner.go:195] Run: systemctl --version
	I0328 00:10:23.460465 1094357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:10:23.484607 1094357 kubeconfig.go:125] found "ha-377576" server: "https://192.168.39.254:8443"
	I0328 00:10:23.484649 1094357 api_server.go:166] Checking apiserver status ...
	I0328 00:10:23.484697 1094357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:10:23.506333 1094357 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5159/cgroup
	W0328 00:10:23.522151 1094357 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5159/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0328 00:10:23.522216 1094357 ssh_runner.go:195] Run: ls
	I0328 00:10:23.527409 1094357 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0328 00:10:23.532122 1094357 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0328 00:10:23.532146 1094357 status.go:422] ha-377576 apiserver status = Running (err=<nil>)
	I0328 00:10:23.532156 1094357 status.go:257] ha-377576 status: &{Name:ha-377576 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:10:23.532180 1094357 status.go:255] checking status of ha-377576-m02 ...
	I0328 00:10:23.532519 1094357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:10:23.532562 1094357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:10:23.548027 1094357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37053
	I0328 00:10:23.548506 1094357 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:10:23.548990 1094357 main.go:141] libmachine: Using API Version  1
	I0328 00:10:23.549016 1094357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:10:23.549459 1094357 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:10:23.549631 1094357 main.go:141] libmachine: (ha-377576-m02) Calling .GetState
	I0328 00:10:23.551609 1094357 status.go:330] ha-377576-m02 host status = "Running" (err=<nil>)
	I0328 00:10:23.551632 1094357 host.go:66] Checking if "ha-377576-m02" exists ...
	I0328 00:10:23.552007 1094357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:10:23.552053 1094357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:10:23.569568 1094357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34349
	I0328 00:10:23.570058 1094357 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:10:23.570674 1094357 main.go:141] libmachine: Using API Version  1
	I0328 00:10:23.570701 1094357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:10:23.571132 1094357 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:10:23.571384 1094357 main.go:141] libmachine: (ha-377576-m02) Calling .GetIP
	I0328 00:10:23.574402 1094357 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:10:23.574789 1094357 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 01:05:16 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0328 00:10:23.574822 1094357 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:10:23.575101 1094357 host.go:66] Checking if "ha-377576-m02" exists ...
	I0328 00:10:23.575545 1094357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:10:23.575604 1094357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:10:23.590796 1094357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37209
	I0328 00:10:23.591313 1094357 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:10:23.591767 1094357 main.go:141] libmachine: Using API Version  1
	I0328 00:10:23.591793 1094357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:10:23.592133 1094357 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:10:23.592323 1094357 main.go:141] libmachine: (ha-377576-m02) Calling .DriverName
	I0328 00:10:23.592504 1094357 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:10:23.592525 1094357 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHHostname
	I0328 00:10:23.595161 1094357 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:10:23.595509 1094357 main.go:141] libmachine: (ha-377576-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:83:99", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 01:05:16 +0000 UTC Type:0 Mac:52:54:00:bb:83:99 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ha-377576-m02 Clientid:01:52:54:00:bb:83:99}
	I0328 00:10:23.595540 1094357 main.go:141] libmachine: (ha-377576-m02) DBG | domain ha-377576-m02 has defined IP address 192.168.39.117 and MAC address 52:54:00:bb:83:99 in network mk-ha-377576
	I0328 00:10:23.595676 1094357 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHPort
	I0328 00:10:23.595863 1094357 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHKeyPath
	I0328 00:10:23.596029 1094357 main.go:141] libmachine: (ha-377576-m02) Calling .GetSSHUsername
	I0328 00:10:23.596200 1094357 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m02/id_rsa Username:docker}
	I0328 00:10:23.701093 1094357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:10:23.720036 1094357 kubeconfig.go:125] found "ha-377576" server: "https://192.168.39.254:8443"
	I0328 00:10:23.720069 1094357 api_server.go:166] Checking apiserver status ...
	I0328 00:10:23.720146 1094357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:10:23.734927 1094357 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1424/cgroup
	W0328 00:10:23.744696 1094357 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1424/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0328 00:10:23.744752 1094357 ssh_runner.go:195] Run: ls
	I0328 00:10:23.749268 1094357 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0328 00:10:23.753543 1094357 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0328 00:10:23.753568 1094357 status.go:422] ha-377576-m02 apiserver status = Running (err=<nil>)
	I0328 00:10:23.753581 1094357 status.go:257] ha-377576-m02 status: &{Name:ha-377576-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:10:23.753604 1094357 status.go:255] checking status of ha-377576-m04 ...
	I0328 00:10:23.753901 1094357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:10:23.753947 1094357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:10:23.769636 1094357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43391
	I0328 00:10:23.770132 1094357 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:10:23.770721 1094357 main.go:141] libmachine: Using API Version  1
	I0328 00:10:23.770743 1094357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:10:23.771152 1094357 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:10:23.771411 1094357 main.go:141] libmachine: (ha-377576-m04) Calling .GetState
	I0328 00:10:23.773016 1094357 status.go:330] ha-377576-m04 host status = "Running" (err=<nil>)
	I0328 00:10:23.773036 1094357 host.go:66] Checking if "ha-377576-m04" exists ...
	I0328 00:10:23.773317 1094357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:10:23.773352 1094357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:10:23.789128 1094357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34233
	I0328 00:10:23.789602 1094357 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:10:23.790149 1094357 main.go:141] libmachine: Using API Version  1
	I0328 00:10:23.790173 1094357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:10:23.790502 1094357 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:10:23.790716 1094357 main.go:141] libmachine: (ha-377576-m04) Calling .GetIP
	I0328 00:10:23.793150 1094357 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:10:23.793833 1094357 main.go:141] libmachine: (ha-377576-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:c2:81", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 01:07:47 +0000 UTC Type:0 Mac:52:54:00:6a:c2:81 Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:ha-377576-m04 Clientid:01:52:54:00:6a:c2:81}
	I0328 00:10:23.793860 1094357 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined IP address 192.168.39.93 and MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:10:23.794040 1094357 host.go:66] Checking if "ha-377576-m04" exists ...
	I0328 00:10:23.794421 1094357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:10:23.794468 1094357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:10:23.810371 1094357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43693
	I0328 00:10:23.810830 1094357 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:10:23.811304 1094357 main.go:141] libmachine: Using API Version  1
	I0328 00:10:23.811326 1094357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:10:23.811631 1094357 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:10:23.811861 1094357 main.go:141] libmachine: (ha-377576-m04) Calling .DriverName
	I0328 00:10:23.812090 1094357 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:10:23.812113 1094357 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHHostname
	I0328 00:10:23.815022 1094357 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:10:23.815453 1094357 main.go:141] libmachine: (ha-377576-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:c2:81", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 01:07:47 +0000 UTC Type:0 Mac:52:54:00:6a:c2:81 Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:ha-377576-m04 Clientid:01:52:54:00:6a:c2:81}
	I0328 00:10:23.815476 1094357 main.go:141] libmachine: (ha-377576-m04) DBG | domain ha-377576-m04 has defined IP address 192.168.39.93 and MAC address 52:54:00:6a:c2:81 in network mk-ha-377576
	I0328 00:10:23.815626 1094357 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHPort
	I0328 00:10:23.815826 1094357 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHKeyPath
	I0328 00:10:23.816003 1094357 main.go:141] libmachine: (ha-377576-m04) Calling .GetSSHUsername
	I0328 00:10:23.816147 1094357 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576-m04/id_rsa Username:docker}
	W0328 00:10:42.282558 1094357 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.93:22: connect: no route to host
	W0328 00:10:42.282711 1094357 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.93:22: connect: no route to host
	E0328 00:10:42.282735 1094357 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.93:22: connect: no route to host
	I0328 00:10:42.282743 1094357 status.go:257] ha-377576-m04 status: &{Name:ha-377576-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0328 00:10:42.282770 1094357 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.93:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-377576 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-377576 -n ha-377576
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-377576 logs -n 25: (1.983498281s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| ssh     | ha-377576 ssh -n ha-377576-m02 sudo cat                                          | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | /home/docker/cp-test_ha-377576-m03_ha-377576-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-377576 cp ha-377576-m03:/home/docker/cp-test.txt                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m04:/home/docker/cp-test_ha-377576-m03_ha-377576-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n ha-377576-m04 sudo cat                                          | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | /home/docker/cp-test_ha-377576-m03_ha-377576-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-377576 cp testdata/cp-test.txt                                                | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-377576 cp ha-377576-m04:/home/docker/cp-test.txt                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3418864072/001/cp-test_ha-377576-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-377576 cp ha-377576-m04:/home/docker/cp-test.txt                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576:/home/docker/cp-test_ha-377576-m04_ha-377576.txt                       |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n ha-377576 sudo cat                                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | /home/docker/cp-test_ha-377576-m04_ha-377576.txt                                 |           |         |                |                     |                     |
	| cp      | ha-377576 cp ha-377576-m04:/home/docker/cp-test.txt                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m02:/home/docker/cp-test_ha-377576-m04_ha-377576-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n ha-377576-m02 sudo cat                                          | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | /home/docker/cp-test_ha-377576-m04_ha-377576-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-377576 cp ha-377576-m04:/home/docker/cp-test.txt                              | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m03:/home/docker/cp-test_ha-377576-m04_ha-377576-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n                                                                 | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | ha-377576-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-377576 ssh -n ha-377576-m03 sudo cat                                          | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC | 27 Mar 24 23:58 UTC |
	|         | /home/docker/cp-test_ha-377576-m04_ha-377576-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-377576 node stop m02 -v=7                                                     | ha-377576 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:58 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | ha-377576 node start m02 -v=7                                                    | ha-377576 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:00 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-377576 -v=7                                                           | ha-377576 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| stop    | -p ha-377576 -v=7                                                                | ha-377576 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| start   | -p ha-377576 --wait=true -v=7                                                    | ha-377576 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:03 UTC | 28 Mar 24 00:08 UTC |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-377576                                                                | ha-377576 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:08 UTC |                     |
	| node    | ha-377576 node delete m03 -v=7                                                   | ha-377576 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:08 UTC | 28 Mar 24 00:08 UTC |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| stop    | ha-377576 stop -v=7                                                              | ha-377576 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 00:03:29
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 00:03:29.540124 1092522 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:03:29.540407 1092522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:03:29.540418 1092522 out.go:304] Setting ErrFile to fd 2...
	I0328 00:03:29.540423 1092522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:03:29.540622 1092522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 00:03:29.541195 1092522 out.go:298] Setting JSON to false
	I0328 00:03:29.542305 1092522 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":27907,"bootTime":1711556303,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0328 00:03:29.542377 1092522 start.go:139] virtualization: kvm guest
	I0328 00:03:29.544850 1092522 out.go:177] * [ha-377576] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0328 00:03:29.546258 1092522 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 00:03:29.546312 1092522 notify.go:220] Checking for updates...
	I0328 00:03:29.547676 1092522 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 00:03:29.549267 1092522 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 00:03:29.550538 1092522 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	I0328 00:03:29.551754 1092522 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0328 00:03:29.552995 1092522 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 00:03:29.554672 1092522 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:03:29.554776 1092522 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 00:03:29.555211 1092522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:03:29.555263 1092522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:03:29.571350 1092522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42217
	I0328 00:03:29.571943 1092522 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:03:29.572571 1092522 main.go:141] libmachine: Using API Version  1
	I0328 00:03:29.572605 1092522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:03:29.573054 1092522 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:03:29.573280 1092522 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0328 00:03:29.611251 1092522 out.go:177] * Using the kvm2 driver based on existing profile
	I0328 00:03:29.612724 1092522 start.go:297] selected driver: kvm2
	I0328 00:03:29.612744 1092522 start.go:901] validating driver "kvm2" against &{Name:ha-377576 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.29.3 ClusterName:ha-377576 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.93 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:03:29.612865 1092522 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 00:03:29.613348 1092522 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 00:03:29.613473 1092522 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18485-1069254/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0328 00:03:29.629496 1092522 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0328 00:03:29.630732 1092522 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 00:03:29.630808 1092522 cni.go:84] Creating CNI manager for ""
	I0328 00:03:29.630820 1092522 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0328 00:03:29.630893 1092522 start.go:340] cluster config:
	{Name:ha-377576 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-377576 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.93 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:03:29.631069 1092522 iso.go:125] acquiring lock: {Name:mk3da1fa7d63a581c817f327d86a827697458cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 00:03:29.632958 1092522 out.go:177] * Starting "ha-377576" primary control-plane node in "ha-377576" cluster
	I0328 00:03:29.634055 1092522 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 00:03:29.634103 1092522 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0328 00:03:29.634119 1092522 cache.go:56] Caching tarball of preloaded images
	I0328 00:03:29.634264 1092522 preload.go:173] Found /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0328 00:03:29.634281 1092522 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0328 00:03:29.634481 1092522 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/config.json ...
	I0328 00:03:29.634741 1092522 start.go:360] acquireMachinesLock for ha-377576: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 00:03:29.634805 1092522 start.go:364] duration metric: took 36.089µs to acquireMachinesLock for "ha-377576"
	I0328 00:03:29.634825 1092522 start.go:96] Skipping create...Using existing machine configuration
	I0328 00:03:29.634866 1092522 fix.go:54] fixHost starting: 
	I0328 00:03:29.635311 1092522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:03:29.635367 1092522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:03:29.649991 1092522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37761
	I0328 00:03:29.650531 1092522 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:03:29.651161 1092522 main.go:141] libmachine: Using API Version  1
	I0328 00:03:29.651188 1092522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:03:29.651623 1092522 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:03:29.651835 1092522 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0328 00:03:29.652028 1092522 main.go:141] libmachine: (ha-377576) Calling .GetState
	I0328 00:03:29.653705 1092522 fix.go:112] recreateIfNeeded on ha-377576: state=Running err=<nil>
	W0328 00:03:29.653741 1092522 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 00:03:29.655744 1092522 out.go:177] * Updating the running kvm2 "ha-377576" VM ...
	I0328 00:03:29.657234 1092522 machine.go:94] provisionDockerMachine start ...
	I0328 00:03:29.657260 1092522 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0328 00:03:29.657497 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:03:29.660426 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:03:29.661011 1092522 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:03:29.661042 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:03:29.661194 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0328 00:03:29.661407 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:03:29.661585 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:03:29.661767 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0328 00:03:29.661959 1092522 main.go:141] libmachine: Using SSH client type: native
	I0328 00:03:29.662147 1092522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0328 00:03:29.662159 1092522 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 00:03:29.767912 1092522 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-377576
	
	I0328 00:03:29.767947 1092522 main.go:141] libmachine: (ha-377576) Calling .GetMachineName
	I0328 00:03:29.768203 1092522 buildroot.go:166] provisioning hostname "ha-377576"
	I0328 00:03:29.768240 1092522 main.go:141] libmachine: (ha-377576) Calling .GetMachineName
	I0328 00:03:29.768460 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:03:29.771493 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:03:29.771896 1092522 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:03:29.771928 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:03:29.772097 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0328 00:03:29.772348 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:03:29.772535 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:03:29.772716 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0328 00:03:29.772849 1092522 main.go:141] libmachine: Using SSH client type: native
	I0328 00:03:29.773063 1092522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0328 00:03:29.773092 1092522 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-377576 && echo "ha-377576" | sudo tee /etc/hostname
	I0328 00:03:29.896811 1092522 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-377576
	
	I0328 00:03:29.896842 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:03:29.900106 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:03:29.900519 1092522 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:03:29.900554 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:03:29.900749 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0328 00:03:29.900988 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:03:29.901189 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:03:29.901367 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0328 00:03:29.901557 1092522 main.go:141] libmachine: Using SSH client type: native
	I0328 00:03:29.901726 1092522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0328 00:03:29.901742 1092522 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-377576' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-377576/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-377576' | sudo tee -a /etc/hosts; 
				fi
			fi
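
The provisioning script above keeps the guest's 127.0.1.1 entry in /etc/hosts in sync with the hostname set just before it. A quick, hypothetical way to confirm the result from inside the VM (e.g. via minikube ssh for this profile):

    # the hostname and the 127.0.1.1 mapping should now agree
    hostname                      # expected: ha-377576
    grep '^127.0.1.1' /etc/hosts  # expected: 127.0.1.1 ha-377576
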
	I0328 00:03:30.007248 1092522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 00:03:30.007296 1092522 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 00:03:30.007324 1092522 buildroot.go:174] setting up certificates
	I0328 00:03:30.007340 1092522 provision.go:84] configureAuth start
	I0328 00:03:30.007358 1092522 main.go:141] libmachine: (ha-377576) Calling .GetMachineName
	I0328 00:03:30.007653 1092522 main.go:141] libmachine: (ha-377576) Calling .GetIP
	I0328 00:03:30.010626 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:03:30.011115 1092522 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:03:30.011147 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:03:30.011322 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:03:30.013700 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:03:30.014155 1092522 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:03:30.014184 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:03:30.014413 1092522 provision.go:143] copyHostCerts
	I0328 00:03:30.014450 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 00:03:30.014494 1092522 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 00:03:30.014504 1092522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 00:03:30.014581 1092522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 00:03:30.014711 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 00:03:30.014733 1092522 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 00:03:30.014740 1092522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 00:03:30.014766 1092522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 00:03:30.014809 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 00:03:30.014826 1092522 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 00:03:30.014832 1092522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 00:03:30.014851 1092522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 00:03:30.014896 1092522 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.ha-377576 san=[127.0.0.1 192.168.39.47 ha-377576 localhost minikube]
	I0328 00:03:30.299041 1092522 provision.go:177] copyRemoteCerts
	I0328 00:03:30.299122 1092522 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 00:03:30.299214 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:03:30.302018 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:03:30.302379 1092522 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:03:30.302417 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:03:30.302645 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0328 00:03:30.302879 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:03:30.303022 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0328 00:03:30.303159 1092522 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0328 00:03:30.389123 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0328 00:03:30.389203 1092522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0328 00:03:30.420003 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0328 00:03:30.420080 1092522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 00:03:30.447779 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0328 00:03:30.447867 1092522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0328 00:03:30.476299 1092522 provision.go:87] duration metric: took 468.938476ms to configureAuth
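
copyRemoteCerts above pushes ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A minimal verification sketch, assuming the paths from the log (the SAN list for server.pem was given a few lines earlier as [127.0.0.1 192.168.39.47 ha-377576 localhost minikube]):

    # confirm the freshly copied server certificate is readable; print its subject and expiry
    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -enddate
    # confirm it was signed by the CA copied alongside it
    sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
    # with OpenSSL 1.1.1+, the requested SANs can be printed directly
    sudo openssl x509 -in /etc/docker/server.pem -noout -ext subjectAltName
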
	I0328 00:03:30.476332 1092522 buildroot.go:189] setting minikube options for container-runtime
	I0328 00:03:30.476641 1092522 config.go:182] Loaded profile config "ha-377576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:03:30.476742 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:03:30.479515 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:03:30.479929 1092522 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:03:30.479954 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:03:30.480139 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0328 00:03:30.480352 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:03:30.480560 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:03:30.480719 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0328 00:03:30.480890 1092522 main.go:141] libmachine: Using SSH client type: native
	I0328 00:03:30.481052 1092522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0328 00:03:30.481066 1092522 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 00:05:01.308940 1092522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 00:05:01.308990 1092522 machine.go:97] duration metric: took 1m31.651733633s to provisionDockerMachine
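
The SSH command above writes /etc/sysconfig/crio.minikube with the --insecure-registry option and then restarts CRI-O; the timestamps show this single step (00:03:30 to 00:05:01) accounts for nearly all of the 1m31.65s provisioning time. A hypothetical way to inspect the result and the slow restart from inside the guest:

    # the option file the provisioner just wrote
    cat /etc/sysconfig/crio.minikube
    # whether CRI-O came back healthy, and what it logged during the restart
    systemctl status crio --no-pager
    journalctl -u crio -b --no-pager | tail -n 50
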
	I0328 00:05:01.309005 1092522 start.go:293] postStartSetup for "ha-377576" (driver="kvm2")
	I0328 00:05:01.309018 1092522 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 00:05:01.309038 1092522 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0328 00:05:01.309445 1092522 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 00:05:01.309488 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:05:01.312671 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:05:01.313091 1092522 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:05:01.313118 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:05:01.313315 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0328 00:05:01.313571 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:05:01.313758 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0328 00:05:01.313905 1092522 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0328 00:05:01.399853 1092522 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 00:05:01.404548 1092522 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 00:05:01.404587 1092522 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 00:05:01.404679 1092522 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 00:05:01.404764 1092522 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 00:05:01.404777 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> /etc/ssl/certs/10765222.pem
	I0328 00:05:01.404856 1092522 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 00:05:01.415592 1092522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 00:05:01.441491 1092522 start.go:296] duration metric: took 132.470927ms for postStartSetup
	I0328 00:05:01.441543 1092522 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0328 00:05:01.441893 1092522 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0328 00:05:01.441919 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:05:01.444822 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:05:01.445155 1092522 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:05:01.445182 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:05:01.445365 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0328 00:05:01.445590 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:05:01.445768 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0328 00:05:01.445957 1092522 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	W0328 00:05:01.525601 1092522 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0328 00:05:01.525631 1092522 fix.go:56] duration metric: took 1m31.890767476s for fixHost
	I0328 00:05:01.525656 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:05:01.528474 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:05:01.529013 1092522 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:05:01.529042 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:05:01.529223 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0328 00:05:01.529480 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:05:01.529692 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:05:01.529831 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0328 00:05:01.530081 1092522 main.go:141] libmachine: Using SSH client type: native
	I0328 00:05:01.530345 1092522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I0328 00:05:01.530361 1092522 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 00:05:01.631506 1092522 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711584301.600399857
	
	I0328 00:05:01.631541 1092522 fix.go:216] guest clock: 1711584301.600399857
	I0328 00:05:01.631550 1092522 fix.go:229] Guest: 2024-03-28 00:05:01.600399857 +0000 UTC Remote: 2024-03-28 00:05:01.52563955 +0000 UTC m=+92.037580048 (delta=74.760307ms)
	I0328 00:05:01.631571 1092522 fix.go:200] guest clock delta is within tolerance: 74.760307ms
	I0328 00:05:01.631577 1092522 start.go:83] releasing machines lock for "ha-377576", held for 1m31.996760278s
	I0328 00:05:01.631596 1092522 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0328 00:05:01.631879 1092522 main.go:141] libmachine: (ha-377576) Calling .GetIP
	I0328 00:05:01.634584 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:05:01.634936 1092522 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:05:01.634981 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:05:01.635132 1092522 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0328 00:05:01.635765 1092522 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0328 00:05:01.635948 1092522 main.go:141] libmachine: (ha-377576) Calling .DriverName
	I0328 00:05:01.636028 1092522 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 00:05:01.636087 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:05:01.636210 1092522 ssh_runner.go:195] Run: cat /version.json
	I0328 00:05:01.636240 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHHostname
	I0328 00:05:01.639083 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:05:01.639282 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:05:01.639540 1092522 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:05:01.639570 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:05:01.639688 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0328 00:05:01.639759 1092522 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:05:01.639788 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:05:01.639882 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:05:01.639971 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHPort
	I0328 00:05:01.640049 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0328 00:05:01.640119 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHKeyPath
	I0328 00:05:01.640181 1092522 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0328 00:05:01.640285 1092522 main.go:141] libmachine: (ha-377576) Calling .GetSSHUsername
	I0328 00:05:01.640432 1092522 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/ha-377576/id_rsa Username:docker}
	I0328 00:05:01.745883 1092522 ssh_runner.go:195] Run: systemctl --version
	I0328 00:05:01.752699 1092522 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 00:05:01.921338 1092522 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 00:05:01.931719 1092522 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 00:05:01.931798 1092522 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 00:05:01.942440 1092522 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0328 00:05:01.942475 1092522 start.go:494] detecting cgroup driver to use...
	I0328 00:05:01.942575 1092522 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 00:05:01.959925 1092522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 00:05:01.974713 1092522 docker.go:217] disabling cri-docker service (if available) ...
	I0328 00:05:01.974787 1092522 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 00:05:01.989216 1092522 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 00:05:02.003588 1092522 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 00:05:02.150998 1092522 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 00:05:02.304022 1092522 docker.go:233] disabling docker service ...
	I0328 00:05:02.304116 1092522 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 00:05:02.322375 1092522 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 00:05:02.336513 1092522 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 00:05:02.491935 1092522 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 00:05:02.643787 1092522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 00:05:02.660880 1092522 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 00:05:02.684057 1092522 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 00:05:02.684141 1092522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:05:02.695423 1092522 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 00:05:02.695510 1092522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:05:02.706655 1092522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:05:02.718648 1092522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:05:02.729718 1092522 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 00:05:02.742748 1092522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:05:02.755294 1092522 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:05:02.769343 1092522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:05:02.781233 1092522 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 00:05:02.791796 1092522 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 00:05:02.801701 1092522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:05:02.954304 1092522 ssh_runner.go:195] Run: sudo systemctl restart crio
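
The sed commands above edit the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf in place before the restart: they pin the pause image, switch the cgroup manager to cgroupfs, set conmon_cgroup to "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A hedged sketch of checking that the drop-in ended up with those values (exact file contents may differ):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected, approximately:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
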
	I0328 00:05:03.266296 1092522 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 00:05:03.266376 1092522 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 00:05:03.272620 1092522 start.go:562] Will wait 60s for crictl version
	I0328 00:05:03.272702 1092522 ssh_runner.go:195] Run: which crictl
	I0328 00:05:03.277046 1092522 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 00:05:03.323295 1092522 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 00:05:03.323376 1092522 ssh_runner.go:195] Run: crio --version
	I0328 00:05:03.355016 1092522 ssh_runner.go:195] Run: crio --version
	I0328 00:05:03.387296 1092522 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0328 00:05:03.388556 1092522 main.go:141] libmachine: (ha-377576) Calling .GetIP
	I0328 00:05:03.391204 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:05:03.391541 1092522 main.go:141] libmachine: (ha-377576) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:48:13", ip: ""} in network mk-ha-377576: {Iface:virbr1 ExpiryTime:2024-03-28 00:52:31 +0000 UTC Type:0 Mac:52:54:00:9c:48:13 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-377576 Clientid:01:52:54:00:9c:48:13}
	I0328 00:05:03.391567 1092522 main.go:141] libmachine: (ha-377576) DBG | domain ha-377576 has defined IP address 192.168.39.47 and MAC address 52:54:00:9c:48:13 in network mk-ha-377576
	I0328 00:05:03.391858 1092522 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0328 00:05:03.397332 1092522 kubeadm.go:877] updating cluster {Name:ha-377576 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:ha-377576 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.93 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 00:05:03.397492 1092522 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 00:05:03.397537 1092522 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 00:05:03.440642 1092522 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 00:05:03.440668 1092522 crio.go:433] Images already preloaded, skipping extraction
	I0328 00:05:03.440722 1092522 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 00:05:03.480578 1092522 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 00:05:03.480612 1092522 cache_images.go:84] Images are preloaded, skipping loading
	I0328 00:05:03.480623 1092522 kubeadm.go:928] updating node { 192.168.39.47 8443 v1.29.3 crio true true} ...
	I0328 00:05:03.480741 1092522 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-377576 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-377576 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 00:05:03.480827 1092522 ssh_runner.go:195] Run: crio config
	I0328 00:05:03.552853 1092522 cni.go:84] Creating CNI manager for ""
	I0328 00:05:03.552878 1092522 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0328 00:05:03.552887 1092522 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 00:05:03.552910 1092522 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.47 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-377576 NodeName:ha-377576 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 00:05:03.553113 1092522 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.47
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-377576"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.47
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.47"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
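
This is the full kubeadm configuration minikube renders for the node; further down the log it is written to /var/tmp/minikube/kubeadm.yaml.new (see the scp at 00:05:03). If the bundled kubeadm supports the `config validate` subcommand (present in recent releases), the file can be sanity-checked by hand; the binary path below is an assumption, placing kubeadm next to the kubelet in minikube's binaries directory:

    # hypothetical manual check of the rendered kubeadm config
    sudo /var/lib/minikube/binaries/v1.29.3/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
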
	
	I0328 00:05:03.553139 1092522 kube-vip.go:111] generating kube-vip config ...
	I0328 00:05:03.553196 1092522 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0328 00:05:03.624367 1092522 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0328 00:05:03.624496 1092522 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
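
The kube-vip static pod defined above provides the HA virtual IP 192.168.39.254 on eth0, using ARP plus Kubernetes leader election (lease plndr-cp-lock) so that exactly one control-plane node answers for the VIP; lb_enable additionally load-balances API-server traffic on port 8443. A hypothetical way to see which node currently owns the VIP, using only names taken from the config above:

    # the current leader is recorded as holderIdentity in the kube-vip Lease
    kubectl -n kube-system get lease plndr-cp-lock -o yaml
    # on that node, the VIP from the config should be bound to eth0
    ip addr show eth0 | grep 192.168.39.254
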
	I0328 00:05:03.624566 1092522 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 00:05:03.644586 1092522 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 00:05:03.644669 1092522 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0328 00:05:03.675726 1092522 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0328 00:05:03.707948 1092522 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 00:05:03.760121 1092522 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0328 00:05:03.800002 1092522 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0328 00:05:03.844647 1092522 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0328 00:05:03.850773 1092522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:05:04.097656 1092522 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 00:05:04.142285 1092522 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576 for IP: 192.168.39.47
	I0328 00:05:04.142325 1092522 certs.go:194] generating shared ca certs ...
	I0328 00:05:04.142348 1092522 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:05:04.142607 1092522 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 00:05:04.142659 1092522 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 00:05:04.142671 1092522 certs.go:256] generating profile certs ...
	I0328 00:05:04.142749 1092522 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/client.key
	I0328 00:05:04.142785 1092522 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.d6fe7f95
	I0328 00:05:04.142809 1092522 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.d6fe7f95 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.47 192.168.39.117 192.168.39.101 192.168.39.254]
	I0328 00:05:04.273379 1092522 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.d6fe7f95 ...
	I0328 00:05:04.273417 1092522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.d6fe7f95: {Name:mkf04883c4cf2d81860f4e10e8346d686986085a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:05:04.273613 1092522 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.d6fe7f95 ...
	I0328 00:05:04.273632 1092522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.d6fe7f95: {Name:mkf90c22de3adc8e09b81aa5db0c365e0f956b11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:05:04.273700 1092522 certs.go:381] copying /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt.d6fe7f95 -> /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt
	I0328 00:05:04.273841 1092522 certs.go:385] copying /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key.d6fe7f95 -> /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key
	I0328 00:05:04.273970 1092522 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.key
	I0328 00:05:04.273989 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0328 00:05:04.274000 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0328 00:05:04.274013 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0328 00:05:04.274025 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0328 00:05:04.274038 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0328 00:05:04.274048 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0328 00:05:04.274057 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0328 00:05:04.274067 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0328 00:05:04.274117 1092522 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 00:05:04.274150 1092522 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 00:05:04.274159 1092522 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 00:05:04.274179 1092522 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 00:05:04.274201 1092522 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 00:05:04.274222 1092522 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 00:05:04.274272 1092522 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 00:05:04.274298 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:05:04.274319 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem -> /usr/share/ca-certificates/1076522.pem
	I0328 00:05:04.274332 1092522 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> /usr/share/ca-certificates/10765222.pem
	I0328 00:05:04.275055 1092522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 00:05:04.303647 1092522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 00:05:04.329152 1092522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 00:05:04.354122 1092522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 00:05:04.379663 1092522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0328 00:05:04.406009 1092522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 00:05:04.430786 1092522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 00:05:04.456007 1092522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/ha-377576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 00:05:04.483635 1092522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 00:05:04.509694 1092522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 00:05:04.534853 1092522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 00:05:04.575696 1092522 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 00:05:04.593914 1092522 ssh_runner.go:195] Run: openssl version
	I0328 00:05:04.600325 1092522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 00:05:04.612381 1092522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:05:04.617241 1092522 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:05:04.617311 1092522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:05:04.623256 1092522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 00:05:04.634251 1092522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 00:05:04.647505 1092522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 00:05:04.652611 1092522 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 00:05:04.652690 1092522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 00:05:04.659140 1092522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 00:05:04.671609 1092522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 00:05:04.684796 1092522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 00:05:04.690021 1092522 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 00:05:04.690135 1092522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 00:05:04.696724 1092522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
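
The three install steps above follow the standard OpenSSL CA-directory convention: each certificate placed under /usr/share/ca-certificates is also linked as <subject-hash>.0 in /etc/ssl/certs, where the hash is what `openssl x509 -hash` prints. A small sketch showing how the b5213941.0 link name used for minikubeCA.pem is derived:

    # prints the subject hash (b5213941 for this CA), which becomes the link name b5213941.0
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    ls -l /etc/ssl/certs/b5213941.0
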
	I0328 00:05:04.708828 1092522 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 00:05:04.714086 1092522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 00:05:04.720794 1092522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 00:05:04.727243 1092522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 00:05:04.733870 1092522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 00:05:04.741195 1092522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 00:05:04.747571 1092522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
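
Each of the `-checkend 86400` runs above asks OpenSSL to exit non-zero if the certificate expires within the next 86400 seconds (24 hours); minikube uses this to decide whether the existing control-plane certificates can be reused rather than regenerated. The same check run by hand (any of the paths from the log will do):

    # prints "Certificate will not expire" and exits 0 when the cert is valid for at least another 24h
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "still valid for 24h" || echo "expires within 24h"
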
	I0328 00:05:04.753642 1092522 kubeadm.go:391] StartCluster: {Name:ha-377576 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clust
erName:ha-377576 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.47 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.117 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.93 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:05:04.753790 1092522 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 00:05:04.753855 1092522 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 00:05:04.799029 1092522 cri.go:89] found id: "e280dd2cc82d6fb9730125bb5cc514d7b33be2a3a03727e5adeb43a7f1399eb8"
	I0328 00:05:04.799062 1092522 cri.go:89] found id: "0a2fd3dc48780368d474066f78db57cea4362acc353f7604458e5ad6b3d1c6bb"
	I0328 00:05:04.799067 1092522 cri.go:89] found id: "153e7ff305a05cd7c6257c6dd77ef4d3cf09a9a1759ca7bbcd492a381e5fff6e"
	I0328 00:05:04.799071 1092522 cri.go:89] found id: "c1d4cd43b2dc79a102752a811349247b046bc6478161c113c0d2b9a9741e4aab"
	I0328 00:05:04.799074 1092522 cri.go:89] found id: "3bc1caf41cc2a4eece146f29899d95e195dd1cdeea37643ae3d3b2804d15af7e"
	I0328 00:05:04.799077 1092522 cri.go:89] found id: "1285bba92deaf6fc58b611f235178ae99f08f9474c30ca6b904d51aa1da9f40f"
	I0328 00:05:04.799080 1092522 cri.go:89] found id: "42dcabde2aec964660ef004661b1aca7c5fb8ef5bed0007775f67b975b44adfa"
	I0328 00:05:04.799082 1092522 cri.go:89] found id: "1d5198968b76968a091c33806ec424495c03395675b20e7eb35e330d16217211"
	I0328 00:05:04.799084 1092522 cri.go:89] found id: "ed9a38e9f6cd98054d1e672c652e36e6262ceb6eb2a3e14911a5178d222135e7"
	I0328 00:05:04.799091 1092522 cri.go:89] found id: "381348b1458cea236fc315e0a9a42d269c69969b162efaa25de894ac4284ba88"
	I0328 00:05:04.799094 1092522 cri.go:89] found id: "a226f01452a72cf6e9a608450715a12f6663ccf2697e8259f809e966809978ce"
	I0328 00:05:04.799098 1092522 cri.go:89] found id: "f28af42c6db4a0efe0547d4442478084f4054bb5d4d47038a8a7f727ec1044df"
	I0328 00:05:04.799103 1092522 cri.go:89] found id: "22d460b8d6582d93d5633e1e1af46683647a5632f6b9153f61a6c374dca4f34c"
	I0328 00:05:04.799107 1092522 cri.go:89] found id: "a0128cd878ebdae2c6e217413183a103417714ffe00d55c1c92d1353e38238fa"
	I0328 00:05:04.799113 1092522 cri.go:89] found id: "5f113e7564c47f0e2aa57dbb0702acbf86b1d75d00e9210d72d606a1b0505e5b"
	I0328 00:05:04.799117 1092522 cri.go:89] found id: "afbf14c176818ba796eacd41f1600b5f09075c6a2b4f1edcad903530221f76ff"
	I0328 00:05:04.799121 1092522 cri.go:89] found id: ""
	I0328 00:05:04.799182 1092522 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 28 00:10:42 ha-377576 crio[3893]: time="2024-03-28 00:10:42.971461426Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a804626-49cb-47cb-83d6-32f609799a61 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:10:42 ha-377576 crio[3893]: time="2024-03-28 00:10:42.971938106Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:446878900fc2ff3c1baac3e23199c7573f58770442bddf2548e3ccbfa9d3b300,PodSandboxId:d6ee7152bab39abb61e4c40513bcbf1c1719037be0b22e184bd532002df38257,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711584387693883636,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zmtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e75cdc5-22da-47f2-9833-b2f4eaa9caac,},Annotations:map[string]string{io.kubernetes.container.hash: 6486d0c8,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1639d05b99a2d4c643e2a8925ba47e02dea0aecef0e22871c3d3ef765cb08394,PodSandboxId:904097cb5f15281f081d3f374e7a82a4e22c19468f949bb1d8dd5110b05fbf0c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711584384689393202,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9000645c-8323-43af-bd87-011d1574493c,},Annotations:map[string]string{io.kubernetes.container.hash: 43103468,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7618cf90b394f83cf98ba25b6a878b77ac0c7bacb6fa65a7cf7ddaae3d859976,PodSandboxId:cb101e6a739d939c7192d22a06ebbaf222b63e1b6c3d7b3a9e82ddaa67bbffc1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711584350713245285,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d18f050adf42c0d971a9903270a7b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2930b0267019902b8b9ce7cd18b907f89d409676028d92de1dc551850d78f276,PodSandboxId:997d89c34a5f31417acabe4367ffd068159f7cd6447ab1e4c7b0c7caeb3d7a93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711584348688612607,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6490cbc40210ad634becf13ac3a1705,},Annotations:map[string]string{io.kubernetes.container.hash: bc2c0f48,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5298cacdf731fecbce32ad2d9f328670947c3e2bc41da8560a41746efa183376,PodSandboxId:6eafda94672d564fe253dfd43b5bde346da3dfa6d5efeb98a979e953c11b959d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711584343983600963,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-78c89,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3272474d-5490-4c7c-9dfe-ded8488ec32f,},Annotations:map[string]string{io.kubernetes.container.hash: f4bc3217,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d9ac0997d255850bd452ddf61795477c05b16d5a1c77900748c11ef0e86ad8a,PodSandboxId:904097cb5f15281f081d3f374e7a82a4e22c19468f949bb1d8dd5110b05fbf0c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711584336688854876,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9000645c-8323-43af-bd87-011d1574493c,},Annotations:map[string]string{io.kubernetes.container.hash: 43103468,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5116a84093b94656654ec8266161d1bc13b32d0e34905ba6962ac6769dc5c6da,PodSandboxId:0b075b5717795f3c07856e1151edd79045a7fe74683a224f07536c576926d280,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711584326540796553,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d35f096ae88a36bb3ae6fa7f31554e39,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:f043217131ac4124beae3e8ebcbab4be1eebd60b6b040475b8d40b6951c8837f,PodSandboxId:657c51c3b5e7d8e9d3e371ead69b9b6e2d781ebd27d42dfdb2641e1a6e236c06,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711584310671441561,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t77p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27eff0c9-9b45-4530-aba9-1a5e0ca60802,},Annotations:map[string]string{io.kubernetes.container.hash: d0985273,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb34785
03fe928ca51e348aec8e5aa7eebe7cfdc8b7226d85976590d73be90dd,PodSandboxId:d6ee7152bab39abb61e4c40513bcbf1c1719037be0b22e184bd532002df38257,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711584310871480282,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zmtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e75cdc5-22da-47f2-9833-b2f4eaa9caac,},Annotations:map[string]string{io.kubernetes.container.hash: 6486d0c8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d05a3cee8583ee0f268ce6a367e7e700f8f87f1d9
6c985228250245d5f5a258,PodSandboxId:abd9c1ff4a2863311efb200d0e3b601a5035722036d7c765e297bc887033d393,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711584310538629450,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab33d6840338638cbdcd9ebe5fdd4d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7359b186,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3bb93ac0ec38b966779f4260f9506e31f38cc6702ce1751f61b06271e40fcb3,PodSandboxId:cb101e6a739d939
c7192d22a06ebbaf222b63e1b6c3d7b3a9e82ddaa67bbffc1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711584310528973873,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d18f050adf42c0d971a9903270a7b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e7e0f2dabc114fd527652f912485cbd4476203fe6c63338ed46931f28df715,PodSandboxId:997d89c34a
5f31417acabe4367ffd068159f7cd6447ab1e4c7b0c7caeb3d7a93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711584310410197624,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6490cbc40210ad634becf13ac3a1705,},Annotations:map[string]string{io.kubernetes.container.hash: bc2c0f48,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5f23730141a94b0d7db08dffd1dd111243d4d4c6ee706bec92a8ea3c1872258,PodSandboxId:ee57ddbaf2fe4d35bb46bd35e54a00b9
4b130707708a49beace86386b10fe913,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711584310358266833,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9eaf884653411ba1f22eb4cdbdfa748,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a2fd3dc48780368d474066f78db57cea4362acc353f7604458e5ad6b3d1c6bb,PodSandboxId:8519de872cf9776787e79d23757b0be34bc8c680161a0665bf6b6
cb54b3bf07f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711584303839080092,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-msv9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c549358-2f35-4345-aa7a-8bbbcfc4ef01,},Annotations:map[string]string{io.kubernetes.container.hash: 76b7b3af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e280dd2cc82d6fb9730125bb5cc514d7b33be2a3a03727e5adeb43a7f1399eb8,PodSandboxId:e7de52be4110de636969e1b12c92959bebb4213ef1e41dcc9a17ddae078e2f6e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711584303845663995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-47npx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968d63e4-f44a-4e52-b6c0-04e0ed1a068e,},Annotations:map[string]string{io.kubernetes.container.hash: d58e1b37,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc41f34db32bf53a65c6af8f9f9eae933fb349e5555060707ec7131ab3a7b835,PodSandboxId:d8bf33d99bda1de3168ad1efb04ca03a74a918d2ca74b585a7fcdb3a108d229e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711583818896885728,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-78c89,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3272474d-5490-4c7c-9dfe-ded8488ec32f,},Annotations:map[string]string{io.kuber
netes.container.hash: f4bc3217,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5198968b76968a091c33806ec424495c03395675b20e7eb35e330d16217211,PodSandboxId:78b0408435c31eb2c100690d28760dd80d92d11a1204f91418812c69660e79a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711583597995455547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-47npx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968d63e4-f44a-4e52-b6c0-04e0ed1a068e,},Annotations:map[string]string{io.kubernetes.container.hash: d58e1b37,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed9a38e9f6cd98054d1e672c652e36e6262ceb6eb2a3e14911a5178d222135e7,PodSandboxId:906a95ca7b9309653761922034b8a2dfca231a15f3b955bb1c166ebd569149c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711583597982972322,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-msv9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c549358-2f35-4345-aa7a-8bbbcfc4ef01,},Annotations:map[string]string{io.kubernetes.container.hash: 76b7b3af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a226f01452a72cf6e9a608450715a12f6663ccf2697e8259f809e966809978ce,PodSandboxId:3f1239e30a953ae16c917e002c235d64ba2b2b2f9165e939c99bda7578eae785,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711583595702331437,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t77p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27eff0c9-9b45-4530-aba9-1a5e0ca60802,},Annotations:map[string]string{io.kubernetes.container.hash: d0985273,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0128cd878ebdae2c6e217413183a103417714ffe00d55c1c92d1353e38238fa,PodSandboxId:bbb9d168e952fe414d641f2ec6a7e1e63a04205d4e2aae1c6ad9a2756c6443ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83
d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711583575850415387,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab33d6840338638cbdcd9ebe5fdd4d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7359b186,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afbf14c176818ba796eacd41f1600b5f09075c6a2b4f1edcad903530221f76ff,PodSandboxId:b75106f2dccc70e077dfb09e12c05d0942a02fa2c4894a1bd68ec76ca144eed7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,Create
dAt:1711583575810399508,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9eaf884653411ba1f22eb4cdbdfa748,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6a804626-49cb-47cb-83d6-32f609799a61 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:10:43 ha-377576 crio[3893]: time="2024-03-28 00:10:43.019322479Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5761e194-6c22-4c4b-a5dd-ccd3c20a5e33 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:10:43 ha-377576 crio[3893]: time="2024-03-28 00:10:43.019414809Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5761e194-6c22-4c4b-a5dd-ccd3c20a5e33 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:10:43 ha-377576 crio[3893]: time="2024-03-28 00:10:43.020871606Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d9b1d3ec-15ef-4cd9-99fc-63b7eb111909 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:10:43 ha-377576 crio[3893]: time="2024-03-28 00:10:43.021305904Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711584643021279744,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d9b1d3ec-15ef-4cd9-99fc-63b7eb111909 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:10:43 ha-377576 crio[3893]: time="2024-03-28 00:10:43.021994536Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=08b8780a-143c-4eca-a15e-471884c07da5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:10:43 ha-377576 crio[3893]: time="2024-03-28 00:10:43.022073371Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=08b8780a-143c-4eca-a15e-471884c07da5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:10:43 ha-377576 crio[3893]: time="2024-03-28 00:10:43.025715889Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:446878900fc2ff3c1baac3e23199c7573f58770442bddf2548e3ccbfa9d3b300,PodSandboxId:d6ee7152bab39abb61e4c40513bcbf1c1719037be0b22e184bd532002df38257,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711584387693883636,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zmtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e75cdc5-22da-47f2-9833-b2f4eaa9caac,},Annotations:map[string]string{io.kubernetes.container.hash: 6486d0c8,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1639d05b99a2d4c643e2a8925ba47e02dea0aecef0e22871c3d3ef765cb08394,PodSandboxId:904097cb5f15281f081d3f374e7a82a4e22c19468f949bb1d8dd5110b05fbf0c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711584384689393202,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9000645c-8323-43af-bd87-011d1574493c,},Annotations:map[string]string{io.kubernetes.container.hash: 43103468,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7618cf90b394f83cf98ba25b6a878b77ac0c7bacb6fa65a7cf7ddaae3d859976,PodSandboxId:cb101e6a739d939c7192d22a06ebbaf222b63e1b6c3d7b3a9e82ddaa67bbffc1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711584350713245285,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d18f050adf42c0d971a9903270a7b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2930b0267019902b8b9ce7cd18b907f89d409676028d92de1dc551850d78f276,PodSandboxId:997d89c34a5f31417acabe4367ffd068159f7cd6447ab1e4c7b0c7caeb3d7a93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711584348688612607,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6490cbc40210ad634becf13ac3a1705,},Annotations:map[string]string{io.kubernetes.container.hash: bc2c0f48,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5298cacdf731fecbce32ad2d9f328670947c3e2bc41da8560a41746efa183376,PodSandboxId:6eafda94672d564fe253dfd43b5bde346da3dfa6d5efeb98a979e953c11b959d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711584343983600963,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-78c89,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3272474d-5490-4c7c-9dfe-ded8488ec32f,},Annotations:map[string]string{io.kubernetes.container.hash: f4bc3217,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d9ac0997d255850bd452ddf61795477c05b16d5a1c77900748c11ef0e86ad8a,PodSandboxId:904097cb5f15281f081d3f374e7a82a4e22c19468f949bb1d8dd5110b05fbf0c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711584336688854876,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9000645c-8323-43af-bd87-011d1574493c,},Annotations:map[string]string{io.kubernetes.container.hash: 43103468,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5116a84093b94656654ec8266161d1bc13b32d0e34905ba6962ac6769dc5c6da,PodSandboxId:0b075b5717795f3c07856e1151edd79045a7fe74683a224f07536c576926d280,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711584326540796553,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d35f096ae88a36bb3ae6fa7f31554e39,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:f043217131ac4124beae3e8ebcbab4be1eebd60b6b040475b8d40b6951c8837f,PodSandboxId:657c51c3b5e7d8e9d3e371ead69b9b6e2d781ebd27d42dfdb2641e1a6e236c06,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711584310671441561,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t77p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27eff0c9-9b45-4530-aba9-1a5e0ca60802,},Annotations:map[string]string{io.kubernetes.container.hash: d0985273,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb34785
03fe928ca51e348aec8e5aa7eebe7cfdc8b7226d85976590d73be90dd,PodSandboxId:d6ee7152bab39abb61e4c40513bcbf1c1719037be0b22e184bd532002df38257,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711584310871480282,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zmtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e75cdc5-22da-47f2-9833-b2f4eaa9caac,},Annotations:map[string]string{io.kubernetes.container.hash: 6486d0c8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d05a3cee8583ee0f268ce6a367e7e700f8f87f1d9
6c985228250245d5f5a258,PodSandboxId:abd9c1ff4a2863311efb200d0e3b601a5035722036d7c765e297bc887033d393,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711584310538629450,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab33d6840338638cbdcd9ebe5fdd4d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7359b186,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3bb93ac0ec38b966779f4260f9506e31f38cc6702ce1751f61b06271e40fcb3,PodSandboxId:cb101e6a739d939
c7192d22a06ebbaf222b63e1b6c3d7b3a9e82ddaa67bbffc1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711584310528973873,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d18f050adf42c0d971a9903270a7b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e7e0f2dabc114fd527652f912485cbd4476203fe6c63338ed46931f28df715,PodSandboxId:997d89c34a
5f31417acabe4367ffd068159f7cd6447ab1e4c7b0c7caeb3d7a93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711584310410197624,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6490cbc40210ad634becf13ac3a1705,},Annotations:map[string]string{io.kubernetes.container.hash: bc2c0f48,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5f23730141a94b0d7db08dffd1dd111243d4d4c6ee706bec92a8ea3c1872258,PodSandboxId:ee57ddbaf2fe4d35bb46bd35e54a00b9
4b130707708a49beace86386b10fe913,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711584310358266833,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9eaf884653411ba1f22eb4cdbdfa748,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a2fd3dc48780368d474066f78db57cea4362acc353f7604458e5ad6b3d1c6bb,PodSandboxId:8519de872cf9776787e79d23757b0be34bc8c680161a0665bf6b6
cb54b3bf07f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711584303839080092,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-msv9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c549358-2f35-4345-aa7a-8bbbcfc4ef01,},Annotations:map[string]string{io.kubernetes.container.hash: 76b7b3af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e280dd2cc82d6fb9730125bb5cc514d7b33be2a3a03727e5adeb43a7f1399eb8,PodSandboxId:e7de52be4110de636969e1b12c92959bebb4213ef1e41dcc9a17ddae078e2f6e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711584303845663995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-47npx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968d63e4-f44a-4e52-b6c0-04e0ed1a068e,},Annotations:map[string]string{io.kubernetes.container.hash: d58e1b37,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc41f34db32bf53a65c6af8f9f9eae933fb349e5555060707ec7131ab3a7b835,PodSandboxId:d8bf33d99bda1de3168ad1efb04ca03a74a918d2ca74b585a7fcdb3a108d229e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711583818896885728,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-78c89,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3272474d-5490-4c7c-9dfe-ded8488ec32f,},Annotations:map[string]string{io.kuber
netes.container.hash: f4bc3217,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5198968b76968a091c33806ec424495c03395675b20e7eb35e330d16217211,PodSandboxId:78b0408435c31eb2c100690d28760dd80d92d11a1204f91418812c69660e79a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711583597995455547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-47npx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968d63e4-f44a-4e52-b6c0-04e0ed1a068e,},Annotations:map[string]string{io.kubernetes.container.hash: d58e1b37,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed9a38e9f6cd98054d1e672c652e36e6262ceb6eb2a3e14911a5178d222135e7,PodSandboxId:906a95ca7b9309653761922034b8a2dfca231a15f3b955bb1c166ebd569149c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711583597982972322,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-msv9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c549358-2f35-4345-aa7a-8bbbcfc4ef01,},Annotations:map[string]string{io.kubernetes.container.hash: 76b7b3af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a226f01452a72cf6e9a608450715a12f6663ccf2697e8259f809e966809978ce,PodSandboxId:3f1239e30a953ae16c917e002c235d64ba2b2b2f9165e939c99bda7578eae785,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711583595702331437,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t77p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27eff0c9-9b45-4530-aba9-1a5e0ca60802,},Annotations:map[string]string{io.kubernetes.container.hash: d0985273,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0128cd878ebdae2c6e217413183a103417714ffe00d55c1c92d1353e38238fa,PodSandboxId:bbb9d168e952fe414d641f2ec6a7e1e63a04205d4e2aae1c6ad9a2756c6443ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83
d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711583575850415387,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab33d6840338638cbdcd9ebe5fdd4d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7359b186,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afbf14c176818ba796eacd41f1600b5f09075c6a2b4f1edcad903530221f76ff,PodSandboxId:b75106f2dccc70e077dfb09e12c05d0942a02fa2c4894a1bd68ec76ca144eed7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,Create
dAt:1711583575810399508,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9eaf884653411ba1f22eb4cdbdfa748,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=08b8780a-143c-4eca-a15e-471884c07da5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:10:43 ha-377576 crio[3893]: time="2024-03-28 00:10:43.068809326Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=883a5ff3-8342-4a9c-b924-c46d471b0643 name=/runtime.v1.RuntimeService/Status
	Mar 28 00:10:43 ha-377576 crio[3893]: time="2024-03-28 00:10:43.068918884Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=883a5ff3-8342-4a9c-b924-c46d471b0643 name=/runtime.v1.RuntimeService/Status
	Mar 28 00:10:43 ha-377576 crio[3893]: time="2024-03-28 00:10:43.076019059Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=31f2bdd8-bd38-4327-974a-cbcf25a7fe93 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:10:43 ha-377576 crio[3893]: time="2024-03-28 00:10:43.076115932Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=31f2bdd8-bd38-4327-974a-cbcf25a7fe93 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:10:43 ha-377576 crio[3893]: time="2024-03-28 00:10:43.077852062Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3648b14d-46c3-4ae8-85c7-2bfb0d9308f4 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:10:43 ha-377576 crio[3893]: time="2024-03-28 00:10:43.078273573Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711584643078250903,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3648b14d-46c3-4ae8-85c7-2bfb0d9308f4 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:10:43 ha-377576 crio[3893]: time="2024-03-28 00:10:43.079052925Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e879f87-b40c-4e76-933e-a4fd4bd6826e name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:10:43 ha-377576 crio[3893]: time="2024-03-28 00:10:43.079130176Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e879f87-b40c-4e76-933e-a4fd4bd6826e name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:10:43 ha-377576 crio[3893]: time="2024-03-28 00:10:43.079791073Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:446878900fc2ff3c1baac3e23199c7573f58770442bddf2548e3ccbfa9d3b300,PodSandboxId:d6ee7152bab39abb61e4c40513bcbf1c1719037be0b22e184bd532002df38257,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711584387693883636,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zmtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e75cdc5-22da-47f2-9833-b2f4eaa9caac,},Annotations:map[string]string{io.kubernetes.container.hash: 6486d0c8,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1639d05b99a2d4c643e2a8925ba47e02dea0aecef0e22871c3d3ef765cb08394,PodSandboxId:904097cb5f15281f081d3f374e7a82a4e22c19468f949bb1d8dd5110b05fbf0c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711584384689393202,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9000645c-8323-43af-bd87-011d1574493c,},Annotations:map[string]string{io.kubernetes.container.hash: 43103468,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7618cf90b394f83cf98ba25b6a878b77ac0c7bacb6fa65a7cf7ddaae3d859976,PodSandboxId:cb101e6a739d939c7192d22a06ebbaf222b63e1b6c3d7b3a9e82ddaa67bbffc1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711584350713245285,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d18f050adf42c0d971a9903270a7b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2930b0267019902b8b9ce7cd18b907f89d409676028d92de1dc551850d78f276,PodSandboxId:997d89c34a5f31417acabe4367ffd068159f7cd6447ab1e4c7b0c7caeb3d7a93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711584348688612607,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6490cbc40210ad634becf13ac3a1705,},Annotations:map[string]string{io.kubernetes.container.hash: bc2c0f48,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5298cacdf731fecbce32ad2d9f328670947c3e2bc41da8560a41746efa183376,PodSandboxId:6eafda94672d564fe253dfd43b5bde346da3dfa6d5efeb98a979e953c11b959d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711584343983600963,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-78c89,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3272474d-5490-4c7c-9dfe-ded8488ec32f,},Annotations:map[string]string{io.kubernetes.container.hash: f4bc3217,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d9ac0997d255850bd452ddf61795477c05b16d5a1c77900748c11ef0e86ad8a,PodSandboxId:904097cb5f15281f081d3f374e7a82a4e22c19468f949bb1d8dd5110b05fbf0c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711584336688854876,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9000645c-8323-43af-bd87-011d1574493c,},Annotations:map[string]string{io.kubernetes.container.hash: 43103468,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5116a84093b94656654ec8266161d1bc13b32d0e34905ba6962ac6769dc5c6da,PodSandboxId:0b075b5717795f3c07856e1151edd79045a7fe74683a224f07536c576926d280,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711584326540796553,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d35f096ae88a36bb3ae6fa7f31554e39,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:f043217131ac4124beae3e8ebcbab4be1eebd60b6b040475b8d40b6951c8837f,PodSandboxId:657c51c3b5e7d8e9d3e371ead69b9b6e2d781ebd27d42dfdb2641e1a6e236c06,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711584310671441561,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t77p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27eff0c9-9b45-4530-aba9-1a5e0ca60802,},Annotations:map[string]string{io.kubernetes.container.hash: d0985273,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb34785
03fe928ca51e348aec8e5aa7eebe7cfdc8b7226d85976590d73be90dd,PodSandboxId:d6ee7152bab39abb61e4c40513bcbf1c1719037be0b22e184bd532002df38257,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711584310871480282,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zmtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e75cdc5-22da-47f2-9833-b2f4eaa9caac,},Annotations:map[string]string{io.kubernetes.container.hash: 6486d0c8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d05a3cee8583ee0f268ce6a367e7e700f8f87f1d9
6c985228250245d5f5a258,PodSandboxId:abd9c1ff4a2863311efb200d0e3b601a5035722036d7c765e297bc887033d393,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711584310538629450,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab33d6840338638cbdcd9ebe5fdd4d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7359b186,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3bb93ac0ec38b966779f4260f9506e31f38cc6702ce1751f61b06271e40fcb3,PodSandboxId:cb101e6a739d939
c7192d22a06ebbaf222b63e1b6c3d7b3a9e82ddaa67bbffc1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711584310528973873,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d18f050adf42c0d971a9903270a7b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e7e0f2dabc114fd527652f912485cbd4476203fe6c63338ed46931f28df715,PodSandboxId:997d89c34a
5f31417acabe4367ffd068159f7cd6447ab1e4c7b0c7caeb3d7a93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711584310410197624,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6490cbc40210ad634becf13ac3a1705,},Annotations:map[string]string{io.kubernetes.container.hash: bc2c0f48,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5f23730141a94b0d7db08dffd1dd111243d4d4c6ee706bec92a8ea3c1872258,PodSandboxId:ee57ddbaf2fe4d35bb46bd35e54a00b9
4b130707708a49beace86386b10fe913,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711584310358266833,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9eaf884653411ba1f22eb4cdbdfa748,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a2fd3dc48780368d474066f78db57cea4362acc353f7604458e5ad6b3d1c6bb,PodSandboxId:8519de872cf9776787e79d23757b0be34bc8c680161a0665bf6b6
cb54b3bf07f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711584303839080092,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-msv9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c549358-2f35-4345-aa7a-8bbbcfc4ef01,},Annotations:map[string]string{io.kubernetes.container.hash: 76b7b3af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e280dd2cc82d6fb9730125bb5cc514d7b33be2a3a03727e5adeb43a7f1399eb8,PodSandboxId:e7de52be4110de636969e1b12c92959bebb4213ef1e41dcc9a17ddae078e2f6e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711584303845663995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-47npx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968d63e4-f44a-4e52-b6c0-04e0ed1a068e,},Annotations:map[string]string{io.kubernetes.container.hash: d58e1b37,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc41f34db32bf53a65c6af8f9f9eae933fb349e5555060707ec7131ab3a7b835,PodSandboxId:d8bf33d99bda1de3168ad1efb04ca03a74a918d2ca74b585a7fcdb3a108d229e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711583818896885728,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-78c89,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3272474d-5490-4c7c-9dfe-ded8488ec32f,},Annotations:map[string]string{io.kuber
netes.container.hash: f4bc3217,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5198968b76968a091c33806ec424495c03395675b20e7eb35e330d16217211,PodSandboxId:78b0408435c31eb2c100690d28760dd80d92d11a1204f91418812c69660e79a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711583597995455547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-47npx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968d63e4-f44a-4e52-b6c0-04e0ed1a068e,},Annotations:map[string]string{io.kubernetes.container.hash: d58e1b37,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed9a38e9f6cd98054d1e672c652e36e6262ceb6eb2a3e14911a5178d222135e7,PodSandboxId:906a95ca7b9309653761922034b8a2dfca231a15f3b955bb1c166ebd569149c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711583597982972322,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-msv9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c549358-2f35-4345-aa7a-8bbbcfc4ef01,},Annotations:map[string]string{io.kubernetes.container.hash: 76b7b3af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a226f01452a72cf6e9a608450715a12f6663ccf2697e8259f809e966809978ce,PodSandboxId:3f1239e30a953ae16c917e002c235d64ba2b2b2f9165e939c99bda7578eae785,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711583595702331437,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t77p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27eff0c9-9b45-4530-aba9-1a5e0ca60802,},Annotations:map[string]string{io.kubernetes.container.hash: d0985273,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0128cd878ebdae2c6e217413183a103417714ffe00d55c1c92d1353e38238fa,PodSandboxId:bbb9d168e952fe414d641f2ec6a7e1e63a04205d4e2aae1c6ad9a2756c6443ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83
d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711583575850415387,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab33d6840338638cbdcd9ebe5fdd4d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7359b186,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afbf14c176818ba796eacd41f1600b5f09075c6a2b4f1edcad903530221f76ff,PodSandboxId:b75106f2dccc70e077dfb09e12c05d0942a02fa2c4894a1bd68ec76ca144eed7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,Create
dAt:1711583575810399508,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9eaf884653411ba1f22eb4cdbdfa748,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4e879f87-b40c-4e76-933e-a4fd4bd6826e name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:10:43 ha-377576 crio[3893]: time="2024-03-28 00:10:43.129765619Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=57158732-8b29-4440-8fd5-fe5f8603e5c5 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:10:43 ha-377576 crio[3893]: time="2024-03-28 00:10:43.129875398Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=57158732-8b29-4440-8fd5-fe5f8603e5c5 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:10:43 ha-377576 crio[3893]: time="2024-03-28 00:10:43.131582821Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c88e3d3b-940f-481b-b0bd-5f9e6377dbc1 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:10:43 ha-377576 crio[3893]: time="2024-03-28 00:10:43.132069005Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711584643132042995,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c88e3d3b-940f-481b-b0bd-5f9e6377dbc1 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:10:43 ha-377576 crio[3893]: time="2024-03-28 00:10:43.133021121Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c302f65b-a65a-4f64-9a46-b02eebcaf485 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:10:43 ha-377576 crio[3893]: time="2024-03-28 00:10:43.133084611Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c302f65b-a65a-4f64-9a46-b02eebcaf485 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:10:43 ha-377576 crio[3893]: time="2024-03-28 00:10:43.133560192Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:446878900fc2ff3c1baac3e23199c7573f58770442bddf2548e3ccbfa9d3b300,PodSandboxId:d6ee7152bab39abb61e4c40513bcbf1c1719037be0b22e184bd532002df38257,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711584387693883636,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zmtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e75cdc5-22da-47f2-9833-b2f4eaa9caac,},Annotations:map[string]string{io.kubernetes.container.hash: 6486d0c8,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1639d05b99a2d4c643e2a8925ba47e02dea0aecef0e22871c3d3ef765cb08394,PodSandboxId:904097cb5f15281f081d3f374e7a82a4e22c19468f949bb1d8dd5110b05fbf0c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711584384689393202,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9000645c-8323-43af-bd87-011d1574493c,},Annotations:map[string]string{io.kubernetes.container.hash: 43103468,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7618cf90b394f83cf98ba25b6a878b77ac0c7bacb6fa65a7cf7ddaae3d859976,PodSandboxId:cb101e6a739d939c7192d22a06ebbaf222b63e1b6c3d7b3a9e82ddaa67bbffc1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711584350713245285,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d18f050adf42c0d971a9903270a7b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2930b0267019902b8b9ce7cd18b907f89d409676028d92de1dc551850d78f276,PodSandboxId:997d89c34a5f31417acabe4367ffd068159f7cd6447ab1e4c7b0c7caeb3d7a93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711584348688612607,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6490cbc40210ad634becf13ac3a1705,},Annotations:map[string]string{io.kubernetes.container.hash: bc2c0f48,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5298cacdf731fecbce32ad2d9f328670947c3e2bc41da8560a41746efa183376,PodSandboxId:6eafda94672d564fe253dfd43b5bde346da3dfa6d5efeb98a979e953c11b959d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711584343983600963,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-78c89,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3272474d-5490-4c7c-9dfe-ded8488ec32f,},Annotations:map[string]string{io.kubernetes.container.hash: f4bc3217,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d9ac0997d255850bd452ddf61795477c05b16d5a1c77900748c11ef0e86ad8a,PodSandboxId:904097cb5f15281f081d3f374e7a82a4e22c19468f949bb1d8dd5110b05fbf0c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711584336688854876,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9000645c-8323-43af-bd87-011d1574493c,},Annotations:map[string]string{io.kubernetes.container.hash: 43103468,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5116a84093b94656654ec8266161d1bc13b32d0e34905ba6962ac6769dc5c6da,PodSandboxId:0b075b5717795f3c07856e1151edd79045a7fe74683a224f07536c576926d280,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711584326540796553,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d35f096ae88a36bb3ae6fa7f31554e39,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:f043217131ac4124beae3e8ebcbab4be1eebd60b6b040475b8d40b6951c8837f,PodSandboxId:657c51c3b5e7d8e9d3e371ead69b9b6e2d781ebd27d42dfdb2641e1a6e236c06,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711584310671441561,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t77p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27eff0c9-9b45-4530-aba9-1a5e0ca60802,},Annotations:map[string]string{io.kubernetes.container.hash: d0985273,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb34785
03fe928ca51e348aec8e5aa7eebe7cfdc8b7226d85976590d73be90dd,PodSandboxId:d6ee7152bab39abb61e4c40513bcbf1c1719037be0b22e184bd532002df38257,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711584310871480282,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zmtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e75cdc5-22da-47f2-9833-b2f4eaa9caac,},Annotations:map[string]string{io.kubernetes.container.hash: 6486d0c8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d05a3cee8583ee0f268ce6a367e7e700f8f87f1d9
6c985228250245d5f5a258,PodSandboxId:abd9c1ff4a2863311efb200d0e3b601a5035722036d7c765e297bc887033d393,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711584310538629450,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab33d6840338638cbdcd9ebe5fdd4d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7359b186,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3bb93ac0ec38b966779f4260f9506e31f38cc6702ce1751f61b06271e40fcb3,PodSandboxId:cb101e6a739d939
c7192d22a06ebbaf222b63e1b6c3d7b3a9e82ddaa67bbffc1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711584310528973873,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d18f050adf42c0d971a9903270a7b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e7e0f2dabc114fd527652f912485cbd4476203fe6c63338ed46931f28df715,PodSandboxId:997d89c34a
5f31417acabe4367ffd068159f7cd6447ab1e4c7b0c7caeb3d7a93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711584310410197624,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6490cbc40210ad634becf13ac3a1705,},Annotations:map[string]string{io.kubernetes.container.hash: bc2c0f48,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5f23730141a94b0d7db08dffd1dd111243d4d4c6ee706bec92a8ea3c1872258,PodSandboxId:ee57ddbaf2fe4d35bb46bd35e54a00b9
4b130707708a49beace86386b10fe913,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711584310358266833,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9eaf884653411ba1f22eb4cdbdfa748,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a2fd3dc48780368d474066f78db57cea4362acc353f7604458e5ad6b3d1c6bb,PodSandboxId:8519de872cf9776787e79d23757b0be34bc8c680161a0665bf6b6
cb54b3bf07f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711584303839080092,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-msv9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c549358-2f35-4345-aa7a-8bbbcfc4ef01,},Annotations:map[string]string{io.kubernetes.container.hash: 76b7b3af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e280dd2cc82d6fb9730125bb5cc514d7b33be2a3a03727e5adeb43a7f1399eb8,PodSandboxId:e7de52be4110de636969e1b12c92959bebb4213ef1e41dcc9a17ddae078e2f6e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711584303845663995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-47npx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968d63e4-f44a-4e52-b6c0-04e0ed1a068e,},Annotations:map[string]string{io.kubernetes.container.hash: d58e1b37,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc41f34db32bf53a65c6af8f9f9eae933fb349e5555060707ec7131ab3a7b835,PodSandboxId:d8bf33d99bda1de3168ad1efb04ca03a74a918d2ca74b585a7fcdb3a108d229e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711583818896885728,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-78c89,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3272474d-5490-4c7c-9dfe-ded8488ec32f,},Annotations:map[string]string{io.kuber
netes.container.hash: f4bc3217,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5198968b76968a091c33806ec424495c03395675b20e7eb35e330d16217211,PodSandboxId:78b0408435c31eb2c100690d28760dd80d92d11a1204f91418812c69660e79a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711583597995455547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-47npx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968d63e4-f44a-4e52-b6c0-04e0ed1a068e,},Annotations:map[string]string{io.kubernetes.container.hash: d58e1b37,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed9a38e9f6cd98054d1e672c652e36e6262ceb6eb2a3e14911a5178d222135e7,PodSandboxId:906a95ca7b9309653761922034b8a2dfca231a15f3b955bb1c166ebd569149c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711583597982972322,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-msv9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c549358-2f35-4345-aa7a-8bbbcfc4ef01,},Annotations:map[string]string{io.kubernetes.container.hash: 76b7b3af,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a226f01452a72cf6e9a608450715a12f6663ccf2697e8259f809e966809978ce,PodSandboxId:3f1239e30a953ae16c917e002c235d64ba2b2b2f9165e939c99bda7578eae785,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711583595702331437,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t77p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27eff0c9-9b45-4530-aba9-1a5e0ca60802,},Annotations:map[string]string{io.kubernetes.container.hash: d0985273,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0128cd878ebdae2c6e217413183a103417714ffe00d55c1c92d1353e38238fa,PodSandboxId:bbb9d168e952fe414d641f2ec6a7e1e63a04205d4e2aae1c6ad9a2756c6443ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83
d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711583575850415387,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab33d6840338638cbdcd9ebe5fdd4d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7359b186,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afbf14c176818ba796eacd41f1600b5f09075c6a2b4f1edcad903530221f76ff,PodSandboxId:b75106f2dccc70e077dfb09e12c05d0942a02fa2c4894a1bd68ec76ca144eed7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,Create
dAt:1711583575810399508,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-377576,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9eaf884653411ba1f22eb4cdbdfa748,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c302f65b-a65a-4f64-9a46-b02eebcaf485 name=/runtime.v1.RuntimeService/ListContainers
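The RuntimeService/ListContainers, Version, and ImageFsInfo entries above are CRI-O's debug-level gRPC traces as captured from the node's journal (note the journald-style "ha-377576 crio[3893]" prefix). A minimal way to pull the same stream by hand, assuming SSH access to the ha-377576 VM and that CRI-O runs as the crio systemd unit (both assumptions, not shown elsewhere in this report):

    minikube -p ha-377576 ssh -- sudo journalctl -u crio --no-pager --since "5 minutes ago" | tail -n 50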
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	446878900fc2f       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      4 minutes ago       Running             kindnet-cni               4                   d6ee7152bab39       kindnet-5zmtk
	1639d05b99a2d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   904097cb5f152       storage-provisioner
	7618cf90b394f       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      4 minutes ago       Running             kube-controller-manager   2                   cb101e6a739d9       kube-controller-manager-ha-377576
	2930b02670199       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      4 minutes ago       Running             kube-apiserver            3                   997d89c34a5f3       kube-apiserver-ha-377576
	5298cacdf731f       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   6eafda94672d5       busybox-7fdf7869d9-78c89
	0d9ac0997d255       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   904097cb5f152       storage-provisioner
	5116a84093b94       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      5 minutes ago       Running             kube-vip                  0                   0b075b5717795       kube-vip-ha-377576
	bb3478503fe92       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Exited              kindnet-cni               3                   d6ee7152bab39       kindnet-5zmtk
	f043217131ac4       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      5 minutes ago       Running             kube-proxy                1                   657c51c3b5e7d       kube-proxy-4t77p
	3d05a3cee8583       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   abd9c1ff4a286       etcd-ha-377576
	f3bb93ac0ec38       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      5 minutes ago       Exited              kube-controller-manager   1                   cb101e6a739d9       kube-controller-manager-ha-377576
	20e7e0f2dabc1       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      5 minutes ago       Exited              kube-apiserver            2                   997d89c34a5f3       kube-apiserver-ha-377576
	f5f23730141a9       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      5 minutes ago       Running             kube-scheduler            1                   ee57ddbaf2fe4       kube-scheduler-ha-377576
	e280dd2cc82d6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   e7de52be4110d       coredns-76f75df574-47npx
	0a2fd3dc48780       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   8519de872cf97       coredns-76f75df574-msv9s
	fc41f34db32bf       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   d8bf33d99bda1       busybox-7fdf7869d9-78c89
	1d5198968b769       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago      Exited              coredns                   0                   78b0408435c31       coredns-76f75df574-47npx
	ed9a38e9f6cd9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago      Exited              coredns                   0                   906a95ca7b930       coredns-76f75df574-msv9s
	a226f01452a72       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      17 minutes ago      Exited              kube-proxy                0                   3f1239e30a953       kube-proxy-4t77p
	a0128cd878ebd       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      17 minutes ago      Exited              etcd                      0                   bbb9d168e952f       etcd-ha-377576
	afbf14c176818       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      17 minutes ago      Exited              kube-scheduler            0                   b75106f2dccc7       kube-scheduler-ha-377576
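	The table above is the CRI view of every container on the node, including the exited first attempts of etcd, kube-scheduler, and the other control-plane pods alongside their restarted replacements. A rough way to reproduce it, assuming crictl is available inside the minikube VM and CRI-O is listening on its default socket (both assumptions):

	    minikube -p ha-377576 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a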
	
	
	==> coredns [0a2fd3dc48780368d474066f78db57cea4362acc353f7604458e5ad6b3d1c6bb] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[731237309]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (28-Mar-2024 00:05:15.180) (total time: 10000ms):
	Trace[731237309]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (00:05:25.181)
	Trace[731237309]: [10.000938818s] [10.000938818s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:43126->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:43126->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
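	The repeated "no route to host", "TLS handshake timeout", and "connection refused" errors against https://10.96.0.1:443 line up with the kube-apiserver restarts visible in the container list above (attempt 2 exited, attempt 3 running): while the apiserver was down, this CoreDNS instance could not list Services, Namespaces, or EndpointSlices through the kubernetes ClusterIP. A quick check that the service VIP and the CoreDNS pods are healthy once the control plane settles, assuming the kubectl context carries the ha-377576 profile name (an assumption):

	    kubectl --context ha-377576 get svc kubernetes -o wide
	    kubectl --context ha-377576 -n kube-system get pods -l k8s-app=kube-dns -o wide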
	
	
	==> coredns [1d5198968b76968a091c33806ec424495c03395675b20e7eb35e330d16217211] <==
	[INFO] 10.244.2.2:60611 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00032109s
	[INFO] 10.244.2.2:33575 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000137606s
	[INFO] 10.244.2.2:52980 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106821s
	[INFO] 10.244.2.2:50141 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114136s
	[INFO] 10.244.1.2:48883 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154613s
	[INFO] 10.244.1.2:60634 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000118063s
	[INFO] 10.244.1.2:39068 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170354s
	[INFO] 10.244.0.4:42784 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130962s
	[INFO] 10.244.0.4:58150 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087285s
	[INFO] 10.244.0.4:44129 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081095s
	[INFO] 10.244.0.4:44169 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047878s
	[INFO] 10.244.2.2:38674 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113751s
	[INFO] 10.244.1.2:52689 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000279728s
	[INFO] 10.244.0.4:54702 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138182s
	[INFO] 10.244.0.4:33994 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000143246s
	[INFO] 10.244.0.4:59928 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000149415s
	[INFO] 10.244.0.4:48254 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000119791s
	[INFO] 10.244.2.2:38914 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113463s
	[INFO] 10.244.2.2:45000 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084412s
	[INFO] 10.244.2.2:45899 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000082622s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
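	The query log above (A/AAAA/PTR lookups for kubernetes.default.svc.cluster.local, host.minikube.internal, and the reverse zones) is the original CoreDNS instance serving pods on 10.244.x.x before it was terminated during the restart. A minimal lookup that exercises the same path from inside the cluster, assuming the busybox image and the ha-377576 context (both assumptions used only for illustration):

	    kubectl --context ha-377576 run dns-probe --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default.svc.cluster.local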
	
	
	==> coredns [e280dd2cc82d6fb9730125bb5cc514d7b33be2a3a03727e5adeb43a7f1399eb8] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[89180718]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (28-Mar-2024 00:05:11.125) (total time: 10001ms):
	Trace[89180718]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:05:21.126)
	Trace[89180718]: [10.001694349s] [10.001694349s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1688599920]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (28-Mar-2024 00:05:13.322) (total time: 10002ms):
	Trace[1688599920]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (00:05:23.324)
	Trace[1688599920]: [10.002575539s] [10.002575539s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43404->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43404->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
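
The CoreDNS block above shows the pod repeatedly failing to reach the kubernetes Service ClusterIP (10.96.0.1:443) while the control plane was restarting (TLS handshake timeout, no route to host, connection refused). A hypothetical spot-check, not part of the captured run, would be to confirm the ClusterIP and probe it from a workload already in the cluster; the deploy/busybox name is inferred from the busybox-7fdf7869d9 ReplicaSet seen in the node listings below, and the availability of the nc applet in that image is an assumption:

    $ kubectl --context ha-377576 get svc kubernetes -o jsonpath='{.spec.clusterIP}'   # expect 10.96.0.1
    $ kubectl --context ha-377576 exec deploy/busybox -- nc -zv -w 3 10.96.0.1 443     # TCP-level reachability only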
	
	
	==> coredns [ed9a38e9f6cd98054d1e672c652e36e6262ceb6eb2a3e14911a5178d222135e7] <==
	[INFO] 10.244.1.2:39882 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000227134s
	[INFO] 10.244.1.2:36591 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000230513s
	[INFO] 10.244.1.2:39147 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002683128s
	[INFO] 10.244.1.2:57485 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000145666s
	[INFO] 10.244.1.2:50733 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000171259s
	[INFO] 10.244.0.4:38643 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147285s
	[INFO] 10.244.0.4:54253 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00151748s
	[INFO] 10.244.0.4:55400 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105715s
	[INFO] 10.244.2.2:37662 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00219357s
	[INFO] 10.244.2.2:39646 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000125023s
	[INFO] 10.244.2.2:33350 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001640561s
	[INFO] 10.244.2.2:40494 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076386s
	[INFO] 10.244.1.2:45207 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000150664s
	[INFO] 10.244.2.2:56881 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000230324s
	[INFO] 10.244.2.2:46450 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102951s
	[INFO] 10.244.2.2:49186 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107347s
	[INFO] 10.244.1.2:32923 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00033097s
	[INFO] 10.244.1.2:38607 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000207486s
	[INFO] 10.244.1.2:54186 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000187929s
	[INFO] 10.244.2.2:59559 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147121s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
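
These per-container log blocks are keyed by CRI container ID; the second CoreDNS instance above exited after the apiserver sent GOAWAY frames and the pod received SIGTERM. If the same logs need to be pulled again from the guest, a hedged sketch (assuming crictl is on the guest's PATH, as is normal for minikube's cri-o image) would be:

    $ minikube -p ha-377576 ssh 'sudo crictl ps -a | grep coredns'
    $ minikube -p ha-377576 ssh 'sudo crictl logs ed9a38e9f6cd9'   # truncated container ID from the header above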
	
	
	==> describe nodes <==
	Name:               ha-377576
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-377576
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=ha-377576
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_27T23_53_03_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 23:53:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-377576
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 00:10:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 00:05:54 +0000   Wed, 27 Mar 2024 23:53:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 00:05:54 +0000   Wed, 27 Mar 2024 23:53:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 00:05:54 +0000   Wed, 27 Mar 2024 23:53:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 00:05:54 +0000   Wed, 27 Mar 2024 23:53:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.47
	  Hostname:    ha-377576
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 548afee7a42c42209042fc22e933a640
	  System UUID:                548afee7-a42c-4220-9042-fc22e933a640
	  Boot ID:                    446624d0-3e4c-494a-bf42-903d59e41c0c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-78c89             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-76f75df574-47npx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-76f75df574-msv9s             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-ha-377576                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-5zmtk                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-ha-377576             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-377576    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-4t77p                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-377576             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-377576                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   Starting                 4m46s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    17m                    kubelet          Node ha-377576 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 17m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  17m                    kubelet          Node ha-377576 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     17m                    kubelet          Node ha-377576 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m                    node-controller  Node ha-377576 event: Registered Node ha-377576 in Controller
	  Normal   NodeReady                17m                    kubelet          Node ha-377576 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-377576 event: Registered Node ha-377576 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-377576 event: Registered Node ha-377576 in Controller
	  Warning  ContainerGCFailed        5m41s (x2 over 6m41s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m40s                  node-controller  Node ha-377576 event: Registered Node ha-377576 in Controller
	  Normal   RegisteredNode           4m40s                  node-controller  Node ha-377576 event: Registered Node ha-377576 in Controller
	  Normal   RegisteredNode           3m16s                  node-controller  Node ha-377576 event: Registered Node ha-377576 in Controller
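
The ContainerGCFailed warning above indicates the kubelet briefly lost /var/run/crio/crio.sock while the runtime was restarted. A hypothetical follow-up on the primary node, not captured in this run, would be to confirm cri-o is back up and the socket exists:

    $ minikube -p ha-377576 ssh 'sudo systemctl status crio --no-pager'
    $ minikube -p ha-377576 ssh 'sudo ls -l /var/run/crio/crio.sock'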
	
	
	Name:               ha-377576-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-377576-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=ha-377576
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_27T23_55_23_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 23:55:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-377576-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 00:10:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 00:09:04 +0000   Thu, 28 Mar 2024 00:09:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 00:09:04 +0000   Thu, 28 Mar 2024 00:09:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 00:09:04 +0000   Thu, 28 Mar 2024 00:09:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 00:09:04 +0000   Thu, 28 Mar 2024 00:09:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.117
	  Hostname:    ha-377576-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e8bdd7497a164e8f88f2bc1a3706be52
	  System UUID:                e8bdd749-7a16-4e8f-88f2-bc1a3706be52
	  Boot ID:                    aea9ba56-088a-4867-8d0a-150f94cf447e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-2dqtf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-377576-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-6wmmc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-377576-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-377576-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-k9dcr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-377576-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-377576-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m29s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-377576-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-377576-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-377576-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           15m                    node-controller  Node ha-377576-m02 event: Registered Node ha-377576-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-377576-m02 event: Registered Node ha-377576-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-377576-m02 event: Registered Node ha-377576-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-377576-m02 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  5m15s (x8 over 5m15s)  kubelet          Node ha-377576-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    5m15s (x8 over 5m15s)  kubelet          Node ha-377576-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m15s (x7 over 5m15s)  kubelet          Node ha-377576-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m40s                  node-controller  Node ha-377576-m02 event: Registered Node ha-377576-m02 in Controller
	  Normal  RegisteredNode           4m40s                  node-controller  Node ha-377576-m02 event: Registered Node ha-377576-m02 in Controller
	  Normal  RegisteredNode           3m16s                  node-controller  Node ha-377576-m02 event: Registered Node ha-377576-m02 in Controller
	  Normal  NodeNotReady             105s                   node-controller  Node ha-377576-m02 status is now: NodeNotReady
	
	
	Name:               ha-377576-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-377576-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=ha-377576
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_27T23_57_34_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 23:57:33 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-377576-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 00:08:15 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 28 Mar 2024 00:07:55 +0000   Thu, 28 Mar 2024 00:08:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 28 Mar 2024 00:07:55 +0000   Thu, 28 Mar 2024 00:08:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 28 Mar 2024 00:07:55 +0000   Thu, 28 Mar 2024 00:08:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 28 Mar 2024 00:07:55 +0000   Thu, 28 Mar 2024 00:08:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.93
	  Hostname:    ha-377576-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9888e36a359a48f1aa6b97712e7f2662
	  System UUID:                9888e36a-359a-48f1-aa6b-97712e7f2662
	  Boot ID:                    2a204f54-9894-47ce-8cd2-4156d335ee08
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-plgbk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-57xkj               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-nsmbj            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m (x3 over 13m)      kubelet          Node ha-377576-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x3 over 13m)      kubelet          Node ha-377576-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x3 over 13m)      kubelet          Node ha-377576-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-377576-m04 event: Registered Node ha-377576-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-377576-m04 event: Registered Node ha-377576-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-377576-m04 event: Registered Node ha-377576-m04 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-377576-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m40s                  node-controller  Node ha-377576-m04 event: Registered Node ha-377576-m04 in Controller
	  Normal   RegisteredNode           4m40s                  node-controller  Node ha-377576-m04 event: Registered Node ha-377576-m04 in Controller
	  Normal   RegisteredNode           3m17s                  node-controller  Node ha-377576-m04 event: Registered Node ha-377576-m04 in Controller
	  Normal   NodeHasSufficientMemory  2m48s (x3 over 2m48s)  kubelet          Node ha-377576-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m48s (x3 over 2m48s)  kubelet          Node ha-377576-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x3 over 2m48s)  kubelet          Node ha-377576-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s (x2 over 2m48s)  kubelet          Node ha-377576-m04 has been rebooted, boot id: 2a204f54-9894-47ce-8cd2-4156d335ee08
	  Normal   NodeReady                2m48s (x2 over 2m48s)  kubelet          Node ha-377576-m04 status is now: NodeReady
	  Normal   NodeNotReady             105s (x2 over 4m)      node-controller  Node ha-377576-m04 status is now: NodeNotReady
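
ha-377576-m04 is tainted unreachable and its conditions are Unknown because the kubelet stopped posting status after the reboot. A hedged way to watch it converge, assuming minikube's -n/--node flag for addressing the worker (not part of the captured run):

    $ kubectl --context ha-377576 get node ha-377576-m04 -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
    $ minikube -p ha-377576 ssh -n ha-377576-m04 'sudo systemctl status kubelet --no-pager'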
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.445381] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.055911] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058244] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.192360] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.112715] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.267509] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.568474] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.064108] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.418967] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +1.239042] kauditd_printk_skb: 57 callbacks suppressed
	[Mar27 23:53] kauditd_printk_skb: 40 callbacks suppressed
	[  +0.989248] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	[ +13.165128] kauditd_printk_skb: 15 callbacks suppressed
	[Mar27 23:55] kauditd_printk_skb: 74 callbacks suppressed
	[Mar28 00:05] systemd-fstab-generator[3806]: Ignoring "noauto" option for root device
	[  +0.158435] systemd-fstab-generator[3818]: Ignoring "noauto" option for root device
	[  +0.186777] systemd-fstab-generator[3832]: Ignoring "noauto" option for root device
	[  +0.155223] systemd-fstab-generator[3844]: Ignoring "noauto" option for root device
	[  +0.308136] systemd-fstab-generator[3877]: Ignoring "noauto" option for root device
	[  +1.053355] systemd-fstab-generator[4111]: Ignoring "noauto" option for root device
	[  +6.217474] kauditd_printk_skb: 142 callbacks suppressed
	[ +16.387174] kauditd_printk_skb: 67 callbacks suppressed
	[ +24.306679] kauditd_printk_skb: 5 callbacks suppressed
	[Mar28 00:07] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [3d05a3cee8583ee0f268ce6a367e7e700f8f87f1d96c985228250245d5f5a258] <==
	{"level":"warn","ts":"2024-03-28T00:07:06.45837Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e5d33a179970ddaa","rtt":"0s","error":"dial tcp 192.168.39.101:2380: connect: connection refused"}
	{"level":"info","ts":"2024-03-28T00:07:07.80021Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"e5d33a179970ddaa"}
	{"level":"info","ts":"2024-03-28T00:07:07.800308Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"dda2c3e6a900b50e","remote-peer-id":"e5d33a179970ddaa"}
	{"level":"info","ts":"2024-03-28T00:07:07.802264Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"dda2c3e6a900b50e","remote-peer-id":"e5d33a179970ddaa"}
	{"level":"info","ts":"2024-03-28T00:07:07.819768Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"dda2c3e6a900b50e","to":"e5d33a179970ddaa","stream-type":"stream Message"}
	{"level":"info","ts":"2024-03-28T00:07:07.821266Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"dda2c3e6a900b50e","remote-peer-id":"e5d33a179970ddaa"}
	{"level":"info","ts":"2024-03-28T00:07:07.823244Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"dda2c3e6a900b50e","to":"e5d33a179970ddaa","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-03-28T00:07:07.823312Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"dda2c3e6a900b50e","remote-peer-id":"e5d33a179970ddaa"}
	{"level":"info","ts":"2024-03-28T00:08:09.053746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dda2c3e6a900b50e switched to configuration voters=(15970542624054490382 18225458055639684142)"}
	{"level":"info","ts":"2024-03-28T00:08:09.056256Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"f4536840deabf9cf","local-member-id":"dda2c3e6a900b50e","removed-remote-peer-id":"e5d33a179970ddaa","removed-remote-peer-urls":["https://192.168.39.101:2380"]}
	{"level":"info","ts":"2024-03-28T00:08:09.056456Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e5d33a179970ddaa"}
	{"level":"warn","ts":"2024-03-28T00:08:09.057221Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e5d33a179970ddaa"}
	{"level":"info","ts":"2024-03-28T00:08:09.057297Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e5d33a179970ddaa"}
	{"level":"warn","ts":"2024-03-28T00:08:09.05779Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e5d33a179970ddaa"}
	{"level":"info","ts":"2024-03-28T00:08:09.057862Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e5d33a179970ddaa"}
	{"level":"info","ts":"2024-03-28T00:08:09.058124Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"dda2c3e6a900b50e","remote-peer-id":"e5d33a179970ddaa"}
	{"level":"warn","ts":"2024-03-28T00:08:09.058543Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"dda2c3e6a900b50e","remote-peer-id":"e5d33a179970ddaa","error":"context canceled"}
	{"level":"warn","ts":"2024-03-28T00:08:09.058717Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"e5d33a179970ddaa","error":"failed to read e5d33a179970ddaa on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-03-28T00:08:09.058795Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"dda2c3e6a900b50e","remote-peer-id":"e5d33a179970ddaa"}
	{"level":"warn","ts":"2024-03-28T00:08:09.059113Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"dda2c3e6a900b50e","remote-peer-id":"e5d33a179970ddaa","error":"context canceled"}
	{"level":"info","ts":"2024-03-28T00:08:09.059276Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"dda2c3e6a900b50e","remote-peer-id":"e5d33a179970ddaa"}
	{"level":"info","ts":"2024-03-28T00:08:09.05936Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e5d33a179970ddaa"}
	{"level":"info","ts":"2024-03-28T00:08:09.059405Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"dda2c3e6a900b50e","removed-remote-peer-id":"e5d33a179970ddaa"}
	{"level":"warn","ts":"2024-03-28T00:08:09.073026Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"dda2c3e6a900b50e","remote-peer-id-stream-handler":"dda2c3e6a900b50e","remote-peer-id-from":"e5d33a179970ddaa"}
	{"level":"warn","ts":"2024-03-28T00:08:09.075802Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"dda2c3e6a900b50e","remote-peer-id-stream-handler":"dda2c3e6a900b50e","remote-peer-id-from":"e5d33a179970ddaa"}
	
	
	==> etcd [a0128cd878ebdae2c6e217413183a103417714ffe00d55c1c92d1353e38238fa] <==
	2024/03/28 00:03:30 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-28T00:03:30.633124Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"261.732399ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" limit:500 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-03-28T00:03:30.646767Z","caller":"traceutil/trace.go:171","msg":"trace[1164427338] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; }","duration":"275.357753ms","start":"2024-03-28T00:03:30.371359Z","end":"2024-03-28T00:03:30.646717Z","steps":["trace[1164427338] 'agreement among raft nodes before linearized reading'  (duration: 261.761077ms)"],"step_count":1}
	2024/03/28 00:03:30 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-28T00:03:30.633139Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.569415ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking\" limit:500 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-03-28T00:03:30.64699Z","caller":"traceutil/trace.go:171","msg":"trace[1926107880] range","detail":"{range_begin:/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking; range_end:; }","duration":"187.553492ms","start":"2024-03-28T00:03:30.459428Z","end":"2024-03-28T00:03:30.646982Z","steps":["trace[1926107880] 'agreement among raft nodes before linearized reading'  (duration: 173.707151ms)"],"step_count":1}
	2024/03/28 00:03:30 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-03-28T00:03:30.673598Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"dda2c3e6a900b50e","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-03-28T00:03:30.673921Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"fcedd589fb2f542e"}
	{"level":"info","ts":"2024-03-28T00:03:30.674017Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"fcedd589fb2f542e"}
	{"level":"info","ts":"2024-03-28T00:03:30.674138Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"fcedd589fb2f542e"}
	{"level":"info","ts":"2024-03-28T00:03:30.674274Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e"}
	{"level":"info","ts":"2024-03-28T00:03:30.674378Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e"}
	{"level":"info","ts":"2024-03-28T00:03:30.674464Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"dda2c3e6a900b50e","remote-peer-id":"fcedd589fb2f542e"}
	{"level":"info","ts":"2024-03-28T00:03:30.674569Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"fcedd589fb2f542e"}
	{"level":"info","ts":"2024-03-28T00:03:30.675356Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e5d33a179970ddaa"}
	{"level":"info","ts":"2024-03-28T00:03:30.675396Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e5d33a179970ddaa"}
	{"level":"info","ts":"2024-03-28T00:03:30.675475Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e5d33a179970ddaa"}
	{"level":"info","ts":"2024-03-28T00:03:30.675715Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"dda2c3e6a900b50e","remote-peer-id":"e5d33a179970ddaa"}
	{"level":"info","ts":"2024-03-28T00:03:30.675769Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"dda2c3e6a900b50e","remote-peer-id":"e5d33a179970ddaa"}
	{"level":"info","ts":"2024-03-28T00:03:30.675978Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"dda2c3e6a900b50e","remote-peer-id":"e5d33a179970ddaa"}
	{"level":"info","ts":"2024-03-28T00:03:30.676449Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e5d33a179970ddaa"}
	{"level":"info","ts":"2024-03-28T00:03:30.67871Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.47:2380"}
	{"level":"info","ts":"2024-03-28T00:03:30.678823Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.47:2380"}
	{"level":"info","ts":"2024-03-28T00:03:30.678871Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-377576","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.47:2380"],"advertise-client-urls":["https://192.168.39.47:2379"]}
	
	
	==> kernel <==
	 00:10:43 up 18 min,  0 users,  load average: 0.45, 0.54, 0.44
	Linux ha-377576 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [446878900fc2ff3c1baac3e23199c7573f58770442bddf2548e3ccbfa9d3b300] <==
	I0328 00:09:59.030730       1 main.go:250] Node ha-377576-m04 has CIDR [10.244.3.0/24] 
	I0328 00:10:09.047951       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0328 00:10:09.048167       1 main.go:227] handling current node
	I0328 00:10:09.048217       1 main.go:223] Handling node with IPs: map[192.168.39.117:{}]
	I0328 00:10:09.048249       1 main.go:250] Node ha-377576-m02 has CIDR [10.244.1.0/24] 
	I0328 00:10:09.048397       1 main.go:223] Handling node with IPs: map[192.168.39.93:{}]
	I0328 00:10:09.048445       1 main.go:250] Node ha-377576-m04 has CIDR [10.244.3.0/24] 
	I0328 00:10:19.063135       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0328 00:10:19.063332       1 main.go:227] handling current node
	I0328 00:10:19.063369       1 main.go:223] Handling node with IPs: map[192.168.39.117:{}]
	I0328 00:10:19.063460       1 main.go:250] Node ha-377576-m02 has CIDR [10.244.1.0/24] 
	I0328 00:10:19.063797       1 main.go:223] Handling node with IPs: map[192.168.39.93:{}]
	I0328 00:10:19.063895       1 main.go:250] Node ha-377576-m04 has CIDR [10.244.3.0/24] 
	I0328 00:10:29.070210       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0328 00:10:29.070330       1 main.go:227] handling current node
	I0328 00:10:29.070354       1 main.go:223] Handling node with IPs: map[192.168.39.117:{}]
	I0328 00:10:29.070372       1 main.go:250] Node ha-377576-m02 has CIDR [10.244.1.0/24] 
	I0328 00:10:29.070594       1 main.go:223] Handling node with IPs: map[192.168.39.93:{}]
	I0328 00:10:29.070636       1 main.go:250] Node ha-377576-m04 has CIDR [10.244.3.0/24] 
	I0328 00:10:39.079690       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0328 00:10:39.079791       1 main.go:227] handling current node
	I0328 00:10:39.079820       1 main.go:223] Handling node with IPs: map[192.168.39.117:{}]
	I0328 00:10:39.079839       1 main.go:250] Node ha-377576-m02 has CIDR [10.244.1.0/24] 
	I0328 00:10:39.079967       1 main.go:223] Handling node with IPs: map[192.168.39.93:{}]
	I0328 00:10:39.079987       1 main.go:250] Node ha-377576-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [bb3478503fe928ca51e348aec8e5aa7eebe7cfdc8b7226d85976590d73be90dd] <==
	I0328 00:05:11.339932       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0328 00:05:11.340022       1 main.go:107] hostIP = 192.168.39.47
	podIP = 192.168.39.47
	I0328 00:05:11.340206       1 main.go:116] setting mtu 1500 for CNI 
	I0328 00:05:11.340251       1 main.go:146] kindnetd IP family: "ipv4"
	I0328 00:05:11.340297       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0328 00:05:13.352146       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0328 00:05:16.424310       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0328 00:05:19.496081       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0328 00:05:31.506727       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0328 00:05:34.856323       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
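
The kindnetd container in this block exhausted its retries against https://10.96.0.1:443 and panicked; the replacement container shown in the previous block is handling nodes normally again. A hedged way to confirm the DaemonSet recovered and that the ClusterIP is reachable from the guest; the app=kindnet label is an assumption about the manifest:

    $ kubectl --context ha-377576 -n kube-system get pods -l app=kindnet -o wide
    $ minikube -p ha-377576 ssh 'curl -sk https://10.96.0.1/version >/dev/null && echo reachable || echo unreachable'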
	
	
	==> kube-apiserver [20e7e0f2dabc114fd527652f912485cbd4476203fe6c63338ed46931f28df715] <==
	I0328 00:05:11.014936       1 options.go:222] external host was not specified, using 192.168.39.47
	I0328 00:05:11.016084       1 server.go:148] Version: v1.29.3
	I0328 00:05:11.016136       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 00:05:11.515055       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0328 00:05:11.515098       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0328 00:05:11.515351       1 instance.go:297] Using reconciler: lease
	I0328 00:05:11.515756       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	W0328 00:05:31.512573       1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0328 00:05:31.517319       1 instance.go:290] Error creating leases: error creating storage factory: context deadline exceeded
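
This apiserver attempt exited fatally because it could not complete the TLS handshake with etcd on 127.0.0.1:2379 within its startup deadline. A hypothetical check from the guest that etcd is running and listening (the availability of ss in the guest image is an assumption):

    $ minikube -p ha-377576 ssh 'sudo crictl ps -a --name etcd'
    $ minikube -p ha-377576 ssh 'sudo ss -ltnp | grep 2379'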
	
	
	==> kube-apiserver [2930b0267019902b8b9ce7cd18b907f89d409676028d92de1dc551850d78f276] <==
	I0328 00:05:50.968800       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0328 00:05:50.968834       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0328 00:05:50.969180       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 00:05:50.969331       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 00:05:51.150965       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0328 00:05:51.151010       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0328 00:05:51.151125       1 shared_informer.go:318] Caches are synced for configmaps
	I0328 00:05:51.151793       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0328 00:05:51.151848       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0328 00:05:51.152473       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	W0328 00:05:51.168055       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.101]
	I0328 00:05:51.170593       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0328 00:05:51.170668       1 aggregator.go:165] initial CRD sync complete...
	I0328 00:05:51.170705       1 autoregister_controller.go:141] Starting autoregister controller
	I0328 00:05:51.170727       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0328 00:05:51.170749       1 cache.go:39] Caches are synced for autoregister controller
	I0328 00:05:51.171970       1 controller.go:624] quota admission added evaluator for: endpoints
	I0328 00:05:51.180431       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0328 00:05:51.181904       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0328 00:05:51.183552       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0328 00:05:51.198089       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0328 00:05:51.963105       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0328 00:05:52.429099       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.101 192.168.39.47]
	W0328 00:06:12.428398       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.117 192.168.39.47]
	W0328 00:08:22.440896       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.117 192.168.39.47]
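
The repeated "Resetting endpoints for master service" lines track which control-plane IPs currently back the kubernetes Service as members come and go. A hedged way to see the current set (not part of the captured run):

    $ kubectl --context ha-377576 get endpoints kubernetes -o jsonpath='{.subsets[*].addresses[*].ip}'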
	
	
	==> kube-controller-manager [7618cf90b394f83cf98ba25b6a878b77ac0c7bacb6fa65a7cf7ddaae3d859976] <==
	I0328 00:08:59.145995       1 event.go:376] "Event occurred" object="kube-system/kindnet-6wmmc" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 00:08:59.172387       1 event.go:376] "Event occurred" object="kube-system/kube-apiserver-ha-377576-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 00:08:59.198650       1 event.go:376] "Event occurred" object="kube-system/kube-controller-manager-ha-377576-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	E0328 00:09:03.697286       1 gc_controller.go:153] "Failed to get node" err="node \"ha-377576-m03\" not found" node="ha-377576-m03"
	E0328 00:09:03.697371       1 gc_controller.go:153] "Failed to get node" err="node \"ha-377576-m03\" not found" node="ha-377576-m03"
	E0328 00:09:03.697380       1 gc_controller.go:153] "Failed to get node" err="node \"ha-377576-m03\" not found" node="ha-377576-m03"
	E0328 00:09:03.697386       1 gc_controller.go:153] "Failed to get node" err="node \"ha-377576-m03\" not found" node="ha-377576-m03"
	E0328 00:09:03.697392       1 gc_controller.go:153] "Failed to get node" err="node \"ha-377576-m03\" not found" node="ha-377576-m03"
	I0328 00:09:03.711025       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-vip-ha-377576-m03"
	I0328 00:09:03.742909       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-vip-ha-377576-m03"
	I0328 00:09:03.742987       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/etcd-ha-377576-m03"
	I0328 00:09:03.810135       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/etcd-ha-377576-m03"
	I0328 00:09:03.810197       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-apiserver-ha-377576-m03"
	I0328 00:09:03.867900       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-apiserver-ha-377576-m03"
	I0328 00:09:03.867947       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kindnet-n8fpn"
	I0328 00:09:03.908447       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-n8fpn"
	I0328 00:09:03.908753       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-scheduler-ha-377576-m03"
	I0328 00:09:03.953806       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-scheduler-ha-377576-m03"
	I0328 00:09:03.953855       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-5plfq"
	I0328 00:09:03.990397       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-5plfq"
	I0328 00:09:03.990606       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-controller-manager-ha-377576-m03"
	I0328 00:09:04.043235       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-controller-manager-ha-377576-m03"
	I0328 00:09:08.279727       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="18.34097ms"
	I0328 00:09:08.280309       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="54.834µs"
	I0328 00:09:09.012295       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-2dqtf" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-2dqtf"
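
After ha-377576-m03 was removed from the cluster, PodGC force-deleted its orphaned static and DaemonSet pods, as logged above. A hypothetical confirmation that nothing remains scheduled to the deleted node:

    $ kubectl --context ha-377576 get nodes
    $ kubectl --context ha-377576 get pods -A -o wide --field-selector spec.nodeName=ha-377576-m03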
	
	
	==> kube-controller-manager [f3bb93ac0ec38b966779f4260f9506e31f38cc6702ce1751f61b06271e40fcb3] <==
	I0328 00:05:11.638848       1 serving.go:380] Generated self-signed cert in-memory
	I0328 00:05:12.135906       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0328 00:05:12.135991       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 00:05:12.138197       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 00:05:12.138346       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 00:05:12.139821       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0328 00:05:12.139886       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0328 00:05:32.524666       1 controllermanager.go:232] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.47:8443/healthz\": dial tcp 192.168.39.47:8443: connect: connection refused"
	
	
	==> kube-proxy [a226f01452a72cf6e9a608450715a12f6663ccf2697e8259f809e966809978ce] <==
	E0328 00:02:15.816214       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-377576&resourceVersion=1873": dial tcp 192.168.39.254:8443: connect: no route to host
	W0328 00:02:19.016287       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	E0328 00:02:19.016429       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	W0328 00:02:22.089780       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-377576&resourceVersion=1873": dial tcp 192.168.39.254:8443: connect: no route to host
	E0328 00:02:22.089912       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-377576&resourceVersion=1873": dial tcp 192.168.39.254:8443: connect: no route to host
	W0328 00:02:22.090115       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	E0328 00:02:22.090393       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	W0328 00:02:25.160770       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	E0328 00:02:25.160848       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	W0328 00:02:31.305093       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-377576&resourceVersion=1873": dial tcp 192.168.39.254:8443: connect: no route to host
	E0328 00:02:31.305342       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-377576&resourceVersion=1873": dial tcp 192.168.39.254:8443: connect: no route to host
	W0328 00:02:34.377949       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	E0328 00:02:34.378039       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	W0328 00:02:34.377969       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	E0328 00:02:34.378243       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	W0328 00:02:46.665101       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-377576&resourceVersion=1873": dial tcp 192.168.39.254:8443: connect: no route to host
	E0328 00:02:46.665321       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-377576&resourceVersion=1873": dial tcp 192.168.39.254:8443: connect: no route to host
	W0328 00:02:52.808607       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	E0328 00:02:52.809168       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	W0328 00:02:58.953990       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	E0328 00:02:58.954066       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1941": dial tcp 192.168.39.254:8443: connect: no route to host
	W0328 00:03:14.312439       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-377576&resourceVersion=1873": dial tcp 192.168.39.254:8443: connect: no route to host
	E0328 00:03:14.312582       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-377576&resourceVersion=1873": dial tcp 192.168.39.254:8443: connect: no route to host
	W0328 00:03:23.530218       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	E0328 00:03:23.530448       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [f043217131ac4124beae3e8ebcbab4be1eebd60b6b040475b8d40b6951c8837f] <==
	I0328 00:05:12.217150       1 server_others.go:72] "Using iptables proxy"
	E0328 00:05:14.121190       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-377576\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0328 00:05:17.192997       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-377576\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0328 00:05:20.264419       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-377576\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0328 00:05:26.409164       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-377576\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0328 00:05:38.697116       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-377576\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0328 00:05:57.074634       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.47"]
	I0328 00:05:57.123774       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 00:05:57.123800       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 00:05:57.123827       1 server_others.go:168] "Using iptables Proxier"
	I0328 00:05:57.127000       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 00:05:57.127433       1 server.go:865] "Version info" version="v1.29.3"
	I0328 00:05:57.127600       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 00:05:57.130080       1 config.go:188] "Starting service config controller"
	I0328 00:05:57.130182       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 00:05:57.130307       1 config.go:97] "Starting endpoint slice config controller"
	I0328 00:05:57.130409       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 00:05:57.131848       1 config.go:315] "Starting node config controller"
	I0328 00:05:57.131884       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 00:05:57.230864       1 shared_informer.go:318] Caches are synced for service config
	I0328 00:05:57.230864       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0328 00:05:57.232440       1 shared_informer.go:318] Caches are synced for node config
	W0328 00:09:07.461266       1 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0328 00:09:07.461407       1 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0328 00:09:07.461455       1 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-scheduler [afbf14c176818ba796eacd41f1600b5f09075c6a2b4f1edcad903530221f76ff] <==
	W0328 00:03:27.278673       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0328 00:03:27.278773       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0328 00:03:27.389576       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0328 00:03:27.389673       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0328 00:03:27.776693       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0328 00:03:27.776725       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0328 00:03:27.804225       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0328 00:03:27.804331       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0328 00:03:27.979804       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0328 00:03:27.979924       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 00:03:28.069602       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0328 00:03:28.069699       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0328 00:03:28.654719       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0328 00:03:28.654917       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0328 00:03:29.449738       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0328 00:03:29.449844       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0328 00:03:29.513816       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0328 00:03:29.513950       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0328 00:03:29.569728       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0328 00:03:29.569823       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0328 00:03:29.900915       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0328 00:03:29.900968       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0328 00:03:30.623098       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0328 00:03:30.625331       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0328 00:03:30.627835       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [f5f23730141a94b0d7db08dffd1dd111243d4d4c6ee706bec92a8ea3c1872258] <==
	W0328 00:05:42.198211       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: Get "https://192.168.39.47:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.47:8443: connect: connection refused
	E0328 00:05:42.198310       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.47:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.47:8443: connect: connection refused
	W0328 00:05:42.793355       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.47:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.47:8443: connect: connection refused
	E0328 00:05:42.793596       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.47:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.47:8443: connect: connection refused
	W0328 00:05:42.957371       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.47:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.47:8443: connect: connection refused
	E0328 00:05:42.957637       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.47:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.47:8443: connect: connection refused
	W0328 00:05:47.469657       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: Get "https://192.168.39.47:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.47:8443: connect: connection refused
	E0328 00:05:47.469808       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.47:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.47:8443: connect: connection refused
	W0328 00:05:48.277229       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: Get "https://192.168.39.47:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.47:8443: connect: connection refused
	E0328 00:05:48.277364       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.47:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.47:8443: connect: connection refused
	W0328 00:05:48.346645       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: Get "https://192.168.39.47:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.47:8443: connect: connection refused
	E0328 00:05:48.346763       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.47:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.47:8443: connect: connection refused
	W0328 00:05:51.077568       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0328 00:05:51.077633       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0328 00:05:51.077728       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0328 00:05:51.077761       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0328 00:05:51.077834       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0328 00:05:51.077872       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0328 00:05:51.077934       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0328 00:05:51.077971       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0328 00:05:51.083546       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0328 00:05:51.083591       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 00:05:51.083686       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0328 00:05:51.083721       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0328 00:05:53.929851       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 28 00:06:15 ha-377576 kubelet[1383]: E0328 00:06:15.675594    1383 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-5zmtk_kube-system(4e75cdc5-22da-47f2-9833-b2f4eaa9caac)\"" pod="kube-system/kindnet-5zmtk" podUID="4e75cdc5-22da-47f2-9833-b2f4eaa9caac"
	Mar 28 00:06:24 ha-377576 kubelet[1383]: I0328 00:06:24.674814    1383 scope.go:117] "RemoveContainer" containerID="0d9ac0997d255850bd452ddf61795477c05b16d5a1c77900748c11ef0e86ad8a"
	Mar 28 00:06:27 ha-377576 kubelet[1383]: I0328 00:06:27.675329    1383 scope.go:117] "RemoveContainer" containerID="bb3478503fe928ca51e348aec8e5aa7eebe7cfdc8b7226d85976590d73be90dd"
	Mar 28 00:06:35 ha-377576 kubelet[1383]: I0328 00:06:35.674842    1383 kubelet.go:1903] "Trying to delete pod" pod="kube-system/kube-vip-ha-377576" podUID="2d4dd5f7-c798-4a52-97f5-4bc068603373"
	Mar 28 00:06:35 ha-377576 kubelet[1383]: I0328 00:06:35.698885    1383 kubelet.go:1908] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-377576"
	Mar 28 00:07:02 ha-377576 kubelet[1383]: E0328 00:07:02.713328    1383 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 00:07:02 ha-377576 kubelet[1383]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 00:07:02 ha-377576 kubelet[1383]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 00:07:02 ha-377576 kubelet[1383]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 00:07:02 ha-377576 kubelet[1383]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 00:08:02 ha-377576 kubelet[1383]: E0328 00:08:02.715553    1383 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 00:08:02 ha-377576 kubelet[1383]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 00:08:02 ha-377576 kubelet[1383]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 00:08:02 ha-377576 kubelet[1383]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 00:08:02 ha-377576 kubelet[1383]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 00:09:02 ha-377576 kubelet[1383]: E0328 00:09:02.709191    1383 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 00:09:02 ha-377576 kubelet[1383]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 00:09:02 ha-377576 kubelet[1383]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 00:09:02 ha-377576 kubelet[1383]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 00:09:02 ha-377576 kubelet[1383]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 00:10:02 ha-377576 kubelet[1383]: E0328 00:10:02.708917    1383 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 00:10:02 ha-377576 kubelet[1383]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 00:10:02 ha-377576 kubelet[1383]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 00:10:02 ha-377576 kubelet[1383]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 00:10:02 ha-377576 kubelet[1383]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0328 00:10:42.663674 1094487 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18485-1069254/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-377576 -n ha-377576
helpers_test.go:261: (dbg) Run:  kubectl --context ha-377576 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.25s)
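
Note on the "bufio.Scanner: token too long" error in the stderr block above: Go's bufio.Scanner refuses any line longer than its default 64 KiB token limit, so a single oversized line in lastStart.txt is enough to abort reading the whole file. A minimal, illustrative sketch (not minikube's actual logs.go code; the file path is a stand-in) of reading such a file with a larger scanner buffer:

	// Sketch: read a log file whose individual lines may exceed bufio.Scanner's
	// default 64 KiB token limit ("bufio.Scanner: token too long").
	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // illustrative path only
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Start with a 64 KiB buffer but allow tokens up to 1 MiB,
		// so very long log lines no longer abort the scan.
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}
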

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (313.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-200224
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-200224
E0328 00:26:14.356019 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
E0328 00:26:21.208708 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-200224: exit status 82 (2m2.731105476s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-200224-m03"  ...
	* Stopping node "multinode-200224-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-200224" : exit status 82
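
For context on the GUEST_STOP_TIMEOUT / exit status 82 above: the stop attempt kept finding the guest in state "Running" until minikube gave up after roughly two minutes. A hedged sketch of a stop-then-poll-with-deadline pattern that produces this kind of failure is below; stopVM and vmState are hypothetical stand-ins, not the real kvm2 driver API.

	// Sketch of a stop-with-deadline loop; the failure mode corresponds to the
	// deadline expiring while the guest still reports "Running".
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// stopVM and vmState are placeholders for a hypothetical driver API.
	func stopVM(name string) error   { fmt.Println("requesting shutdown of", name); return nil }
	func vmState(name string) string { return "Running" }

	func stopWithTimeout(name string, timeout time.Duration) error {
		if err := stopVM(name); err != nil {
			return err
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if vmState(name) == "Stopped" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return errors.New("GUEST_STOP_TIMEOUT: VM still " + vmState(name))
	}

	func main() {
		// Short timeout for illustration; the real command waited about two minutes.
		if err := stopWithTimeout("multinode-200224-m02", 6*time.Second); err != nil {
			fmt.Println(err)
		}
	}
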
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-200224 --wait=true -v=8 --alsologtostderr
E0328 00:29:24.255363 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-200224 --wait=true -v=8 --alsologtostderr: (3m7.749993799s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-200224
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-200224 -n multinode-200224
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-200224 logs -n 25: (1.654046689s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| ssh     | multinode-200224 ssh -n                                                                 | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | multinode-200224-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-200224 cp multinode-200224-m02:/home/docker/cp-test.txt                       | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3842904601/001/cp-test_multinode-200224-m02.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-200224 ssh -n                                                                 | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | multinode-200224-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-200224 cp multinode-200224-m02:/home/docker/cp-test.txt                       | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | multinode-200224:/home/docker/cp-test_multinode-200224-m02_multinode-200224.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-200224 ssh -n                                                                 | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | multinode-200224-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-200224 ssh -n multinode-200224 sudo cat                                       | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | /home/docker/cp-test_multinode-200224-m02_multinode-200224.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-200224 cp multinode-200224-m02:/home/docker/cp-test.txt                       | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | multinode-200224-m03:/home/docker/cp-test_multinode-200224-m02_multinode-200224-m03.txt |                  |         |                |                     |                     |
	| ssh     | multinode-200224 ssh -n                                                                 | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | multinode-200224-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-200224 ssh -n multinode-200224-m03 sudo cat                                   | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | /home/docker/cp-test_multinode-200224-m02_multinode-200224-m03.txt                      |                  |         |                |                     |                     |
	| cp      | multinode-200224 cp testdata/cp-test.txt                                                | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | multinode-200224-m03:/home/docker/cp-test.txt                                           |                  |         |                |                     |                     |
	| ssh     | multinode-200224 ssh -n                                                                 | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | multinode-200224-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-200224 cp multinode-200224-m03:/home/docker/cp-test.txt                       | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3842904601/001/cp-test_multinode-200224-m03.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-200224 ssh -n                                                                 | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | multinode-200224-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-200224 cp multinode-200224-m03:/home/docker/cp-test.txt                       | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | multinode-200224:/home/docker/cp-test_multinode-200224-m03_multinode-200224.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-200224 ssh -n                                                                 | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | multinode-200224-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-200224 ssh -n multinode-200224 sudo cat                                       | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | /home/docker/cp-test_multinode-200224-m03_multinode-200224.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-200224 cp multinode-200224-m03:/home/docker/cp-test.txt                       | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | multinode-200224-m02:/home/docker/cp-test_multinode-200224-m03_multinode-200224-m02.txt |                  |         |                |                     |                     |
	| ssh     | multinode-200224 ssh -n                                                                 | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | multinode-200224-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-200224 ssh -n multinode-200224-m02 sudo cat                                   | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | /home/docker/cp-test_multinode-200224-m03_multinode-200224-m02.txt                      |                  |         |                |                     |                     |
	| node    | multinode-200224 node stop m03                                                          | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	| node    | multinode-200224 node start                                                             | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:25 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |                |                     |                     |
	| node    | list -p multinode-200224                                                                | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:25 UTC |                     |
	| stop    | -p multinode-200224                                                                     | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:25 UTC |                     |
	| start   | -p multinode-200224                                                                     | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:27 UTC | 28 Mar 24 00:30 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |                |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |                |                     |                     |
	| node    | list -p multinode-200224                                                                | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:30 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 00:27:31
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 00:27:31.227138 1103152 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:27:31.227256 1103152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:27:31.227261 1103152 out.go:304] Setting ErrFile to fd 2...
	I0328 00:27:31.227265 1103152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:27:31.227461 1103152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 00:27:31.228032 1103152 out.go:298] Setting JSON to false
	I0328 00:27:31.228992 1103152 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":29348,"bootTime":1711556303,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0328 00:27:31.229069 1103152 start.go:139] virtualization: kvm guest
	I0328 00:27:31.231774 1103152 out.go:177] * [multinode-200224] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0328 00:27:31.233786 1103152 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 00:27:31.233698 1103152 notify.go:220] Checking for updates...
	I0328 00:27:31.235359 1103152 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 00:27:31.237068 1103152 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 00:27:31.238511 1103152 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	I0328 00:27:31.239903 1103152 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0328 00:27:31.241319 1103152 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 00:27:31.242985 1103152 config.go:182] Loaded profile config "multinode-200224": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:27:31.243080 1103152 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 00:27:31.243525 1103152 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:27:31.243574 1103152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:27:31.259623 1103152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41745
	I0328 00:27:31.260093 1103152 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:27:31.260692 1103152 main.go:141] libmachine: Using API Version  1
	I0328 00:27:31.260717 1103152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:27:31.261162 1103152 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:27:31.261420 1103152 main.go:141] libmachine: (multinode-200224) Calling .DriverName
	I0328 00:27:31.298792 1103152 out.go:177] * Using the kvm2 driver based on existing profile
	I0328 00:27:31.300080 1103152 start.go:297] selected driver: kvm2
	I0328 00:27:31.300090 1103152 start.go:901] validating driver "kvm2" against &{Name:multinode-200224 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.29.3 ClusterName:multinode-200224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.88 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.22 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:27:31.300217 1103152 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 00:27:31.300537 1103152 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 00:27:31.300618 1103152 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18485-1069254/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0328 00:27:31.316370 1103152 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0328 00:27:31.317198 1103152 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 00:27:31.317266 1103152 cni.go:84] Creating CNI manager for ""
	I0328 00:27:31.317281 1103152 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0328 00:27:31.317344 1103152 start.go:340] cluster config:
	{Name:multinode-200224 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-200224 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.88 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.22 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:27:31.317469 1103152 iso.go:125] acquiring lock: {Name:mk3da1fa7d63a581c817f327d86a827697458cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 00:27:31.319275 1103152 out.go:177] * Starting "multinode-200224" primary control-plane node in "multinode-200224" cluster
	I0328 00:27:31.320721 1103152 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 00:27:31.320770 1103152 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0328 00:27:31.320784 1103152 cache.go:56] Caching tarball of preloaded images
	I0328 00:27:31.320872 1103152 preload.go:173] Found /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0328 00:27:31.320883 1103152 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0328 00:27:31.320993 1103152 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/multinode-200224/config.json ...
	I0328 00:27:31.321180 1103152 start.go:360] acquireMachinesLock for multinode-200224: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 00:27:31.321220 1103152 start.go:364] duration metric: took 20.939µs to acquireMachinesLock for "multinode-200224"
	I0328 00:27:31.321234 1103152 start.go:96] Skipping create...Using existing machine configuration
	I0328 00:27:31.321243 1103152 fix.go:54] fixHost starting: 
	I0328 00:27:31.321499 1103152 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:27:31.321535 1103152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:27:31.336360 1103152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41547
	I0328 00:27:31.336887 1103152 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:27:31.337332 1103152 main.go:141] libmachine: Using API Version  1
	I0328 00:27:31.337355 1103152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:27:31.337693 1103152 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:27:31.337877 1103152 main.go:141] libmachine: (multinode-200224) Calling .DriverName
	I0328 00:27:31.338040 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetState
	I0328 00:27:31.339784 1103152 fix.go:112] recreateIfNeeded on multinode-200224: state=Running err=<nil>
	W0328 00:27:31.339803 1103152 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 00:27:31.344173 1103152 out.go:177] * Updating the running kvm2 "multinode-200224" VM ...
	I0328 00:27:31.347217 1103152 machine.go:94] provisionDockerMachine start ...
	I0328 00:27:31.347246 1103152 main.go:141] libmachine: (multinode-200224) Calling .DriverName
	I0328 00:27:31.347510 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHHostname
	I0328 00:27:31.350493 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:27:31.351023 1103152 main.go:141] libmachine: (multinode-200224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:1a", ip: ""} in network mk-multinode-200224: {Iface:virbr1 ExpiryTime:2024-03-28 01:22:32 +0000 UTC Type:0 Mac:52:54:00:36:d0:1a Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-200224 Clientid:01:52:54:00:36:d0:1a}
	I0328 00:27:31.351056 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined IP address 192.168.39.88 and MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:27:31.351220 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHPort
	I0328 00:27:31.351494 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHKeyPath
	I0328 00:27:31.351693 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHKeyPath
	I0328 00:27:31.351862 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHUsername
	I0328 00:27:31.352020 1103152 main.go:141] libmachine: Using SSH client type: native
	I0328 00:27:31.352222 1103152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0328 00:27:31.352235 1103152 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 00:27:31.474689 1103152 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-200224
	
	I0328 00:27:31.474730 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetMachineName
	I0328 00:27:31.474979 1103152 buildroot.go:166] provisioning hostname "multinode-200224"
	I0328 00:27:31.475016 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetMachineName
	I0328 00:27:31.475235 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHHostname
	I0328 00:27:31.477936 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:27:31.478397 1103152 main.go:141] libmachine: (multinode-200224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:1a", ip: ""} in network mk-multinode-200224: {Iface:virbr1 ExpiryTime:2024-03-28 01:22:32 +0000 UTC Type:0 Mac:52:54:00:36:d0:1a Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-200224 Clientid:01:52:54:00:36:d0:1a}
	I0328 00:27:31.478430 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined IP address 192.168.39.88 and MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:27:31.478657 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHPort
	I0328 00:27:31.478898 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHKeyPath
	I0328 00:27:31.479058 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHKeyPath
	I0328 00:27:31.479190 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHUsername
	I0328 00:27:31.479382 1103152 main.go:141] libmachine: Using SSH client type: native
	I0328 00:27:31.479554 1103152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0328 00:27:31.479567 1103152 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-200224 && echo "multinode-200224" | sudo tee /etc/hostname
	I0328 00:27:31.615145 1103152 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-200224
	
	I0328 00:27:31.615175 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHHostname
	I0328 00:27:31.617879 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:27:31.618254 1103152 main.go:141] libmachine: (multinode-200224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:1a", ip: ""} in network mk-multinode-200224: {Iface:virbr1 ExpiryTime:2024-03-28 01:22:32 +0000 UTC Type:0 Mac:52:54:00:36:d0:1a Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-200224 Clientid:01:52:54:00:36:d0:1a}
	I0328 00:27:31.618294 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined IP address 192.168.39.88 and MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:27:31.618531 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHPort
	I0328 00:27:31.618752 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHKeyPath
	I0328 00:27:31.618915 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHKeyPath
	I0328 00:27:31.619049 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHUsername
	I0328 00:27:31.619277 1103152 main.go:141] libmachine: Using SSH client type: native
	I0328 00:27:31.619442 1103152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0328 00:27:31.619458 1103152 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-200224' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-200224/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-200224' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 00:27:31.727376 1103152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 00:27:31.727405 1103152 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 00:27:31.727427 1103152 buildroot.go:174] setting up certificates
	I0328 00:27:31.727437 1103152 provision.go:84] configureAuth start
	I0328 00:27:31.727446 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetMachineName
	I0328 00:27:31.727758 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetIP
	I0328 00:27:31.730502 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:27:31.730866 1103152 main.go:141] libmachine: (multinode-200224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:1a", ip: ""} in network mk-multinode-200224: {Iface:virbr1 ExpiryTime:2024-03-28 01:22:32 +0000 UTC Type:0 Mac:52:54:00:36:d0:1a Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-200224 Clientid:01:52:54:00:36:d0:1a}
	I0328 00:27:31.730893 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined IP address 192.168.39.88 and MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:27:31.731061 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHHostname
	I0328 00:27:31.733533 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:27:31.733880 1103152 main.go:141] libmachine: (multinode-200224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:1a", ip: ""} in network mk-multinode-200224: {Iface:virbr1 ExpiryTime:2024-03-28 01:22:32 +0000 UTC Type:0 Mac:52:54:00:36:d0:1a Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-200224 Clientid:01:52:54:00:36:d0:1a}
	I0328 00:27:31.733917 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined IP address 192.168.39.88 and MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:27:31.734177 1103152 provision.go:143] copyHostCerts
	I0328 00:27:31.734211 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 00:27:31.734274 1103152 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 00:27:31.734287 1103152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 00:27:31.734363 1103152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 00:27:31.734442 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 00:27:31.734460 1103152 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 00:27:31.734466 1103152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 00:27:31.734491 1103152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 00:27:31.734531 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 00:27:31.734547 1103152 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 00:27:31.734554 1103152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 00:27:31.734573 1103152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 00:27:31.734619 1103152 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.multinode-200224 san=[127.0.0.1 192.168.39.88 localhost minikube multinode-200224]
	I0328 00:27:31.891932 1103152 provision.go:177] copyRemoteCerts
	I0328 00:27:31.891997 1103152 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 00:27:31.892024 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHHostname
	I0328 00:27:31.895385 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:27:31.895775 1103152 main.go:141] libmachine: (multinode-200224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:1a", ip: ""} in network mk-multinode-200224: {Iface:virbr1 ExpiryTime:2024-03-28 01:22:32 +0000 UTC Type:0 Mac:52:54:00:36:d0:1a Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-200224 Clientid:01:52:54:00:36:d0:1a}
	I0328 00:27:31.895810 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined IP address 192.168.39.88 and MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:27:31.895998 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHPort
	I0328 00:27:31.896234 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHKeyPath
	I0328 00:27:31.896441 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHUsername
	I0328 00:27:31.896651 1103152 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/multinode-200224/id_rsa Username:docker}
	I0328 00:27:31.988327 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0328 00:27:31.988416 1103152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 00:27:32.024701 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0328 00:27:32.024790 1103152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0328 00:27:32.053325 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0328 00:27:32.053413 1103152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0328 00:27:32.082138 1103152 provision.go:87] duration metric: took 354.687044ms to configureAuth
	I0328 00:27:32.082167 1103152 buildroot.go:189] setting minikube options for container-runtime
	I0328 00:27:32.082434 1103152 config.go:182] Loaded profile config "multinode-200224": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:27:32.082525 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHHostname
	I0328 00:27:32.085264 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:27:32.085657 1103152 main.go:141] libmachine: (multinode-200224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:1a", ip: ""} in network mk-multinode-200224: {Iface:virbr1 ExpiryTime:2024-03-28 01:22:32 +0000 UTC Type:0 Mac:52:54:00:36:d0:1a Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-200224 Clientid:01:52:54:00:36:d0:1a}
	I0328 00:27:32.085681 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined IP address 192.168.39.88 and MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:27:32.085866 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHPort
	I0328 00:27:32.086108 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHKeyPath
	I0328 00:27:32.086285 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHKeyPath
	I0328 00:27:32.086449 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHUsername
	I0328 00:27:32.086628 1103152 main.go:141] libmachine: Using SSH client type: native
	I0328 00:27:32.086832 1103152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0328 00:27:32.086849 1103152 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 00:29:02.848182 1103152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 00:29:02.848216 1103152 machine.go:97] duration metric: took 1m31.50097817s to provisionDockerMachine
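	For context, the provisioning step above writes a small sysconfig drop-in over SSH and restarts CRI-O. A minimal shell sketch of what that leaves on the node follows; the EnvironmentFile wiring in the crio unit is an assumption about the minikube guest image, not something this log confirms.

	    # Options file written by the provisioner (content taken from the log above)
	    cat /etc/sysconfig/crio.minikube
	    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

	    # Assumed: the crio systemd unit sources this file via EnvironmentFile and
	    # passes $CRIO_MINIKUBE_OPTIONS on its ExecStart line.
	    systemctl cat crio | grep -i -E 'EnvironmentFile|CRIO_MINIKUBE_OPTIONS'
	    sudo systemctl restart crio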
	I0328 00:29:02.848230 1103152 start.go:293] postStartSetup for "multinode-200224" (driver="kvm2")
	I0328 00:29:02.848245 1103152 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 00:29:02.848268 1103152 main.go:141] libmachine: (multinode-200224) Calling .DriverName
	I0328 00:29:02.848735 1103152 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 00:29:02.848768 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHHostname
	I0328 00:29:02.852447 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:29:02.853127 1103152 main.go:141] libmachine: (multinode-200224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:1a", ip: ""} in network mk-multinode-200224: {Iface:virbr1 ExpiryTime:2024-03-28 01:22:32 +0000 UTC Type:0 Mac:52:54:00:36:d0:1a Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-200224 Clientid:01:52:54:00:36:d0:1a}
	I0328 00:29:02.853158 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined IP address 192.168.39.88 and MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:29:02.853319 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHPort
	I0328 00:29:02.853551 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHKeyPath
	I0328 00:29:02.853743 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHUsername
	I0328 00:29:02.853917 1103152 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/multinode-200224/id_rsa Username:docker}
	I0328 00:29:02.943195 1103152 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 00:29:02.947939 1103152 command_runner.go:130] > NAME=Buildroot
	I0328 00:29:02.947967 1103152 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0328 00:29:02.947974 1103152 command_runner.go:130] > ID=buildroot
	I0328 00:29:02.947981 1103152 command_runner.go:130] > VERSION_ID=2023.02.9
	I0328 00:29:02.947989 1103152 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0328 00:29:02.948043 1103152 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 00:29:02.948068 1103152 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 00:29:02.948152 1103152 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 00:29:02.948252 1103152 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 00:29:02.948265 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> /etc/ssl/certs/10765222.pem
	I0328 00:29:02.948368 1103152 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 00:29:02.959641 1103152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 00:29:02.986509 1103152 start.go:296] duration metric: took 138.26107ms for postStartSetup
	I0328 00:29:02.986573 1103152 fix.go:56] duration metric: took 1m31.665330616s for fixHost
	I0328 00:29:02.986598 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHHostname
	I0328 00:29:02.989818 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:29:02.990307 1103152 main.go:141] libmachine: (multinode-200224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:1a", ip: ""} in network mk-multinode-200224: {Iface:virbr1 ExpiryTime:2024-03-28 01:22:32 +0000 UTC Type:0 Mac:52:54:00:36:d0:1a Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-200224 Clientid:01:52:54:00:36:d0:1a}
	I0328 00:29:02.990336 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined IP address 192.168.39.88 and MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:29:02.990590 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHPort
	I0328 00:29:02.990826 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHKeyPath
	I0328 00:29:02.991015 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHKeyPath
	I0328 00:29:02.991179 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHUsername
	I0328 00:29:02.991374 1103152 main.go:141] libmachine: Using SSH client type: native
	I0328 00:29:02.991551 1103152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0328 00:29:02.991562 1103152 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 00:29:03.099724 1103152 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711585743.073157601
	
	I0328 00:29:03.099750 1103152 fix.go:216] guest clock: 1711585743.073157601
	I0328 00:29:03.099757 1103152 fix.go:229] Guest: 2024-03-28 00:29:03.073157601 +0000 UTC Remote: 2024-03-28 00:29:02.986578288 +0000 UTC m=+91.812055347 (delta=86.579313ms)
	I0328 00:29:03.099805 1103152 fix.go:200] guest clock delta is within tolerance: 86.579313ms
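	A minimal shell sketch of the guest-clock check above: read the VM's clock over SSH, compare it with the controller's clock, and confirm the drift stays within tolerance. The IP, user, and key path come from this log; the use of bc for the subtraction is illustrative.

	    guest=$(ssh -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/multinode-200224/id_rsa docker@192.168.39.88 'date +%s.%N')
	    host=$(date +%s.%N)
	    echo "guest-host clock delta: $(echo "$guest - $host" | bc)s"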
	I0328 00:29:03.099811 1103152 start.go:83] releasing machines lock for "multinode-200224", held for 1m31.778582087s
	I0328 00:29:03.099839 1103152 main.go:141] libmachine: (multinode-200224) Calling .DriverName
	I0328 00:29:03.100186 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetIP
	I0328 00:29:03.102829 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:29:03.103316 1103152 main.go:141] libmachine: (multinode-200224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:1a", ip: ""} in network mk-multinode-200224: {Iface:virbr1 ExpiryTime:2024-03-28 01:22:32 +0000 UTC Type:0 Mac:52:54:00:36:d0:1a Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-200224 Clientid:01:52:54:00:36:d0:1a}
	I0328 00:29:03.103349 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined IP address 192.168.39.88 and MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:29:03.103507 1103152 main.go:141] libmachine: (multinode-200224) Calling .DriverName
	I0328 00:29:03.104150 1103152 main.go:141] libmachine: (multinode-200224) Calling .DriverName
	I0328 00:29:03.104350 1103152 main.go:141] libmachine: (multinode-200224) Calling .DriverName
	I0328 00:29:03.104451 1103152 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 00:29:03.104501 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHHostname
	I0328 00:29:03.104631 1103152 ssh_runner.go:195] Run: cat /version.json
	I0328 00:29:03.104665 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHHostname
	I0328 00:29:03.107497 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:29:03.107878 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:29:03.107962 1103152 main.go:141] libmachine: (multinode-200224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:1a", ip: ""} in network mk-multinode-200224: {Iface:virbr1 ExpiryTime:2024-03-28 01:22:32 +0000 UTC Type:0 Mac:52:54:00:36:d0:1a Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-200224 Clientid:01:52:54:00:36:d0:1a}
	I0328 00:29:03.107988 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined IP address 192.168.39.88 and MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:29:03.108169 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHPort
	I0328 00:29:03.108282 1103152 main.go:141] libmachine: (multinode-200224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:1a", ip: ""} in network mk-multinode-200224: {Iface:virbr1 ExpiryTime:2024-03-28 01:22:32 +0000 UTC Type:0 Mac:52:54:00:36:d0:1a Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-200224 Clientid:01:52:54:00:36:d0:1a}
	I0328 00:29:03.108314 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined IP address 192.168.39.88 and MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:29:03.108537 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHPort
	I0328 00:29:03.108556 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHKeyPath
	I0328 00:29:03.108752 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHKeyPath
	I0328 00:29:03.108771 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHUsername
	I0328 00:29:03.108936 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHUsername
	I0328 00:29:03.108952 1103152 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/multinode-200224/id_rsa Username:docker}
	I0328 00:29:03.109144 1103152 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/multinode-200224/id_rsa Username:docker}
	I0328 00:29:03.187854 1103152 command_runner.go:130] > {"iso_version": "v1.33.0-1711559712-18485", "kicbase_version": "v0.0.43-beta.0", "minikube_version": "v1.33.0-beta.0", "commit": "db97f5257476488cfa11a4cd2d95d2aa6fbd9d33"}
	I0328 00:29:03.188255 1103152 ssh_runner.go:195] Run: systemctl --version
	I0328 00:29:03.228744 1103152 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0328 00:29:03.229526 1103152 command_runner.go:130] > systemd 252 (252)
	I0328 00:29:03.229562 1103152 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0328 00:29:03.229637 1103152 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 00:29:03.388301 1103152 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0328 00:29:03.397669 1103152 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0328 00:29:03.397788 1103152 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 00:29:03.397859 1103152 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 00:29:03.408781 1103152 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0328 00:29:03.408814 1103152 start.go:494] detecting cgroup driver to use...
	I0328 00:29:03.408892 1103152 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 00:29:03.427340 1103152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 00:29:03.442715 1103152 docker.go:217] disabling cri-docker service (if available) ...
	I0328 00:29:03.442798 1103152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 00:29:03.458411 1103152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 00:29:03.473847 1103152 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 00:29:03.622296 1103152 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 00:29:03.765541 1103152 docker.go:233] disabling docker service ...
	I0328 00:29:03.765655 1103152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 00:29:03.785079 1103152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 00:29:03.800720 1103152 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 00:29:03.945380 1103152 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 00:29:04.090196 1103152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 00:29:04.106512 1103152 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 00:29:04.126782 1103152 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0328 00:29:04.126864 1103152 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 00:29:04.126917 1103152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:29:04.138539 1103152 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 00:29:04.138626 1103152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:29:04.150071 1103152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:29:04.161478 1103152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:29:04.173182 1103152 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 00:29:04.184544 1103152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:29:04.196238 1103152 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:29:04.207833 1103152 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:29:04.228041 1103152 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 00:29:04.254639 1103152 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0328 00:29:04.254743 1103152 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
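	After the sed edits above, the relevant keys in /etc/crio/crio.conf.d/02-crio.conf should end up roughly as shown below; this is an illustrative reconstruction from the commands in the log, not output captured from the VM.

	    grep -A2 -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls' /etc/crio/crio.conf.d/02-crio.conf
	    # pause_image = "registry.k8s.io/pause:3.9"
	    # cgroup_manager = "cgroupfs"
	    # conmon_cgroup = "pod"
	    # default_sysctls = [
	    #   "net.ipv4.ip_unprivileged_port_start=0",
	    # ]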
	I0328 00:29:04.275161 1103152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:29:04.418353 1103152 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 00:29:07.820833 1103152 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.402429511s)
	I0328 00:29:07.820872 1103152 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 00:29:07.820934 1103152 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 00:29:07.826286 1103152 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0328 00:29:07.826312 1103152 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0328 00:29:07.826319 1103152 command_runner.go:130] > Device: 0,22	Inode: 1336        Links: 1
	I0328 00:29:07.826326 1103152 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0328 00:29:07.826330 1103152 command_runner.go:130] > Access: 2024-03-28 00:29:07.679370955 +0000
	I0328 00:29:07.826339 1103152 command_runner.go:130] > Modify: 2024-03-28 00:29:07.679370955 +0000
	I0328 00:29:07.826347 1103152 command_runner.go:130] > Change: 2024-03-28 00:29:07.679370955 +0000
	I0328 00:29:07.826353 1103152 command_runner.go:130] >  Birth: -
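	The "Will wait 60s for socket path" step above amounts to polling for the CRI-O socket after the restart; a minimal sketch of that loop (timing and loop shape are illustrative):

	    for _ in $(seq 1 60); do
	      [ -S /var/run/crio/crio.sock ] && break
	      sleep 1
	    done
	    stat /var/run/crio/crio.sock   # succeeds once CRI-O has recreated its socket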
	I0328 00:29:07.826378 1103152 start.go:562] Will wait 60s for crictl version
	I0328 00:29:07.826443 1103152 ssh_runner.go:195] Run: which crictl
	I0328 00:29:07.830790 1103152 command_runner.go:130] > /usr/bin/crictl
	I0328 00:29:07.831153 1103152 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 00:29:07.869588 1103152 command_runner.go:130] > Version:  0.1.0
	I0328 00:29:07.869617 1103152 command_runner.go:130] > RuntimeName:  cri-o
	I0328 00:29:07.870761 1103152 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0328 00:29:07.870798 1103152 command_runner.go:130] > RuntimeApiVersion:  v1
	I0328 00:29:07.872175 1103152 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 00:29:07.872255 1103152 ssh_runner.go:195] Run: crio --version
	I0328 00:29:07.902965 1103152 command_runner.go:130] > crio version 1.29.1
	I0328 00:29:07.902997 1103152 command_runner.go:130] > Version:        1.29.1
	I0328 00:29:07.903007 1103152 command_runner.go:130] > GitCommit:      unknown
	I0328 00:29:07.903014 1103152 command_runner.go:130] > GitCommitDate:  unknown
	I0328 00:29:07.903021 1103152 command_runner.go:130] > GitTreeState:   clean
	I0328 00:29:07.903029 1103152 command_runner.go:130] > BuildDate:      2024-03-27T22:46:22Z
	I0328 00:29:07.903037 1103152 command_runner.go:130] > GoVersion:      go1.21.6
	I0328 00:29:07.903041 1103152 command_runner.go:130] > Compiler:       gc
	I0328 00:29:07.903045 1103152 command_runner.go:130] > Platform:       linux/amd64
	I0328 00:29:07.903049 1103152 command_runner.go:130] > Linkmode:       dynamic
	I0328 00:29:07.903055 1103152 command_runner.go:130] > BuildTags:      
	I0328 00:29:07.903059 1103152 command_runner.go:130] >   containers_image_ostree_stub
	I0328 00:29:07.903066 1103152 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0328 00:29:07.903070 1103152 command_runner.go:130] >   btrfs_noversion
	I0328 00:29:07.903075 1103152 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0328 00:29:07.903079 1103152 command_runner.go:130] >   libdm_no_deferred_remove
	I0328 00:29:07.903087 1103152 command_runner.go:130] >   seccomp
	I0328 00:29:07.903091 1103152 command_runner.go:130] > LDFlags:          unknown
	I0328 00:29:07.903095 1103152 command_runner.go:130] > SeccompEnabled:   true
	I0328 00:29:07.903099 1103152 command_runner.go:130] > AppArmorEnabled:  false
	I0328 00:29:07.903258 1103152 ssh_runner.go:195] Run: crio --version
	I0328 00:29:07.931193 1103152 command_runner.go:130] > crio version 1.29.1
	I0328 00:29:07.931218 1103152 command_runner.go:130] > Version:        1.29.1
	I0328 00:29:07.931224 1103152 command_runner.go:130] > GitCommit:      unknown
	I0328 00:29:07.931228 1103152 command_runner.go:130] > GitCommitDate:  unknown
	I0328 00:29:07.931231 1103152 command_runner.go:130] > GitTreeState:   clean
	I0328 00:29:07.931237 1103152 command_runner.go:130] > BuildDate:      2024-03-27T22:46:22Z
	I0328 00:29:07.931242 1103152 command_runner.go:130] > GoVersion:      go1.21.6
	I0328 00:29:07.931245 1103152 command_runner.go:130] > Compiler:       gc
	I0328 00:29:07.931250 1103152 command_runner.go:130] > Platform:       linux/amd64
	I0328 00:29:07.931254 1103152 command_runner.go:130] > Linkmode:       dynamic
	I0328 00:29:07.931259 1103152 command_runner.go:130] > BuildTags:      
	I0328 00:29:07.931263 1103152 command_runner.go:130] >   containers_image_ostree_stub
	I0328 00:29:07.931268 1103152 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0328 00:29:07.931272 1103152 command_runner.go:130] >   btrfs_noversion
	I0328 00:29:07.931276 1103152 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0328 00:29:07.931282 1103152 command_runner.go:130] >   libdm_no_deferred_remove
	I0328 00:29:07.931286 1103152 command_runner.go:130] >   seccomp
	I0328 00:29:07.931292 1103152 command_runner.go:130] > LDFlags:          unknown
	I0328 00:29:07.931296 1103152 command_runner.go:130] > SeccompEnabled:   true
	I0328 00:29:07.931307 1103152 command_runner.go:130] > AppArmorEnabled:  false
	I0328 00:29:07.934944 1103152 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0328 00:29:07.936316 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetIP
	I0328 00:29:07.939195 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:29:07.939544 1103152 main.go:141] libmachine: (multinode-200224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:1a", ip: ""} in network mk-multinode-200224: {Iface:virbr1 ExpiryTime:2024-03-28 01:22:32 +0000 UTC Type:0 Mac:52:54:00:36:d0:1a Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-200224 Clientid:01:52:54:00:36:d0:1a}
	I0328 00:29:07.939574 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined IP address 192.168.39.88 and MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:29:07.939760 1103152 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0328 00:29:07.944303 1103152 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0328 00:29:07.944403 1103152 kubeadm.go:877] updating cluster {Name:multinode-200224 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-200224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.88 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.22 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 00:29:07.944550 1103152 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 00:29:07.944610 1103152 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 00:29:08.003857 1103152 command_runner.go:130] > {
	I0328 00:29:08.003889 1103152 command_runner.go:130] >   "images": [
	I0328 00:29:08.003895 1103152 command_runner.go:130] >     {
	I0328 00:29:08.003906 1103152 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0328 00:29:08.003913 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.003921 1103152 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0328 00:29:08.003926 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.003932 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.003944 1103152 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0328 00:29:08.003955 1103152 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0328 00:29:08.003965 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.003973 1103152 command_runner.go:130] >       "size": "65291810",
	I0328 00:29:08.003980 1103152 command_runner.go:130] >       "uid": null,
	I0328 00:29:08.003988 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.004002 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.004016 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.004023 1103152 command_runner.go:130] >     },
	I0328 00:29:08.004030 1103152 command_runner.go:130] >     {
	I0328 00:29:08.004040 1103152 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0328 00:29:08.004050 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.004060 1103152 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0328 00:29:08.004069 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.004077 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.004091 1103152 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0328 00:29:08.004104 1103152 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0328 00:29:08.004111 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.004118 1103152 command_runner.go:130] >       "size": "1363676",
	I0328 00:29:08.004126 1103152 command_runner.go:130] >       "uid": null,
	I0328 00:29:08.004136 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.004145 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.004152 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.004161 1103152 command_runner.go:130] >     },
	I0328 00:29:08.004168 1103152 command_runner.go:130] >     {
	I0328 00:29:08.004181 1103152 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0328 00:29:08.004197 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.004209 1103152 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0328 00:29:08.004218 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.004225 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.004241 1103152 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0328 00:29:08.004257 1103152 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0328 00:29:08.004266 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.004274 1103152 command_runner.go:130] >       "size": "31470524",
	I0328 00:29:08.004283 1103152 command_runner.go:130] >       "uid": null,
	I0328 00:29:08.004291 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.004301 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.004309 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.004319 1103152 command_runner.go:130] >     },
	I0328 00:29:08.004324 1103152 command_runner.go:130] >     {
	I0328 00:29:08.004338 1103152 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0328 00:29:08.004349 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.004361 1103152 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0328 00:29:08.004370 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.004380 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.004395 1103152 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0328 00:29:08.004415 1103152 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0328 00:29:08.004424 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.004431 1103152 command_runner.go:130] >       "size": "61245718",
	I0328 00:29:08.004438 1103152 command_runner.go:130] >       "uid": null,
	I0328 00:29:08.004445 1103152 command_runner.go:130] >       "username": "nonroot",
	I0328 00:29:08.004453 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.004461 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.004469 1103152 command_runner.go:130] >     },
	I0328 00:29:08.004475 1103152 command_runner.go:130] >     {
	I0328 00:29:08.004489 1103152 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0328 00:29:08.004498 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.004508 1103152 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0328 00:29:08.004516 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.004524 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.004539 1103152 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0328 00:29:08.004554 1103152 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0328 00:29:08.004567 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.004575 1103152 command_runner.go:130] >       "size": "150779692",
	I0328 00:29:08.004584 1103152 command_runner.go:130] >       "uid": {
	I0328 00:29:08.004591 1103152 command_runner.go:130] >         "value": "0"
	I0328 00:29:08.004600 1103152 command_runner.go:130] >       },
	I0328 00:29:08.004607 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.004615 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.004622 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.004631 1103152 command_runner.go:130] >     },
	I0328 00:29:08.004637 1103152 command_runner.go:130] >     {
	I0328 00:29:08.004650 1103152 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0328 00:29:08.004660 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.004669 1103152 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0328 00:29:08.004678 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.004685 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.004697 1103152 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0328 00:29:08.004713 1103152 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0328 00:29:08.004723 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.004733 1103152 command_runner.go:130] >       "size": "128508878",
	I0328 00:29:08.004743 1103152 command_runner.go:130] >       "uid": {
	I0328 00:29:08.004751 1103152 command_runner.go:130] >         "value": "0"
	I0328 00:29:08.004758 1103152 command_runner.go:130] >       },
	I0328 00:29:08.004768 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.004776 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.004784 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.004791 1103152 command_runner.go:130] >     },
	I0328 00:29:08.004798 1103152 command_runner.go:130] >     {
	I0328 00:29:08.004812 1103152 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0328 00:29:08.004821 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.004831 1103152 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0328 00:29:08.004840 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.004847 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.004863 1103152 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0328 00:29:08.004880 1103152 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0328 00:29:08.004888 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.004895 1103152 command_runner.go:130] >       "size": "123142962",
	I0328 00:29:08.004904 1103152 command_runner.go:130] >       "uid": {
	I0328 00:29:08.004911 1103152 command_runner.go:130] >         "value": "0"
	I0328 00:29:08.004919 1103152 command_runner.go:130] >       },
	I0328 00:29:08.004926 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.004935 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.004943 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.004952 1103152 command_runner.go:130] >     },
	I0328 00:29:08.004958 1103152 command_runner.go:130] >     {
	I0328 00:29:08.004969 1103152 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0328 00:29:08.004978 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.004987 1103152 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0328 00:29:08.004996 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.005006 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.005029 1103152 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0328 00:29:08.005045 1103152 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0328 00:29:08.005054 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.005062 1103152 command_runner.go:130] >       "size": "83634073",
	I0328 00:29:08.005073 1103152 command_runner.go:130] >       "uid": null,
	I0328 00:29:08.005079 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.005084 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.005090 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.005094 1103152 command_runner.go:130] >     },
	I0328 00:29:08.005102 1103152 command_runner.go:130] >     {
	I0328 00:29:08.005111 1103152 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0328 00:29:08.005119 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.005127 1103152 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0328 00:29:08.005134 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.005144 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.005167 1103152 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0328 00:29:08.005184 1103152 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0328 00:29:08.005201 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.005208 1103152 command_runner.go:130] >       "size": "60724018",
	I0328 00:29:08.005218 1103152 command_runner.go:130] >       "uid": {
	I0328 00:29:08.005226 1103152 command_runner.go:130] >         "value": "0"
	I0328 00:29:08.005235 1103152 command_runner.go:130] >       },
	I0328 00:29:08.005244 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.005254 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.005262 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.005268 1103152 command_runner.go:130] >     },
	I0328 00:29:08.005275 1103152 command_runner.go:130] >     {
	I0328 00:29:08.005286 1103152 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0328 00:29:08.005296 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.005305 1103152 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0328 00:29:08.005320 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.005334 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.005349 1103152 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0328 00:29:08.005364 1103152 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0328 00:29:08.005371 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.005381 1103152 command_runner.go:130] >       "size": "750414",
	I0328 00:29:08.005388 1103152 command_runner.go:130] >       "uid": {
	I0328 00:29:08.005399 1103152 command_runner.go:130] >         "value": "65535"
	I0328 00:29:08.005408 1103152 command_runner.go:130] >       },
	I0328 00:29:08.005415 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.005426 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.005436 1103152 command_runner.go:130] >       "pinned": true
	I0328 00:29:08.005443 1103152 command_runner.go:130] >     }
	I0328 00:29:08.005451 1103152 command_runner.go:130] >   ]
	I0328 00:29:08.005457 1103152 command_runner.go:130] > }
	I0328 00:29:08.005667 1103152 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 00:29:08.005682 1103152 crio.go:433] Images already preloaded, skipping extraction
	I0328 00:29:08.005745 1103152 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 00:29:08.047523 1103152 command_runner.go:130] > {
	I0328 00:29:08.047557 1103152 command_runner.go:130] >   "images": [
	I0328 00:29:08.047563 1103152 command_runner.go:130] >     {
	I0328 00:29:08.047576 1103152 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0328 00:29:08.047584 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.047591 1103152 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0328 00:29:08.047595 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.047601 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.047616 1103152 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0328 00:29:08.047627 1103152 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0328 00:29:08.047643 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.047652 1103152 command_runner.go:130] >       "size": "65291810",
	I0328 00:29:08.047659 1103152 command_runner.go:130] >       "uid": null,
	I0328 00:29:08.047666 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.047686 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.047699 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.047705 1103152 command_runner.go:130] >     },
	I0328 00:29:08.047710 1103152 command_runner.go:130] >     {
	I0328 00:29:08.047721 1103152 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0328 00:29:08.047731 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.047740 1103152 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0328 00:29:08.047749 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.047757 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.047770 1103152 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0328 00:29:08.047785 1103152 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0328 00:29:08.047792 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.047800 1103152 command_runner.go:130] >       "size": "1363676",
	I0328 00:29:08.047806 1103152 command_runner.go:130] >       "uid": null,
	I0328 00:29:08.047821 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.047831 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.047839 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.047845 1103152 command_runner.go:130] >     },
	I0328 00:29:08.047851 1103152 command_runner.go:130] >     {
	I0328 00:29:08.047870 1103152 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0328 00:29:08.047878 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.047888 1103152 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0328 00:29:08.047897 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.047905 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.047922 1103152 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0328 00:29:08.047938 1103152 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0328 00:29:08.047947 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.047955 1103152 command_runner.go:130] >       "size": "31470524",
	I0328 00:29:08.047965 1103152 command_runner.go:130] >       "uid": null,
	I0328 00:29:08.047972 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.047985 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.047992 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.048001 1103152 command_runner.go:130] >     },
	I0328 00:29:08.048007 1103152 command_runner.go:130] >     {
	I0328 00:29:08.048020 1103152 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0328 00:29:08.048030 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.048038 1103152 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0328 00:29:08.048059 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.048065 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.048078 1103152 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0328 00:29:08.048111 1103152 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0328 00:29:08.048123 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.048131 1103152 command_runner.go:130] >       "size": "61245718",
	I0328 00:29:08.048140 1103152 command_runner.go:130] >       "uid": null,
	I0328 00:29:08.048150 1103152 command_runner.go:130] >       "username": "nonroot",
	I0328 00:29:08.048162 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.048173 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.048181 1103152 command_runner.go:130] >     },
	I0328 00:29:08.048190 1103152 command_runner.go:130] >     {
	I0328 00:29:08.048200 1103152 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0328 00:29:08.048210 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.048220 1103152 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0328 00:29:08.048229 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.048237 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.048252 1103152 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0328 00:29:08.048269 1103152 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0328 00:29:08.048279 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.048289 1103152 command_runner.go:130] >       "size": "150779692",
	I0328 00:29:08.048298 1103152 command_runner.go:130] >       "uid": {
	I0328 00:29:08.048305 1103152 command_runner.go:130] >         "value": "0"
	I0328 00:29:08.048314 1103152 command_runner.go:130] >       },
	I0328 00:29:08.048321 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.048329 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.048339 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.048345 1103152 command_runner.go:130] >     },
	I0328 00:29:08.048354 1103152 command_runner.go:130] >     {
	I0328 00:29:08.048366 1103152 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0328 00:29:08.048376 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.048385 1103152 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0328 00:29:08.048393 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.048401 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.048417 1103152 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0328 00:29:08.048432 1103152 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0328 00:29:08.048441 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.048449 1103152 command_runner.go:130] >       "size": "128508878",
	I0328 00:29:08.048458 1103152 command_runner.go:130] >       "uid": {
	I0328 00:29:08.048465 1103152 command_runner.go:130] >         "value": "0"
	I0328 00:29:08.048474 1103152 command_runner.go:130] >       },
	I0328 00:29:08.048482 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.048492 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.048501 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.048507 1103152 command_runner.go:130] >     },
	I0328 00:29:08.048516 1103152 command_runner.go:130] >     {
	I0328 00:29:08.048526 1103152 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0328 00:29:08.048537 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.048549 1103152 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0328 00:29:08.048555 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.048563 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.048581 1103152 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0328 00:29:08.048597 1103152 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0328 00:29:08.048610 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.048623 1103152 command_runner.go:130] >       "size": "123142962",
	I0328 00:29:08.048632 1103152 command_runner.go:130] >       "uid": {
	I0328 00:29:08.048640 1103152 command_runner.go:130] >         "value": "0"
	I0328 00:29:08.048649 1103152 command_runner.go:130] >       },
	I0328 00:29:08.048656 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.048667 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.048677 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.048683 1103152 command_runner.go:130] >     },
	I0328 00:29:08.048692 1103152 command_runner.go:130] >     {
	I0328 00:29:08.048702 1103152 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0328 00:29:08.048712 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.048722 1103152 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0328 00:29:08.048731 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.048740 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.048760 1103152 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0328 00:29:08.048775 1103152 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0328 00:29:08.048784 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.048799 1103152 command_runner.go:130] >       "size": "83634073",
	I0328 00:29:08.048810 1103152 command_runner.go:130] >       "uid": null,
	I0328 00:29:08.048821 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.048831 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.048839 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.048848 1103152 command_runner.go:130] >     },
	I0328 00:29:08.048854 1103152 command_runner.go:130] >     {
	I0328 00:29:08.048865 1103152 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0328 00:29:08.048874 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.048884 1103152 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0328 00:29:08.048892 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.048900 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.048915 1103152 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0328 00:29:08.048937 1103152 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0328 00:29:08.048946 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.048957 1103152 command_runner.go:130] >       "size": "60724018",
	I0328 00:29:08.048967 1103152 command_runner.go:130] >       "uid": {
	I0328 00:29:08.048976 1103152 command_runner.go:130] >         "value": "0"
	I0328 00:29:08.048985 1103152 command_runner.go:130] >       },
	I0328 00:29:08.048994 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.049003 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.049010 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.049023 1103152 command_runner.go:130] >     },
	I0328 00:29:08.049030 1103152 command_runner.go:130] >     {
	I0328 00:29:08.049072 1103152 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0328 00:29:08.049082 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.049091 1103152 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0328 00:29:08.049099 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.049107 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.049122 1103152 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0328 00:29:08.049141 1103152 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0328 00:29:08.049151 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.049160 1103152 command_runner.go:130] >       "size": "750414",
	I0328 00:29:08.049170 1103152 command_runner.go:130] >       "uid": {
	I0328 00:29:08.049178 1103152 command_runner.go:130] >         "value": "65535"
	I0328 00:29:08.049186 1103152 command_runner.go:130] >       },
	I0328 00:29:08.049193 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.049201 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.049211 1103152 command_runner.go:130] >       "pinned": true
	I0328 00:29:08.049217 1103152 command_runner.go:130] >     }
	I0328 00:29:08.049222 1103152 command_runner.go:130] >   ]
	I0328 00:29:08.049228 1103152 command_runner.go:130] > }
	I0328 00:29:08.050301 1103152 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 00:29:08.050328 1103152 cache_images.go:84] Images are preloaded, skipping loading
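	The step above is minikube checking the images reported by "sudo crictl images --output json" against its expected preload list (crio.go:514 / cache_images.go:84). The same JSON can be inspected by hand; the following is a minimal Go sketch for doing so, not minikube's actual implementation, with struct fields mirroring the keys printed in the log output above (id, repoTags, repoDigests, size, pinned):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList mirrors the fields visible in the crictl JSON output logged above.
	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		// Assumes crictl is present on the node and the caller may invoke it via sudo.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			fmt.Println(img.RepoTags, img.Pinned)
		}
	}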
	I0328 00:29:08.050337 1103152 kubeadm.go:928] updating node { 192.168.39.88 8443 v1.29.3 crio true true} ...
	I0328 00:29:08.050458 1103152 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-200224 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.88
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-200224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
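	The kubeadm.go:940 message above shows the kubelet systemd drop-in minikube renders for this node, with --hostname-override and --node-ip taken from the cluster config dumped after it. A rough text/template sketch of rendering such a drop-in (illustrative only; the parameter struct and template are assumptions, with the values copied from the log above):

	package main

	import (
		"os"
		"text/template"
	)

	// kubeletParams is a hypothetical holder for the node values seen in the log.
	type kubeletParams struct {
		KubernetesVersion string
		Hostname          string
		NodeIP            string
	}

	const dropIn = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(dropIn))
		// Values taken from the multinode-200224 node logged above.
		_ = t.Execute(os.Stdout, kubeletParams{
			KubernetesVersion: "v1.29.3",
			Hostname:          "multinode-200224",
			NodeIP:            "192.168.39.88",
		})
	}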
	I0328 00:29:08.050529 1103152 ssh_runner.go:195] Run: crio config
	I0328 00:29:08.093056 1103152 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0328 00:29:08.093092 1103152 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0328 00:29:08.093102 1103152 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0328 00:29:08.093107 1103152 command_runner.go:130] > #
	I0328 00:29:08.093117 1103152 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0328 00:29:08.093126 1103152 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0328 00:29:08.093136 1103152 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0328 00:29:08.093155 1103152 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0328 00:29:08.093160 1103152 command_runner.go:130] > # reload'.
	I0328 00:29:08.093166 1103152 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0328 00:29:08.093172 1103152 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0328 00:29:08.093179 1103152 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0328 00:29:08.093185 1103152 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0328 00:29:08.093195 1103152 command_runner.go:130] > [crio]
	I0328 00:29:08.093206 1103152 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0328 00:29:08.093222 1103152 command_runner.go:130] > # containers images, in this directory.
	I0328 00:29:08.093230 1103152 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0328 00:29:08.093266 1103152 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0328 00:29:08.093275 1103152 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0328 00:29:08.093284 1103152 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0328 00:29:08.093291 1103152 command_runner.go:130] > # imagestore = ""
	I0328 00:29:08.093300 1103152 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0328 00:29:08.093307 1103152 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0328 00:29:08.093311 1103152 command_runner.go:130] > storage_driver = "overlay"
	I0328 00:29:08.093316 1103152 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0328 00:29:08.093325 1103152 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0328 00:29:08.093331 1103152 command_runner.go:130] > storage_option = [
	I0328 00:29:08.093340 1103152 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0328 00:29:08.093346 1103152 command_runner.go:130] > ]
	I0328 00:29:08.093357 1103152 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0328 00:29:08.093370 1103152 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0328 00:29:08.093378 1103152 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0328 00:29:08.093388 1103152 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0328 00:29:08.093396 1103152 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0328 00:29:08.093401 1103152 command_runner.go:130] > # always happen on a node reboot
	I0328 00:29:08.093405 1103152 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0328 00:29:08.093462 1103152 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0328 00:29:08.093481 1103152 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0328 00:29:08.093490 1103152 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0328 00:29:08.093497 1103152 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0328 00:29:08.093509 1103152 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0328 00:29:08.093525 1103152 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0328 00:29:08.093533 1103152 command_runner.go:130] > # internal_wipe = true
	I0328 00:29:08.093545 1103152 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0328 00:29:08.093556 1103152 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0328 00:29:08.093564 1103152 command_runner.go:130] > # internal_repair = false
	I0328 00:29:08.093572 1103152 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0328 00:29:08.093581 1103152 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0328 00:29:08.093592 1103152 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0328 00:29:08.093601 1103152 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0328 00:29:08.093614 1103152 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0328 00:29:08.093620 1103152 command_runner.go:130] > [crio.api]
	I0328 00:29:08.093629 1103152 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0328 00:29:08.093637 1103152 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0328 00:29:08.093652 1103152 command_runner.go:130] > # IP address on which the stream server will listen.
	I0328 00:29:08.093659 1103152 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0328 00:29:08.093669 1103152 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0328 00:29:08.093680 1103152 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0328 00:29:08.093694 1103152 command_runner.go:130] > # stream_port = "0"
	I0328 00:29:08.093707 1103152 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0328 00:29:08.093714 1103152 command_runner.go:130] > # stream_enable_tls = false
	I0328 00:29:08.093727 1103152 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0328 00:29:08.093735 1103152 command_runner.go:130] > # stream_idle_timeout = ""
	I0328 00:29:08.093747 1103152 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0328 00:29:08.093756 1103152 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0328 00:29:08.093762 1103152 command_runner.go:130] > # minutes.
	I0328 00:29:08.093776 1103152 command_runner.go:130] > # stream_tls_cert = ""
	I0328 00:29:08.093787 1103152 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0328 00:29:08.093800 1103152 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0328 00:29:08.093809 1103152 command_runner.go:130] > # stream_tls_key = ""
	I0328 00:29:08.093821 1103152 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0328 00:29:08.093833 1103152 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0328 00:29:08.093851 1103152 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0328 00:29:08.093860 1103152 command_runner.go:130] > # stream_tls_ca = ""
	I0328 00:29:08.093873 1103152 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0328 00:29:08.093884 1103152 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0328 00:29:08.093896 1103152 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0328 00:29:08.093906 1103152 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0328 00:29:08.093917 1103152 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0328 00:29:08.093927 1103152 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0328 00:29:08.093931 1103152 command_runner.go:130] > [crio.runtime]
	I0328 00:29:08.093939 1103152 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0328 00:29:08.093951 1103152 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0328 00:29:08.093959 1103152 command_runner.go:130] > # "nofile=1024:2048"
	I0328 00:29:08.093973 1103152 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0328 00:29:08.093983 1103152 command_runner.go:130] > # default_ulimits = [
	I0328 00:29:08.093989 1103152 command_runner.go:130] > # ]
	I0328 00:29:08.093999 1103152 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0328 00:29:08.094009 1103152 command_runner.go:130] > # no_pivot = false
	I0328 00:29:08.094017 1103152 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0328 00:29:08.094031 1103152 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0328 00:29:08.094041 1103152 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0328 00:29:08.094055 1103152 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0328 00:29:08.094070 1103152 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0328 00:29:08.094083 1103152 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0328 00:29:08.094093 1103152 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0328 00:29:08.094100 1103152 command_runner.go:130] > # Cgroup setting for conmon
	I0328 00:29:08.094113 1103152 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0328 00:29:08.094121 1103152 command_runner.go:130] > conmon_cgroup = "pod"
	I0328 00:29:08.094131 1103152 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0328 00:29:08.094142 1103152 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0328 00:29:08.094154 1103152 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0328 00:29:08.094163 1103152 command_runner.go:130] > conmon_env = [
	I0328 00:29:08.094173 1103152 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0328 00:29:08.094180 1103152 command_runner.go:130] > ]
	I0328 00:29:08.094188 1103152 command_runner.go:130] > # Additional environment variables to set for all the
	I0328 00:29:08.094199 1103152 command_runner.go:130] > # containers. These are overridden if set in the
	I0328 00:29:08.094209 1103152 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0328 00:29:08.094219 1103152 command_runner.go:130] > # default_env = [
	I0328 00:29:08.094225 1103152 command_runner.go:130] > # ]
	I0328 00:29:08.094255 1103152 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0328 00:29:08.094270 1103152 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0328 00:29:08.094279 1103152 command_runner.go:130] > # selinux = false
	I0328 00:29:08.094290 1103152 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0328 00:29:08.094303 1103152 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0328 00:29:08.094312 1103152 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0328 00:29:08.094316 1103152 command_runner.go:130] > # seccomp_profile = ""
	I0328 00:29:08.094327 1103152 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0328 00:29:08.094340 1103152 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0328 00:29:08.094350 1103152 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0328 00:29:08.094361 1103152 command_runner.go:130] > # which might increase security.
	I0328 00:29:08.094369 1103152 command_runner.go:130] > # This option is currently deprecated,
	I0328 00:29:08.094381 1103152 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0328 00:29:08.094392 1103152 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0328 00:29:08.094405 1103152 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0328 00:29:08.094418 1103152 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0328 00:29:08.094431 1103152 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0328 00:29:08.094445 1103152 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0328 00:29:08.094457 1103152 command_runner.go:130] > # This option supports live configuration reload.
	I0328 00:29:08.094474 1103152 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0328 00:29:08.094486 1103152 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0328 00:29:08.094495 1103152 command_runner.go:130] > # the cgroup blockio controller.
	I0328 00:29:08.094500 1103152 command_runner.go:130] > # blockio_config_file = ""
	I0328 00:29:08.094512 1103152 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0328 00:29:08.094521 1103152 command_runner.go:130] > # blockio parameters.
	I0328 00:29:08.094528 1103152 command_runner.go:130] > # blockio_reload = false
	I0328 00:29:08.094541 1103152 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0328 00:29:08.094551 1103152 command_runner.go:130] > # irqbalance daemon.
	I0328 00:29:08.094559 1103152 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0328 00:29:08.094572 1103152 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0328 00:29:08.094584 1103152 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0328 00:29:08.094596 1103152 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0328 00:29:08.094608 1103152 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0328 00:29:08.094622 1103152 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0328 00:29:08.094631 1103152 command_runner.go:130] > # This option supports live configuration reload.
	I0328 00:29:08.094641 1103152 command_runner.go:130] > # rdt_config_file = ""
	I0328 00:29:08.094649 1103152 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0328 00:29:08.094659 1103152 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0328 00:29:08.094680 1103152 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0328 00:29:08.094693 1103152 command_runner.go:130] > # separate_pull_cgroup = ""
	I0328 00:29:08.094705 1103152 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0328 00:29:08.094718 1103152 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0328 00:29:08.094725 1103152 command_runner.go:130] > # will be added.
	I0328 00:29:08.094733 1103152 command_runner.go:130] > # default_capabilities = [
	I0328 00:29:08.094742 1103152 command_runner.go:130] > # 	"CHOWN",
	I0328 00:29:08.094749 1103152 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0328 00:29:08.094758 1103152 command_runner.go:130] > # 	"FSETID",
	I0328 00:29:08.094764 1103152 command_runner.go:130] > # 	"FOWNER",
	I0328 00:29:08.094773 1103152 command_runner.go:130] > # 	"SETGID",
	I0328 00:29:08.094779 1103152 command_runner.go:130] > # 	"SETUID",
	I0328 00:29:08.094786 1103152 command_runner.go:130] > # 	"SETPCAP",
	I0328 00:29:08.094790 1103152 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0328 00:29:08.094798 1103152 command_runner.go:130] > # 	"KILL",
	I0328 00:29:08.094804 1103152 command_runner.go:130] > # ]
	I0328 00:29:08.094820 1103152 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0328 00:29:08.094835 1103152 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0328 00:29:08.094846 1103152 command_runner.go:130] > # add_inheritable_capabilities = false
	I0328 00:29:08.094855 1103152 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0328 00:29:08.094868 1103152 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0328 00:29:08.094877 1103152 command_runner.go:130] > default_sysctls = [
	I0328 00:29:08.094886 1103152 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0328 00:29:08.094892 1103152 command_runner.go:130] > ]
	I0328 00:29:08.094899 1103152 command_runner.go:130] > # List of devices on the host that a
	I0328 00:29:08.094912 1103152 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0328 00:29:08.094919 1103152 command_runner.go:130] > # allowed_devices = [
	I0328 00:29:08.094929 1103152 command_runner.go:130] > # 	"/dev/fuse",
	I0328 00:29:08.094935 1103152 command_runner.go:130] > # ]
	I0328 00:29:08.094943 1103152 command_runner.go:130] > # List of additional devices. specified as
	I0328 00:29:08.094957 1103152 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0328 00:29:08.094968 1103152 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0328 00:29:08.094979 1103152 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0328 00:29:08.094986 1103152 command_runner.go:130] > # additional_devices = [
	I0328 00:29:08.094991 1103152 command_runner.go:130] > # ]
	I0328 00:29:08.095003 1103152 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0328 00:29:08.095015 1103152 command_runner.go:130] > # cdi_spec_dirs = [
	I0328 00:29:08.095021 1103152 command_runner.go:130] > # 	"/etc/cdi",
	I0328 00:29:08.095032 1103152 command_runner.go:130] > # 	"/var/run/cdi",
	I0328 00:29:08.095037 1103152 command_runner.go:130] > # ]
	I0328 00:29:08.095048 1103152 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0328 00:29:08.095060 1103152 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0328 00:29:08.095070 1103152 command_runner.go:130] > # Defaults to false.
	I0328 00:29:08.095078 1103152 command_runner.go:130] > # device_ownership_from_security_context = false
	I0328 00:29:08.095088 1103152 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0328 00:29:08.095095 1103152 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0328 00:29:08.095104 1103152 command_runner.go:130] > # hooks_dir = [
	I0328 00:29:08.095113 1103152 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0328 00:29:08.095122 1103152 command_runner.go:130] > # ]
	I0328 00:29:08.095131 1103152 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0328 00:29:08.095145 1103152 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0328 00:29:08.095156 1103152 command_runner.go:130] > # its default mounts from the following two files:
	I0328 00:29:08.095165 1103152 command_runner.go:130] > #
	I0328 00:29:08.095173 1103152 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0328 00:29:08.095184 1103152 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0328 00:29:08.095197 1103152 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0328 00:29:08.095206 1103152 command_runner.go:130] > #
	I0328 00:29:08.095217 1103152 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0328 00:29:08.095230 1103152 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0328 00:29:08.095244 1103152 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0328 00:29:08.095255 1103152 command_runner.go:130] > #      only add mounts it finds in this file.
	I0328 00:29:08.095261 1103152 command_runner.go:130] > #
	I0328 00:29:08.095265 1103152 command_runner.go:130] > # default_mounts_file = ""
	I0328 00:29:08.095276 1103152 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0328 00:29:08.095295 1103152 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0328 00:29:08.095305 1103152 command_runner.go:130] > pids_limit = 1024
	I0328 00:29:08.095315 1103152 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0328 00:29:08.095328 1103152 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0328 00:29:08.095341 1103152 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0328 00:29:08.095356 1103152 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0328 00:29:08.095362 1103152 command_runner.go:130] > # log_size_max = -1
	I0328 00:29:08.095371 1103152 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0328 00:29:08.095381 1103152 command_runner.go:130] > # log_to_journald = false
	I0328 00:29:08.095392 1103152 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0328 00:29:08.095403 1103152 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0328 00:29:08.095414 1103152 command_runner.go:130] > # Path to directory for container attach sockets.
	I0328 00:29:08.095422 1103152 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0328 00:29:08.095433 1103152 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0328 00:29:08.095443 1103152 command_runner.go:130] > # bind_mount_prefix = ""
	I0328 00:29:08.095449 1103152 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0328 00:29:08.095457 1103152 command_runner.go:130] > # read_only = false
	I0328 00:29:08.095469 1103152 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0328 00:29:08.095482 1103152 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0328 00:29:08.095493 1103152 command_runner.go:130] > # live configuration reload.
	I0328 00:29:08.095503 1103152 command_runner.go:130] > # log_level = "info"
	I0328 00:29:08.095512 1103152 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0328 00:29:08.095523 1103152 command_runner.go:130] > # This option supports live configuration reload.
	I0328 00:29:08.095532 1103152 command_runner.go:130] > # log_filter = ""
	I0328 00:29:08.095542 1103152 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0328 00:29:08.095552 1103152 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0328 00:29:08.095556 1103152 command_runner.go:130] > # separated by comma.
	I0328 00:29:08.095571 1103152 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0328 00:29:08.095581 1103152 command_runner.go:130] > # uid_mappings = ""
	I0328 00:29:08.095590 1103152 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0328 00:29:08.095603 1103152 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0328 00:29:08.095613 1103152 command_runner.go:130] > # separated by comma.
	I0328 00:29:08.095625 1103152 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0328 00:29:08.095635 1103152 command_runner.go:130] > # gid_mappings = ""
	I0328 00:29:08.095645 1103152 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0328 00:29:08.095655 1103152 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0328 00:29:08.095666 1103152 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0328 00:29:08.095682 1103152 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0328 00:29:08.095694 1103152 command_runner.go:130] > # minimum_mappable_uid = -1
	I0328 00:29:08.095708 1103152 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0328 00:29:08.095720 1103152 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0328 00:29:08.095733 1103152 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0328 00:29:08.095748 1103152 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0328 00:29:08.095756 1103152 command_runner.go:130] > # minimum_mappable_gid = -1
	I0328 00:29:08.095763 1103152 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0328 00:29:08.095776 1103152 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0328 00:29:08.095789 1103152 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0328 00:29:08.095799 1103152 command_runner.go:130] > # ctr_stop_timeout = 30
	I0328 00:29:08.095809 1103152 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0328 00:29:08.095821 1103152 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0328 00:29:08.095832 1103152 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0328 00:29:08.095840 1103152 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0328 00:29:08.095845 1103152 command_runner.go:130] > drop_infra_ctr = false
	I0328 00:29:08.095856 1103152 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0328 00:29:08.095866 1103152 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0328 00:29:08.095882 1103152 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0328 00:29:08.095891 1103152 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0328 00:29:08.095902 1103152 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0328 00:29:08.095915 1103152 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0328 00:29:08.095923 1103152 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0328 00:29:08.095929 1103152 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0328 00:29:08.095935 1103152 command_runner.go:130] > # shared_cpuset = ""
	I0328 00:29:08.095947 1103152 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0328 00:29:08.095959 1103152 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0328 00:29:08.095967 1103152 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0328 00:29:08.095981 1103152 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0328 00:29:08.095991 1103152 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0328 00:29:08.096000 1103152 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0328 00:29:08.096013 1103152 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0328 00:29:08.096023 1103152 command_runner.go:130] > # enable_criu_support = false
	I0328 00:29:08.096029 1103152 command_runner.go:130] > # Enable/disable the generation of the container,
	I0328 00:29:08.096043 1103152 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0328 00:29:08.096054 1103152 command_runner.go:130] > # enable_pod_events = false
	I0328 00:29:08.096064 1103152 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0328 00:29:08.096077 1103152 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0328 00:29:08.096088 1103152 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0328 00:29:08.096098 1103152 command_runner.go:130] > # default_runtime = "runc"
	I0328 00:29:08.096109 1103152 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0328 00:29:08.096121 1103152 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0328 00:29:08.096136 1103152 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0328 00:29:08.096148 1103152 command_runner.go:130] > # creation as a file is not desired either.
	I0328 00:29:08.096161 1103152 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0328 00:29:08.096172 1103152 command_runner.go:130] > # the hostname is being managed dynamically.
	I0328 00:29:08.096183 1103152 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0328 00:29:08.096191 1103152 command_runner.go:130] > # ]
	I0328 00:29:08.096204 1103152 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0328 00:29:08.096215 1103152 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0328 00:29:08.096225 1103152 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0328 00:29:08.096238 1103152 command_runner.go:130] > # Each entry in the table should follow the format:
	I0328 00:29:08.096247 1103152 command_runner.go:130] > #
	I0328 00:29:08.096255 1103152 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0328 00:29:08.096266 1103152 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0328 00:29:08.096293 1103152 command_runner.go:130] > # runtime_type = "oci"
	I0328 00:29:08.096304 1103152 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0328 00:29:08.096314 1103152 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0328 00:29:08.096322 1103152 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0328 00:29:08.096328 1103152 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0328 00:29:08.096352 1103152 command_runner.go:130] > # monitor_env = []
	I0328 00:29:08.096364 1103152 command_runner.go:130] > # privileged_without_host_devices = false
	I0328 00:29:08.096375 1103152 command_runner.go:130] > # allowed_annotations = []
	I0328 00:29:08.096387 1103152 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0328 00:29:08.096396 1103152 command_runner.go:130] > # Where:
	I0328 00:29:08.096407 1103152 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0328 00:29:08.096421 1103152 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0328 00:29:08.096432 1103152 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0328 00:29:08.096442 1103152 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0328 00:29:08.096452 1103152 command_runner.go:130] > #   in $PATH.
	I0328 00:29:08.096466 1103152 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0328 00:29:08.096477 1103152 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0328 00:29:08.096493 1103152 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0328 00:29:08.096502 1103152 command_runner.go:130] > #   state.
	I0328 00:29:08.096515 1103152 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0328 00:29:08.096526 1103152 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0328 00:29:08.096535 1103152 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0328 00:29:08.096546 1103152 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0328 00:29:08.096559 1103152 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0328 00:29:08.096573 1103152 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0328 00:29:08.096584 1103152 command_runner.go:130] > #   The currently recognized values are:
	I0328 00:29:08.096598 1103152 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0328 00:29:08.096612 1103152 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0328 00:29:08.096624 1103152 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0328 00:29:08.096632 1103152 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0328 00:29:08.096647 1103152 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0328 00:29:08.096662 1103152 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0328 00:29:08.096676 1103152 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0328 00:29:08.096694 1103152 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0328 00:29:08.096707 1103152 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0328 00:29:08.096719 1103152 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0328 00:29:08.096727 1103152 command_runner.go:130] > #   deprecated option "conmon".
	I0328 00:29:08.096738 1103152 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0328 00:29:08.096750 1103152 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0328 00:29:08.096764 1103152 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0328 00:29:08.096776 1103152 command_runner.go:130] > #   should be moved to the container's cgroup
	I0328 00:29:08.096792 1103152 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0328 00:29:08.096803 1103152 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0328 00:29:08.096816 1103152 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0328 00:29:08.096827 1103152 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0328 00:29:08.096833 1103152 command_runner.go:130] > #
	I0328 00:29:08.096839 1103152 command_runner.go:130] > # Using the seccomp notifier feature:
	I0328 00:29:08.096847 1103152 command_runner.go:130] > #
	I0328 00:29:08.096858 1103152 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0328 00:29:08.096872 1103152 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0328 00:29:08.096880 1103152 command_runner.go:130] > #
	I0328 00:29:08.096893 1103152 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0328 00:29:08.096905 1103152 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0328 00:29:08.096912 1103152 command_runner.go:130] > #
	I0328 00:29:08.096918 1103152 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0328 00:29:08.096926 1103152 command_runner.go:130] > # feature.
	I0328 00:29:08.096933 1103152 command_runner.go:130] > #
	I0328 00:29:08.096946 1103152 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0328 00:29:08.096960 1103152 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0328 00:29:08.096973 1103152 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0328 00:29:08.096985 1103152 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0328 00:29:08.096998 1103152 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0328 00:29:08.097006 1103152 command_runner.go:130] > #
	I0328 00:29:08.097013 1103152 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0328 00:29:08.097025 1103152 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0328 00:29:08.097033 1103152 command_runner.go:130] > #
	I0328 00:29:08.097043 1103152 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0328 00:29:08.097055 1103152 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0328 00:29:08.097064 1103152 command_runner.go:130] > #
	I0328 00:29:08.097075 1103152 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0328 00:29:08.097087 1103152 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0328 00:29:08.097096 1103152 command_runner.go:130] > # limitation.
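For reference, a minimal sketch of the wiring described above, assuming the default runc handler and a hypothetical pod; the annotation key, the "stop" value and the restartPolicy requirement come from the comments above, while the pod name and image are placeholders:

	# drop-in for /etc/crio/crio.conf.d/ (sketch)
	[crio.runtime.runtimes.runc]
	allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]

	# pod side (sketch)
	apiVersion: v1
	kind: Pod
	metadata:
	  name: seccomp-notifier-demo
	  annotations:
	    io.kubernetes.cri-o.seccompNotifierAction: "stop"
	spec:
	  restartPolicy: Never          # required, otherwise the kubelet restarts the container
	  containers:
	  - name: demo
	    image: registry.k8s.io/pause:3.9
	    securityContext:
	      seccompProfile:
	        type: RuntimeDefault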
	I0328 00:29:08.097104 1103152 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0328 00:29:08.097113 1103152 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0328 00:29:08.097123 1103152 command_runner.go:130] > runtime_type = "oci"
	I0328 00:29:08.097133 1103152 command_runner.go:130] > runtime_root = "/run/runc"
	I0328 00:29:08.097141 1103152 command_runner.go:130] > runtime_config_path = ""
	I0328 00:29:08.097153 1103152 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0328 00:29:08.097163 1103152 command_runner.go:130] > monitor_cgroup = "pod"
	I0328 00:29:08.097172 1103152 command_runner.go:130] > monitor_exec_cgroup = ""
	I0328 00:29:08.097181 1103152 command_runner.go:130] > monitor_env = [
	I0328 00:29:08.097194 1103152 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0328 00:29:08.097201 1103152 command_runner.go:130] > ]
	I0328 00:29:08.097205 1103152 command_runner.go:130] > privileged_without_host_devices = false
	I0328 00:29:08.097217 1103152 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0328 00:29:08.097229 1103152 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0328 00:29:08.097239 1103152 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0328 00:29:08.097255 1103152 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0328 00:29:08.097271 1103152 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0328 00:29:08.097284 1103152 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0328 00:29:08.097299 1103152 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0328 00:29:08.097315 1103152 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0328 00:29:08.097328 1103152 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0328 00:29:08.097340 1103152 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0328 00:29:08.097349 1103152 command_runner.go:130] > # Example:
	I0328 00:29:08.097356 1103152 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0328 00:29:08.097368 1103152 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0328 00:29:08.097379 1103152 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0328 00:29:08.097387 1103152 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0328 00:29:08.097394 1103152 command_runner.go:130] > # cpuset = 0
	I0328 00:29:08.097398 1103152 command_runner.go:130] > # cpushares = "0-1"
	I0328 00:29:08.097402 1103152 command_runner.go:130] > # Where:
	I0328 00:29:08.097409 1103152 command_runner.go:130] > # The workload name is workload-type.
	I0328 00:29:08.097424 1103152 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0328 00:29:08.097432 1103152 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0328 00:29:08.097444 1103152 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0328 00:29:08.097462 1103152 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0328 00:29:08.097474 1103152 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
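Following the example above, a pod opting into the "workload-type" workload would carry annotations along these lines (sketch; the container name and the cpushares value are hypothetical):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    io.crio/workload: ""                                 # activation annotation, value ignored
	    io.crio.workload-type/app: '{"cpushares": "512"}'    # per-container override
	spec:
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.9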
	I0328 00:29:08.097485 1103152 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0328 00:29:08.097495 1103152 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0328 00:29:08.097504 1103152 command_runner.go:130] > # Default value is set to true
	I0328 00:29:08.097515 1103152 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0328 00:29:08.097529 1103152 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0328 00:29:08.097541 1103152 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0328 00:29:08.097551 1103152 command_runner.go:130] > # Default value is set to 'false'
	I0328 00:29:08.097562 1103152 command_runner.go:130] > # disable_hostport_mapping = false
	I0328 00:29:08.097575 1103152 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0328 00:29:08.097581 1103152 command_runner.go:130] > #
	I0328 00:29:08.097588 1103152 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0328 00:29:08.097603 1103152 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0328 00:29:08.097614 1103152 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0328 00:29:08.097624 1103152 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0328 00:29:08.097633 1103152 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0328 00:29:08.097639 1103152 command_runner.go:130] > [crio.image]
	I0328 00:29:08.097649 1103152 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0328 00:29:08.097657 1103152 command_runner.go:130] > # default_transport = "docker://"
	I0328 00:29:08.097666 1103152 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0328 00:29:08.097673 1103152 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0328 00:29:08.097680 1103152 command_runner.go:130] > # global_auth_file = ""
	I0328 00:29:08.097694 1103152 command_runner.go:130] > # The image used to instantiate infra containers.
	I0328 00:29:08.097702 1103152 command_runner.go:130] > # This option supports live configuration reload.
	I0328 00:29:08.097711 1103152 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0328 00:29:08.097723 1103152 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0328 00:29:08.097732 1103152 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0328 00:29:08.097740 1103152 command_runner.go:130] > # This option supports live configuration reload.
	I0328 00:29:08.097746 1103152 command_runner.go:130] > # pause_image_auth_file = ""
	I0328 00:29:08.097752 1103152 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0328 00:29:08.097758 1103152 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0328 00:29:08.097771 1103152 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0328 00:29:08.097780 1103152 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0328 00:29:08.097788 1103152 command_runner.go:130] > # pause_command = "/pause"
	I0328 00:29:08.097798 1103152 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0328 00:29:08.097807 1103152 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0328 00:29:08.097817 1103152 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0328 00:29:08.097827 1103152 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0328 00:29:08.097836 1103152 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0328 00:29:08.097843 1103152 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0328 00:29:08.097853 1103152 command_runner.go:130] > # pinned_images = [
	I0328 00:29:08.097859 1103152 command_runner.go:130] > # ]
	I0328 00:29:08.097871 1103152 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0328 00:29:08.097885 1103152 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0328 00:29:08.097897 1103152 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0328 00:29:08.097910 1103152 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0328 00:29:08.097919 1103152 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0328 00:29:08.097923 1103152 command_runner.go:130] > # signature_policy = ""
	I0328 00:29:08.097929 1103152 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0328 00:29:08.097943 1103152 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0328 00:29:08.097956 1103152 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0328 00:29:08.097967 1103152 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0328 00:29:08.097980 1103152 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0328 00:29:08.097990 1103152 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0328 00:29:08.098005 1103152 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0328 00:29:08.098018 1103152 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0328 00:29:08.098026 1103152 command_runner.go:130] > # changing them here.
	I0328 00:29:08.098030 1103152 command_runner.go:130] > # insecure_registries = [
	I0328 00:29:08.098037 1103152 command_runner.go:130] > # ]
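As the comments recommend, registries are normally configured system-wide rather than in crio.conf; a sketch of the equivalent entry in /etc/containers/registries.conf (the registry host is hypothetical):

	[[registry]]
	location = "registry.example.internal:5000"
	insecure = true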
	I0328 00:29:08.098046 1103152 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0328 00:29:08.098058 1103152 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0328 00:29:08.098069 1103152 command_runner.go:130] > # image_volumes = "mkdir"
	I0328 00:29:08.098080 1103152 command_runner.go:130] > # Temporary directory to use for storing big files
	I0328 00:29:08.098090 1103152 command_runner.go:130] > # big_files_temporary_dir = ""
	I0328 00:29:08.098102 1103152 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0328 00:29:08.098111 1103152 command_runner.go:130] > # CNI plugins.
	I0328 00:29:08.098120 1103152 command_runner.go:130] > [crio.network]
	I0328 00:29:08.098130 1103152 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0328 00:29:08.098140 1103152 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0328 00:29:08.098150 1103152 command_runner.go:130] > # cni_default_network = ""
	I0328 00:29:08.098163 1103152 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0328 00:29:08.098173 1103152 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0328 00:29:08.098185 1103152 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0328 00:29:08.098194 1103152 command_runner.go:130] > # plugin_dirs = [
	I0328 00:29:08.098204 1103152 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0328 00:29:08.098211 1103152 command_runner.go:130] > # ]
	I0328 00:29:08.098217 1103152 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0328 00:29:08.098225 1103152 command_runner.go:130] > [crio.metrics]
	I0328 00:29:08.098245 1103152 command_runner.go:130] > # Globally enable or disable metrics support.
	I0328 00:29:08.098255 1103152 command_runner.go:130] > enable_metrics = true
	I0328 00:29:08.098266 1103152 command_runner.go:130] > # Specify enabled metrics collectors.
	I0328 00:29:08.098277 1103152 command_runner.go:130] > # Per default all metrics are enabled.
	I0328 00:29:08.098289 1103152 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0328 00:29:08.098302 1103152 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0328 00:29:08.098313 1103152 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0328 00:29:08.098320 1103152 command_runner.go:130] > # metrics_collectors = [
	I0328 00:29:08.098326 1103152 command_runner.go:130] > # 	"operations",
	I0328 00:29:08.098337 1103152 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0328 00:29:08.098349 1103152 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0328 00:29:08.098358 1103152 command_runner.go:130] > # 	"operations_errors",
	I0328 00:29:08.098367 1103152 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0328 00:29:08.098378 1103152 command_runner.go:130] > # 	"image_pulls_by_name",
	I0328 00:29:08.098387 1103152 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0328 00:29:08.098395 1103152 command_runner.go:130] > # 	"image_pulls_failures",
	I0328 00:29:08.098403 1103152 command_runner.go:130] > # 	"image_pulls_successes",
	I0328 00:29:08.098408 1103152 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0328 00:29:08.098418 1103152 command_runner.go:130] > # 	"image_layer_reuse",
	I0328 00:29:08.098429 1103152 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0328 00:29:08.098442 1103152 command_runner.go:130] > # 	"containers_oom_total",
	I0328 00:29:08.098451 1103152 command_runner.go:130] > # 	"containers_oom",
	I0328 00:29:08.098460 1103152 command_runner.go:130] > # 	"processes_defunct",
	I0328 00:29:08.098470 1103152 command_runner.go:130] > # 	"operations_total",
	I0328 00:29:08.098480 1103152 command_runner.go:130] > # 	"operations_latency_seconds",
	I0328 00:29:08.098491 1103152 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0328 00:29:08.098498 1103152 command_runner.go:130] > # 	"operations_errors_total",
	I0328 00:29:08.098502 1103152 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0328 00:29:08.098515 1103152 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0328 00:29:08.098527 1103152 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0328 00:29:08.098534 1103152 command_runner.go:130] > # 	"image_pulls_success_total",
	I0328 00:29:08.098544 1103152 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0328 00:29:08.098554 1103152 command_runner.go:130] > # 	"containers_oom_count_total",
	I0328 00:29:08.098566 1103152 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0328 00:29:08.098576 1103152 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0328 00:29:08.098585 1103152 command_runner.go:130] > # ]
	I0328 00:29:08.098595 1103152 command_runner.go:130] > # The port on which the metrics server will listen.
	I0328 00:29:08.098602 1103152 command_runner.go:130] > # metrics_port = 9090
	I0328 00:29:08.098610 1103152 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0328 00:29:08.098620 1103152 command_runner.go:130] > # metrics_socket = ""
	I0328 00:29:08.098630 1103152 command_runner.go:130] > # The certificate for the secure metrics server.
	I0328 00:29:08.098643 1103152 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0328 00:29:08.098655 1103152 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0328 00:29:08.098666 1103152 command_runner.go:130] > # certificate on any modification event.
	I0328 00:29:08.098676 1103152 command_runner.go:130] > # metrics_cert = ""
	I0328 00:29:08.098684 1103152 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0328 00:29:08.098696 1103152 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0328 00:29:08.098705 1103152 command_runner.go:130] > # metrics_key = ""
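Since enable_metrics is set to true above and metrics_port defaults to 9090, the collectors listed earlier can be scraped directly on the node; a quick check might look like this (sketch; the metric names are assumed to follow the collector list above with the crio_ prefix):

	curl -s http://127.0.0.1:9090/metrics | grep -E 'crio_operations|crio_image_pulls'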
	I0328 00:29:08.098718 1103152 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0328 00:29:08.098728 1103152 command_runner.go:130] > [crio.tracing]
	I0328 00:29:08.098740 1103152 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0328 00:29:08.098750 1103152 command_runner.go:130] > # enable_tracing = false
	I0328 00:29:08.098762 1103152 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0328 00:29:08.098772 1103152 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0328 00:29:08.098784 1103152 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0328 00:29:08.098791 1103152 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
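To actually export traces, the three options above would be enabled along these lines (sketch; the collector address is hypothetical):

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "otel-collector.observability.svc.cluster.local:4317"
	tracing_sampling_rate_per_million = 1000000   # always sample, per the comment above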
	I0328 00:29:08.098798 1103152 command_runner.go:130] > # CRI-O NRI configuration.
	I0328 00:29:08.098807 1103152 command_runner.go:130] > [crio.nri]
	I0328 00:29:08.098817 1103152 command_runner.go:130] > # Globally enable or disable NRI.
	I0328 00:29:08.098824 1103152 command_runner.go:130] > # enable_nri = false
	I0328 00:29:08.098834 1103152 command_runner.go:130] > # NRI socket to listen on.
	I0328 00:29:08.098845 1103152 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0328 00:29:08.098853 1103152 command_runner.go:130] > # NRI plugin directory to use.
	I0328 00:29:08.098863 1103152 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0328 00:29:08.098877 1103152 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0328 00:29:08.098886 1103152 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0328 00:29:08.098892 1103152 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0328 00:29:08.098901 1103152 command_runner.go:130] > # nri_disable_connections = false
	I0328 00:29:08.098912 1103152 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0328 00:29:08.098923 1103152 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0328 00:29:08.098935 1103152 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0328 00:29:08.098945 1103152 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0328 00:29:08.098960 1103152 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0328 00:29:08.098968 1103152 command_runner.go:130] > [crio.stats]
	I0328 00:29:08.098977 1103152 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0328 00:29:08.098987 1103152 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0328 00:29:08.098998 1103152 command_runner.go:130] > # stats_collection_period = 0
	I0328 00:29:08.099031 1103152 command_runner.go:130] ! time="2024-03-28 00:29:08.057689871Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0328 00:29:08.099052 1103152 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
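The dump above is the effective CRI-O configuration for this node. Options whose comments mention live configuration reload can be picked up without restarting the daemon; a sketch, assuming the crio CLI is on PATH and the packaged systemd unit wires reload to SIGHUP:

	sudo crio config | head -n 20    # render the effective configuration
	sudo systemctl reload crio       # SIGHUP; only live-reloadable options are re-read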
	I0328 00:29:08.099200 1103152 cni.go:84] Creating CNI manager for ""
	I0328 00:29:08.099219 1103152 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0328 00:29:08.099227 1103152 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 00:29:08.099255 1103152 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.88 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-200224 NodeName:multinode-200224 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.88"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.88 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 00:29:08.099427 1103152 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.88
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-200224"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.88
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.88"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 00:29:08.099508 1103152 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 00:29:08.110930 1103152 command_runner.go:130] > kubeadm
	I0328 00:29:08.110959 1103152 command_runner.go:130] > kubectl
	I0328 00:29:08.110963 1103152 command_runner.go:130] > kubelet
	I0328 00:29:08.110998 1103152 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 00:29:08.111061 1103152 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 00:29:08.121519 1103152 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0328 00:29:08.139092 1103152 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 00:29:08.156952 1103152 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
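The rendered kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new here; it could be sanity-checked offline against the staged binaries, assuming the `kubeadm config validate` subcommand is available in this release (sketch):

	sudo /var/lib/minikube/binaries/v1.29.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new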
	I0328 00:29:08.174345 1103152 ssh_runner.go:195] Run: grep 192.168.39.88	control-plane.minikube.internal$ /etc/hosts
	I0328 00:29:08.178261 1103152 command_runner.go:130] > 192.168.39.88	control-plane.minikube.internal
	I0328 00:29:08.178458 1103152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:29:08.327719 1103152 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 00:29:08.343089 1103152 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/multinode-200224 for IP: 192.168.39.88
	I0328 00:29:08.343121 1103152 certs.go:194] generating shared ca certs ...
	I0328 00:29:08.343139 1103152 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:29:08.343294 1103152 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 00:29:08.343329 1103152 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 00:29:08.343339 1103152 certs.go:256] generating profile certs ...
	I0328 00:29:08.343422 1103152 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/multinode-200224/client.key
	I0328 00:29:08.343475 1103152 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/multinode-200224/apiserver.key.99234e7d
	I0328 00:29:08.343509 1103152 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/multinode-200224/proxy-client.key
	I0328 00:29:08.343521 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0328 00:29:08.343545 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0328 00:29:08.343560 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0328 00:29:08.343570 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0328 00:29:08.343580 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/multinode-200224/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0328 00:29:08.343600 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/multinode-200224/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0328 00:29:08.343615 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/multinode-200224/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0328 00:29:08.343629 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/multinode-200224/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0328 00:29:08.343680 1103152 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 00:29:08.343708 1103152 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 00:29:08.343720 1103152 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 00:29:08.343743 1103152 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 00:29:08.343766 1103152 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 00:29:08.343788 1103152 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 00:29:08.343825 1103152 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 00:29:08.343854 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:29:08.343869 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem -> /usr/share/ca-certificates/1076522.pem
	I0328 00:29:08.343882 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> /usr/share/ca-certificates/10765222.pem
	I0328 00:29:08.344628 1103152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 00:29:08.371570 1103152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 00:29:08.398189 1103152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 00:29:08.423638 1103152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 00:29:08.448999 1103152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/multinode-200224/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0328 00:29:08.475074 1103152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/multinode-200224/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0328 00:29:08.499467 1103152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/multinode-200224/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 00:29:08.523715 1103152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/multinode-200224/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0328 00:29:08.548018 1103152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 00:29:08.573481 1103152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 00:29:08.598469 1103152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 00:29:08.623258 1103152 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 00:29:08.639726 1103152 ssh_runner.go:195] Run: openssl version
	I0328 00:29:08.645983 1103152 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0328 00:29:08.646078 1103152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 00:29:08.657621 1103152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:29:08.662303 1103152 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:29:08.662334 1103152 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:29:08.662384 1103152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:29:08.668270 1103152 command_runner.go:130] > b5213941
	I0328 00:29:08.668359 1103152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 00:29:08.677933 1103152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 00:29:08.688670 1103152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 00:29:08.693094 1103152 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 00:29:08.693388 1103152 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 00:29:08.693436 1103152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 00:29:08.698917 1103152 command_runner.go:130] > 51391683
	I0328 00:29:08.699229 1103152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 00:29:08.708717 1103152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 00:29:08.720335 1103152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 00:29:08.725120 1103152 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 00:29:08.725157 1103152 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 00:29:08.725196 1103152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 00:29:08.731180 1103152 command_runner.go:130] > 3ec20f2e
	I0328 00:29:08.731243 1103152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
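The hash-and-symlink pattern above mirrors what c_rehash does: `openssl x509 -hash -noout` prints the subject hash (e.g. b5213941), and the CA is trusted once it is linked as /etc/ssl/certs/<hash>.0. A condensed sketch for a single CA:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"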
	I0328 00:29:08.741487 1103152 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 00:29:08.746114 1103152 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 00:29:08.746135 1103152 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0328 00:29:08.746141 1103152 command_runner.go:130] > Device: 253,1	Inode: 7339526     Links: 1
	I0328 00:29:08.746147 1103152 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0328 00:29:08.746153 1103152 command_runner.go:130] > Access: 2024-03-28 00:22:46.667068132 +0000
	I0328 00:29:08.746165 1103152 command_runner.go:130] > Modify: 2024-03-28 00:22:46.667068132 +0000
	I0328 00:29:08.746175 1103152 command_runner.go:130] > Change: 2024-03-28 00:22:46.667068132 +0000
	I0328 00:29:08.746182 1103152 command_runner.go:130] >  Birth: 2024-03-28 00:22:46.667068132 +0000
	I0328 00:29:08.746272 1103152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 00:29:08.751887 1103152 command_runner.go:130] > Certificate will not expire
	I0328 00:29:08.752123 1103152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 00:29:08.757900 1103152 command_runner.go:130] > Certificate will not expire
	I0328 00:29:08.758150 1103152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 00:29:08.763857 1103152 command_runner.go:130] > Certificate will not expire
	I0328 00:29:08.763950 1103152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 00:29:08.769597 1103152 command_runner.go:130] > Certificate will not expire
	I0328 00:29:08.769654 1103152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 00:29:08.775143 1103152 command_runner.go:130] > Certificate will not expire
	I0328 00:29:08.775303 1103152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0328 00:29:08.780763 1103152 command_runner.go:130] > Certificate will not expire
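Each -checkend 86400 call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; the exit status, not the printed message, is what the caller acts on. A script-friendly sketch of the same check:

	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	  echo "certificate valid for at least another 24h"
	else
	  echo "certificate expires within 24h" >&2
	fi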
	I0328 00:29:08.780986 1103152 kubeadm.go:391] StartCluster: {Name:multinode-200224 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.
3 ClusterName:multinode-200224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.88 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.22 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:29:08.781137 1103152 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 00:29:08.781193 1103152 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 00:29:08.820322 1103152 command_runner.go:130] > 9ed073676e722da08dd05f2af2ceea6db672978bf9a7c2166d8cdd0e68c55cf5
	I0328 00:29:08.820346 1103152 command_runner.go:130] > ce9178b60c4e7c1b7a135a15545d1f6069bd804b1ddc2bb6bb7040925e3401a1
	I0328 00:29:08.820352 1103152 command_runner.go:130] > dbe677740910dd5d9206540ecea26aa8ac44bad90beedfeb44189e329f1e7f05
	I0328 00:29:08.820358 1103152 command_runner.go:130] > fb82d42c8f86742edc73783b3dfdef5b9cc4658c3ddc375a06b5ea65bd837da5
	I0328 00:29:08.820363 1103152 command_runner.go:130] > 68ae2f434f3deb10aa0a571899142907e2801ba5da048845b37822888969b398
	I0328 00:29:08.820368 1103152 command_runner.go:130] > 0e309dc4a326fe31f7f6c1bfc6da7ab8ff22862a35eba9e7aa9b4ff15b9737b6
	I0328 00:29:08.820374 1103152 command_runner.go:130] > 3fccdc262ed43923c10e7dc2c2d3f31d086e8cae5e8a1adf35c700e71ac085c6
	I0328 00:29:08.820382 1103152 command_runner.go:130] > 6fbf200e2f5995f89adef7939641ef941f9f8ac8e54c7d64c2bf74baed49b800
	I0328 00:29:08.821777 1103152 cri.go:89] found id: "9ed073676e722da08dd05f2af2ceea6db672978bf9a7c2166d8cdd0e68c55cf5"
	I0328 00:29:08.821792 1103152 cri.go:89] found id: "ce9178b60c4e7c1b7a135a15545d1f6069bd804b1ddc2bb6bb7040925e3401a1"
	I0328 00:29:08.821796 1103152 cri.go:89] found id: "dbe677740910dd5d9206540ecea26aa8ac44bad90beedfeb44189e329f1e7f05"
	I0328 00:29:08.821800 1103152 cri.go:89] found id: "fb82d42c8f86742edc73783b3dfdef5b9cc4658c3ddc375a06b5ea65bd837da5"
	I0328 00:29:08.821802 1103152 cri.go:89] found id: "68ae2f434f3deb10aa0a571899142907e2801ba5da048845b37822888969b398"
	I0328 00:29:08.821805 1103152 cri.go:89] found id: "0e309dc4a326fe31f7f6c1bfc6da7ab8ff22862a35eba9e7aa9b4ff15b9737b6"
	I0328 00:29:08.821808 1103152 cri.go:89] found id: "3fccdc262ed43923c10e7dc2c2d3f31d086e8cae5e8a1adf35c700e71ac085c6"
	I0328 00:29:08.821811 1103152 cri.go:89] found id: "6fbf200e2f5995f89adef7939641ef941f9f8ac8e54c7d64c2bf74baed49b800"
	I0328 00:29:08.821813 1103152 cri.go:89] found id: ""
	I0328 00:29:08.821854 1103152 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 28 00:30:39 multinode-200224 crio[2841]: time="2024-03-28 00:30:39.657998226Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711585839657968693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aab2f75c-9129-4b55-9102-c6836c448e2d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:30:39 multinode-200224 crio[2841]: time="2024-03-28 00:30:39.658468071Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ab3e06e5-de7f-4d9d-a810-3d8c04bb4af3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:30:39 multinode-200224 crio[2841]: time="2024-03-28 00:30:39.658547233Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ab3e06e5-de7f-4d9d-a810-3d8c04bb4af3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:30:39 multinode-200224 crio[2841]: time="2024-03-28 00:30:39.658972287Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e48d2ff176f073d38e6ad1fd628ec8a718efffa03851bab7a22dae283a54b95d,PodSandboxId:8bec63c7bb1e58c7d00d3098dd7a3bfabcb829e07757ccc519a45f98731340f4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711585789658607598,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-4mbrk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a88eae05-06c4-4a76-9f77-af448b7c0704,},Annotations:map[string]string{io.kubernetes.container.hash: da0006fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b533e6c726ea6ed1277d347af17f080ce2afc86504a3b08d2b06b561fdce86e7,PodSandboxId:42b54cf59b877cf418cfe9dd5e8db1794aefa4223d9548fda611f535a8836600,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711585756181887042,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncgjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30a4f6bf-5542-476f-b1af-837031a00c50,},Annotations:map[string]string{io.kubernetes.container.hash: d1b55d19,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc9e1fd5fd94b80983e60854d25fe1478e0bc1049691b83957baa96fa04543e,PodSandboxId:4d9db886d6f8c3b35a2feca27f26ed2f26f2ee746d53e25ab202e8185c320a84,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711585756108419330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-g5sdz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0e7db0-a552-4642-825f-da6ee01e6121,},Annotations:map[string]string{io.kubernetes.container.hash: 88796073,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81447b81bf61942708a9748573bd5dd9ca1d2c1891deb60362255634ed437cf1,PodSandboxId:b7913ac412fa3a662b3f635255c1a467a739f5f07d13fe47e08df1a004f99d25,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711585756033916246,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd5c912-2288-420d-a7a2-d73f2c34a5ed,},A
nnotations:map[string]string{io.kubernetes.container.hash: 35ee7de3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c3c1988d5ad53bff3254f6b5fbcb07e9d341c25390a721f9cb15c63e302898,PodSandboxId:8f3debc7fde1148605dc96dad0bf3d49f7387b139ed1b2fe10f0b4d0e9f3d68e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711585755993777083,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p2g9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae4313c7-a926-4da8-bfc2-7dae1422d958,},Annotations:map[string]string{io.k
ubernetes.container.hash: b48261f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:767d65958abd9c1074e5b189fbc2524166f7f1c3a91990207131a500a1efdd7f,PodSandboxId:75462da1806aef166c05438ba0310317dddeb940660721bebc5d8066293c6ba3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711585751111073258,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c865a9c1ae98bee042cbdef78ac1661e,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013303f3ea31fd8274cb3bf3859cfa6cbc15563590eb945d675b932bac6c3efb,PodSandboxId:026825fe8f8b7e6c6ef6d86c33a20708c10c88489447078a1bec9e86518d24c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711585751068690127,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 439c378fd07501a8dead5aed861f13e7,},Annotations:map[string]string{io.kubernetes.container.hash: c5340b9,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a9b4858d44e5afb56269a375649980f786d727b89ff9556550e692636fab82,PodSandboxId:448b875a38262bdf20fb8c0d242c65b5b3bc059b8cc02d268905cafd5eb95bde,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711585751060523840,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db39c19d4710f792ba253c58204b3fd4,},Annotations:map[string]string{io.kubernetes.container.hash: d593ec80,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a59125a9daa4c893fd45d629c12fd055db5fb6cbfc3f0205e1b7c2789a83f8da,PodSandboxId:b9a010a3fb716893ab564b2b70f1f78881ab631cc9bf84f91f59f0235f95422f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711585751027374603,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32db6766711b42ca248c705dc74d448e,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400e13249ff22f6f5fff380a9b09989689add62a554466af727eda89c89d5a8a,PodSandboxId:28401e2f6084f5204fe5c79b6fad4857507ed38493632dcd5793de7ddd053561,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711585440515353167,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-4mbrk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a88eae05-06c4-4a76-9f77-af448b7c0704,},Annotations:map[string]string{io.kubernetes.container.hash: da0006fa,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ed073676e722da08dd05f2af2ceea6db672978bf9a7c2166d8cdd0e68c55cf5,PodSandboxId:ee7f59bd3e83ff182dc2870085e8dde1ac0476f4ed5c212895a2b54a7b1d2145,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711585393172390704,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-g5sdz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0e7db0-a552-4642-825f-da6ee01e6121,},Annotations:map[string]string{io.kubernetes.container.hash: 88796073,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9178b60c4e7c1b7a135a15545d1f6069bd804b1ddc2bb6bb7040925e3401a1,PodSandboxId:ed4a7c41ab819bc8c36d7e2ab25a2a12997d2920f2f2fb3afdbb263c586ec4a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711585393079508402,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 3bd5c912-2288-420d-a7a2-d73f2c34a5ed,},Annotations:map[string]string{io.kubernetes.container.hash: 35ee7de3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe677740910dd5d9206540ecea26aa8ac44bad90beedfeb44189e329f1e7f05,PodSandboxId:ea1011de4d4b6657be5634eb7131426dfaf71adda1e016d49136887bbc0d7a9a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711585391513010402,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncgjv,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 30a4f6bf-5542-476f-b1af-837031a00c50,},Annotations:map[string]string{io.kubernetes.container.hash: d1b55d19,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb82d42c8f86742edc73783b3dfdef5b9cc4658c3ddc375a06b5ea65bd837da5,PodSandboxId:8e1d52aa1272a7dffa0d0477292d339dd15d53962c6c3a485e7ba027258d0cd5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711585391274860794,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p2g9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae4313c7-a926-4da8-bfc
2-7dae1422d958,},Annotations:map[string]string{io.kubernetes.container.hash: b48261f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ae2f434f3deb10aa0a571899142907e2801ba5da048845b37822888969b398,PodSandboxId:fd28139122e292890b1d14a5668c359edc911ba1328ccd09054fbe9ee9099a59,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711585371292676447,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db39c19d4710f792ba253c58204b3fd4,
},Annotations:map[string]string{io.kubernetes.container.hash: d593ec80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e309dc4a326fe31f7f6c1bfc6da7ab8ff22862a35eba9e7aa9b4ff15b9737b6,PodSandboxId:caae122c7bfc36f8d133510bf3080172efebf3bc11889137437f0b9c2716cf79,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711585371277974289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 439c378fd07501a8dead5aed861f13e7,},Annotations:map[string]string{io.kubernetes
.container.hash: c5340b9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fbf200e2f5995f89adef7939641ef941f9f8ac8e54c7d64c2bf74baed49b800,PodSandboxId:5fef10ef65dd3ba44f4c351e380d6247428dbf047e9cca78d17d3928a7e8f0e5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711585371223017912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32db6766711b42ca248c705dc74d448e,},Annotations:map[string]string{io
.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fccdc262ed43923c10e7dc2c2d3f31d086e8cae5e8a1adf35c700e71ac085c6,PodSandboxId:3f002ba61607eb1d8647387d6277f7b6391e7e06d60cc4cc049fed995ef98eb4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711585371223349476,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c865a9c1ae98bee042cbdef78ac1661e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ab3e06e5-de7f-4d9d-a810-3d8c04bb4af3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:30:39 multinode-200224 crio[2841]: time="2024-03-28 00:30:39.702403071Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9f26b8d6-cde9-4aef-848d-81b5ae80d3e3 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:30:39 multinode-200224 crio[2841]: time="2024-03-28 00:30:39.702496973Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9f26b8d6-cde9-4aef-848d-81b5ae80d3e3 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:30:39 multinode-200224 crio[2841]: time="2024-03-28 00:30:39.703579652Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6f277e3f-6953-4441-b9ff-813c6a748ba0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:30:39 multinode-200224 crio[2841]: time="2024-03-28 00:30:39.704182667Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711585839704156466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6f277e3f-6953-4441-b9ff-813c6a748ba0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:30:39 multinode-200224 crio[2841]: time="2024-03-28 00:30:39.704920102Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dff6f700-80ac-4800-bf03-f528de47b63f name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:30:39 multinode-200224 crio[2841]: time="2024-03-28 00:30:39.704999347Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dff6f700-80ac-4800-bf03-f528de47b63f name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:30:39 multinode-200224 crio[2841]: time="2024-03-28 00:30:39.705359012Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e48d2ff176f073d38e6ad1fd628ec8a718efffa03851bab7a22dae283a54b95d,PodSandboxId:8bec63c7bb1e58c7d00d3098dd7a3bfabcb829e07757ccc519a45f98731340f4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711585789658607598,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-4mbrk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a88eae05-06c4-4a76-9f77-af448b7c0704,},Annotations:map[string]string{io.kubernetes.container.hash: da0006fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b533e6c726ea6ed1277d347af17f080ce2afc86504a3b08d2b06b561fdce86e7,PodSandboxId:42b54cf59b877cf418cfe9dd5e8db1794aefa4223d9548fda611f535a8836600,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711585756181887042,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncgjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30a4f6bf-5542-476f-b1af-837031a00c50,},Annotations:map[string]string{io.kubernetes.container.hash: d1b55d19,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc9e1fd5fd94b80983e60854d25fe1478e0bc1049691b83957baa96fa04543e,PodSandboxId:4d9db886d6f8c3b35a2feca27f26ed2f26f2ee746d53e25ab202e8185c320a84,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711585756108419330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-g5sdz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0e7db0-a552-4642-825f-da6ee01e6121,},Annotations:map[string]string{io.kubernetes.container.hash: 88796073,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81447b81bf61942708a9748573bd5dd9ca1d2c1891deb60362255634ed437cf1,PodSandboxId:b7913ac412fa3a662b3f635255c1a467a739f5f07d13fe47e08df1a004f99d25,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711585756033916246,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd5c912-2288-420d-a7a2-d73f2c34a5ed,},A
nnotations:map[string]string{io.kubernetes.container.hash: 35ee7de3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c3c1988d5ad53bff3254f6b5fbcb07e9d341c25390a721f9cb15c63e302898,PodSandboxId:8f3debc7fde1148605dc96dad0bf3d49f7387b139ed1b2fe10f0b4d0e9f3d68e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711585755993777083,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p2g9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae4313c7-a926-4da8-bfc2-7dae1422d958,},Annotations:map[string]string{io.k
ubernetes.container.hash: b48261f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:767d65958abd9c1074e5b189fbc2524166f7f1c3a91990207131a500a1efdd7f,PodSandboxId:75462da1806aef166c05438ba0310317dddeb940660721bebc5d8066293c6ba3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711585751111073258,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c865a9c1ae98bee042cbdef78ac1661e,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013303f3ea31fd8274cb3bf3859cfa6cbc15563590eb945d675b932bac6c3efb,PodSandboxId:026825fe8f8b7e6c6ef6d86c33a20708c10c88489447078a1bec9e86518d24c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711585751068690127,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 439c378fd07501a8dead5aed861f13e7,},Annotations:map[string]string{io.kubernetes.container.hash: c5340b9,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a9b4858d44e5afb56269a375649980f786d727b89ff9556550e692636fab82,PodSandboxId:448b875a38262bdf20fb8c0d242c65b5b3bc059b8cc02d268905cafd5eb95bde,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711585751060523840,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db39c19d4710f792ba253c58204b3fd4,},Annotations:map[string]string{io.kubernetes.container.hash: d593ec80,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a59125a9daa4c893fd45d629c12fd055db5fb6cbfc3f0205e1b7c2789a83f8da,PodSandboxId:b9a010a3fb716893ab564b2b70f1f78881ab631cc9bf84f91f59f0235f95422f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711585751027374603,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32db6766711b42ca248c705dc74d448e,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400e13249ff22f6f5fff380a9b09989689add62a554466af727eda89c89d5a8a,PodSandboxId:28401e2f6084f5204fe5c79b6fad4857507ed38493632dcd5793de7ddd053561,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711585440515353167,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-4mbrk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a88eae05-06c4-4a76-9f77-af448b7c0704,},Annotations:map[string]string{io.kubernetes.container.hash: da0006fa,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ed073676e722da08dd05f2af2ceea6db672978bf9a7c2166d8cdd0e68c55cf5,PodSandboxId:ee7f59bd3e83ff182dc2870085e8dde1ac0476f4ed5c212895a2b54a7b1d2145,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711585393172390704,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-g5sdz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0e7db0-a552-4642-825f-da6ee01e6121,},Annotations:map[string]string{io.kubernetes.container.hash: 88796073,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9178b60c4e7c1b7a135a15545d1f6069bd804b1ddc2bb6bb7040925e3401a1,PodSandboxId:ed4a7c41ab819bc8c36d7e2ab25a2a12997d2920f2f2fb3afdbb263c586ec4a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711585393079508402,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 3bd5c912-2288-420d-a7a2-d73f2c34a5ed,},Annotations:map[string]string{io.kubernetes.container.hash: 35ee7de3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe677740910dd5d9206540ecea26aa8ac44bad90beedfeb44189e329f1e7f05,PodSandboxId:ea1011de4d4b6657be5634eb7131426dfaf71adda1e016d49136887bbc0d7a9a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711585391513010402,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncgjv,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 30a4f6bf-5542-476f-b1af-837031a00c50,},Annotations:map[string]string{io.kubernetes.container.hash: d1b55d19,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb82d42c8f86742edc73783b3dfdef5b9cc4658c3ddc375a06b5ea65bd837da5,PodSandboxId:8e1d52aa1272a7dffa0d0477292d339dd15d53962c6c3a485e7ba027258d0cd5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711585391274860794,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p2g9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae4313c7-a926-4da8-bfc
2-7dae1422d958,},Annotations:map[string]string{io.kubernetes.container.hash: b48261f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ae2f434f3deb10aa0a571899142907e2801ba5da048845b37822888969b398,PodSandboxId:fd28139122e292890b1d14a5668c359edc911ba1328ccd09054fbe9ee9099a59,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711585371292676447,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db39c19d4710f792ba253c58204b3fd4,
},Annotations:map[string]string{io.kubernetes.container.hash: d593ec80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e309dc4a326fe31f7f6c1bfc6da7ab8ff22862a35eba9e7aa9b4ff15b9737b6,PodSandboxId:caae122c7bfc36f8d133510bf3080172efebf3bc11889137437f0b9c2716cf79,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711585371277974289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 439c378fd07501a8dead5aed861f13e7,},Annotations:map[string]string{io.kubernetes
.container.hash: c5340b9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fbf200e2f5995f89adef7939641ef941f9f8ac8e54c7d64c2bf74baed49b800,PodSandboxId:5fef10ef65dd3ba44f4c351e380d6247428dbf047e9cca78d17d3928a7e8f0e5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711585371223017912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32db6766711b42ca248c705dc74d448e,},Annotations:map[string]string{io
.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fccdc262ed43923c10e7dc2c2d3f31d086e8cae5e8a1adf35c700e71ac085c6,PodSandboxId:3f002ba61607eb1d8647387d6277f7b6391e7e06d60cc4cc049fed995ef98eb4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711585371223349476,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c865a9c1ae98bee042cbdef78ac1661e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dff6f700-80ac-4800-bf03-f528de47b63f name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:30:39 multinode-200224 crio[2841]: time="2024-03-28 00:30:39.749614841Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=219d0570-0cf2-43cf-8162-5beb0282435c name=/runtime.v1.RuntimeService/Version
	Mar 28 00:30:39 multinode-200224 crio[2841]: time="2024-03-28 00:30:39.749686053Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=219d0570-0cf2-43cf-8162-5beb0282435c name=/runtime.v1.RuntimeService/Version
	Mar 28 00:30:39 multinode-200224 crio[2841]: time="2024-03-28 00:30:39.750962085Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=19ef314e-a0d5-4e0f-aed2-d72770eab168 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:30:39 multinode-200224 crio[2841]: time="2024-03-28 00:30:39.751400183Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711585839751374813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=19ef314e-a0d5-4e0f-aed2-d72770eab168 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:30:39 multinode-200224 crio[2841]: time="2024-03-28 00:30:39.752148561Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=41039a9e-20dd-486d-8cf3-aabab21995d0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:30:39 multinode-200224 crio[2841]: time="2024-03-28 00:30:39.752230677Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41039a9e-20dd-486d-8cf3-aabab21995d0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:30:39 multinode-200224 crio[2841]: time="2024-03-28 00:30:39.752612673Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e48d2ff176f073d38e6ad1fd628ec8a718efffa03851bab7a22dae283a54b95d,PodSandboxId:8bec63c7bb1e58c7d00d3098dd7a3bfabcb829e07757ccc519a45f98731340f4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711585789658607598,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-4mbrk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a88eae05-06c4-4a76-9f77-af448b7c0704,},Annotations:map[string]string{io.kubernetes.container.hash: da0006fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b533e6c726ea6ed1277d347af17f080ce2afc86504a3b08d2b06b561fdce86e7,PodSandboxId:42b54cf59b877cf418cfe9dd5e8db1794aefa4223d9548fda611f535a8836600,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711585756181887042,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncgjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30a4f6bf-5542-476f-b1af-837031a00c50,},Annotations:map[string]string{io.kubernetes.container.hash: d1b55d19,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc9e1fd5fd94b80983e60854d25fe1478e0bc1049691b83957baa96fa04543e,PodSandboxId:4d9db886d6f8c3b35a2feca27f26ed2f26f2ee746d53e25ab202e8185c320a84,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711585756108419330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-g5sdz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0e7db0-a552-4642-825f-da6ee01e6121,},Annotations:map[string]string{io.kubernetes.container.hash: 88796073,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81447b81bf61942708a9748573bd5dd9ca1d2c1891deb60362255634ed437cf1,PodSandboxId:b7913ac412fa3a662b3f635255c1a467a739f5f07d13fe47e08df1a004f99d25,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711585756033916246,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd5c912-2288-420d-a7a2-d73f2c34a5ed,},A
nnotations:map[string]string{io.kubernetes.container.hash: 35ee7de3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c3c1988d5ad53bff3254f6b5fbcb07e9d341c25390a721f9cb15c63e302898,PodSandboxId:8f3debc7fde1148605dc96dad0bf3d49f7387b139ed1b2fe10f0b4d0e9f3d68e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711585755993777083,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p2g9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae4313c7-a926-4da8-bfc2-7dae1422d958,},Annotations:map[string]string{io.k
ubernetes.container.hash: b48261f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:767d65958abd9c1074e5b189fbc2524166f7f1c3a91990207131a500a1efdd7f,PodSandboxId:75462da1806aef166c05438ba0310317dddeb940660721bebc5d8066293c6ba3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711585751111073258,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c865a9c1ae98bee042cbdef78ac1661e,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013303f3ea31fd8274cb3bf3859cfa6cbc15563590eb945d675b932bac6c3efb,PodSandboxId:026825fe8f8b7e6c6ef6d86c33a20708c10c88489447078a1bec9e86518d24c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711585751068690127,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 439c378fd07501a8dead5aed861f13e7,},Annotations:map[string]string{io.kubernetes.container.hash: c5340b9,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a9b4858d44e5afb56269a375649980f786d727b89ff9556550e692636fab82,PodSandboxId:448b875a38262bdf20fb8c0d242c65b5b3bc059b8cc02d268905cafd5eb95bde,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711585751060523840,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db39c19d4710f792ba253c58204b3fd4,},Annotations:map[string]string{io.kubernetes.container.hash: d593ec80,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a59125a9daa4c893fd45d629c12fd055db5fb6cbfc3f0205e1b7c2789a83f8da,PodSandboxId:b9a010a3fb716893ab564b2b70f1f78881ab631cc9bf84f91f59f0235f95422f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711585751027374603,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32db6766711b42ca248c705dc74d448e,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400e13249ff22f6f5fff380a9b09989689add62a554466af727eda89c89d5a8a,PodSandboxId:28401e2f6084f5204fe5c79b6fad4857507ed38493632dcd5793de7ddd053561,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711585440515353167,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-4mbrk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a88eae05-06c4-4a76-9f77-af448b7c0704,},Annotations:map[string]string{io.kubernetes.container.hash: da0006fa,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ed073676e722da08dd05f2af2ceea6db672978bf9a7c2166d8cdd0e68c55cf5,PodSandboxId:ee7f59bd3e83ff182dc2870085e8dde1ac0476f4ed5c212895a2b54a7b1d2145,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711585393172390704,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-g5sdz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0e7db0-a552-4642-825f-da6ee01e6121,},Annotations:map[string]string{io.kubernetes.container.hash: 88796073,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9178b60c4e7c1b7a135a15545d1f6069bd804b1ddc2bb6bb7040925e3401a1,PodSandboxId:ed4a7c41ab819bc8c36d7e2ab25a2a12997d2920f2f2fb3afdbb263c586ec4a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711585393079508402,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 3bd5c912-2288-420d-a7a2-d73f2c34a5ed,},Annotations:map[string]string{io.kubernetes.container.hash: 35ee7de3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe677740910dd5d9206540ecea26aa8ac44bad90beedfeb44189e329f1e7f05,PodSandboxId:ea1011de4d4b6657be5634eb7131426dfaf71adda1e016d49136887bbc0d7a9a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711585391513010402,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncgjv,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 30a4f6bf-5542-476f-b1af-837031a00c50,},Annotations:map[string]string{io.kubernetes.container.hash: d1b55d19,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb82d42c8f86742edc73783b3dfdef5b9cc4658c3ddc375a06b5ea65bd837da5,PodSandboxId:8e1d52aa1272a7dffa0d0477292d339dd15d53962c6c3a485e7ba027258d0cd5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711585391274860794,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p2g9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae4313c7-a926-4da8-bfc
2-7dae1422d958,},Annotations:map[string]string{io.kubernetes.container.hash: b48261f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ae2f434f3deb10aa0a571899142907e2801ba5da048845b37822888969b398,PodSandboxId:fd28139122e292890b1d14a5668c359edc911ba1328ccd09054fbe9ee9099a59,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711585371292676447,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db39c19d4710f792ba253c58204b3fd4,
},Annotations:map[string]string{io.kubernetes.container.hash: d593ec80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e309dc4a326fe31f7f6c1bfc6da7ab8ff22862a35eba9e7aa9b4ff15b9737b6,PodSandboxId:caae122c7bfc36f8d133510bf3080172efebf3bc11889137437f0b9c2716cf79,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711585371277974289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 439c378fd07501a8dead5aed861f13e7,},Annotations:map[string]string{io.kubernetes
.container.hash: c5340b9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fbf200e2f5995f89adef7939641ef941f9f8ac8e54c7d64c2bf74baed49b800,PodSandboxId:5fef10ef65dd3ba44f4c351e380d6247428dbf047e9cca78d17d3928a7e8f0e5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711585371223017912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32db6766711b42ca248c705dc74d448e,},Annotations:map[string]string{io
.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fccdc262ed43923c10e7dc2c2d3f31d086e8cae5e8a1adf35c700e71ac085c6,PodSandboxId:3f002ba61607eb1d8647387d6277f7b6391e7e06d60cc4cc049fed995ef98eb4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711585371223349476,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c865a9c1ae98bee042cbdef78ac1661e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=41039a9e-20dd-486d-8cf3-aabab21995d0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:30:39 multinode-200224 crio[2841]: time="2024-03-28 00:30:39.800646835Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=76df5bac-0469-4b80-91f6-47a5c8ab8ef3 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:30:39 multinode-200224 crio[2841]: time="2024-03-28 00:30:39.800741080Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=76df5bac-0469-4b80-91f6-47a5c8ab8ef3 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:30:39 multinode-200224 crio[2841]: time="2024-03-28 00:30:39.801564599Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=358f91eb-7b02-48ce-a42c-27a1e265a8a4 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:30:39 multinode-200224 crio[2841]: time="2024-03-28 00:30:39.802087438Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711585839802065466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=358f91eb-7b02-48ce-a42c-27a1e265a8a4 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:30:39 multinode-200224 crio[2841]: time="2024-03-28 00:30:39.802606200Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ef33e9d-4439-4b23-9f70-ed0d817232f8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:30:39 multinode-200224 crio[2841]: time="2024-03-28 00:30:39.802665730Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ef33e9d-4439-4b23-9f70-ed0d817232f8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:30:39 multinode-200224 crio[2841]: time="2024-03-28 00:30:39.803100322Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e48d2ff176f073d38e6ad1fd628ec8a718efffa03851bab7a22dae283a54b95d,PodSandboxId:8bec63c7bb1e58c7d00d3098dd7a3bfabcb829e07757ccc519a45f98731340f4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711585789658607598,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-4mbrk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a88eae05-06c4-4a76-9f77-af448b7c0704,},Annotations:map[string]string{io.kubernetes.container.hash: da0006fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b533e6c726ea6ed1277d347af17f080ce2afc86504a3b08d2b06b561fdce86e7,PodSandboxId:42b54cf59b877cf418cfe9dd5e8db1794aefa4223d9548fda611f535a8836600,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711585756181887042,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncgjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30a4f6bf-5542-476f-b1af-837031a00c50,},Annotations:map[string]string{io.kubernetes.container.hash: d1b55d19,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc9e1fd5fd94b80983e60854d25fe1478e0bc1049691b83957baa96fa04543e,PodSandboxId:4d9db886d6f8c3b35a2feca27f26ed2f26f2ee746d53e25ab202e8185c320a84,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711585756108419330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-g5sdz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0e7db0-a552-4642-825f-da6ee01e6121,},Annotations:map[string]string{io.kubernetes.container.hash: 88796073,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81447b81bf61942708a9748573bd5dd9ca1d2c1891deb60362255634ed437cf1,PodSandboxId:b7913ac412fa3a662b3f635255c1a467a739f5f07d13fe47e08df1a004f99d25,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711585756033916246,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd5c912-2288-420d-a7a2-d73f2c34a5ed,},A
nnotations:map[string]string{io.kubernetes.container.hash: 35ee7de3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c3c1988d5ad53bff3254f6b5fbcb07e9d341c25390a721f9cb15c63e302898,PodSandboxId:8f3debc7fde1148605dc96dad0bf3d49f7387b139ed1b2fe10f0b4d0e9f3d68e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711585755993777083,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p2g9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae4313c7-a926-4da8-bfc2-7dae1422d958,},Annotations:map[string]string{io.k
ubernetes.container.hash: b48261f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:767d65958abd9c1074e5b189fbc2524166f7f1c3a91990207131a500a1efdd7f,PodSandboxId:75462da1806aef166c05438ba0310317dddeb940660721bebc5d8066293c6ba3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711585751111073258,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c865a9c1ae98bee042cbdef78ac1661e,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013303f3ea31fd8274cb3bf3859cfa6cbc15563590eb945d675b932bac6c3efb,PodSandboxId:026825fe8f8b7e6c6ef6d86c33a20708c10c88489447078a1bec9e86518d24c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711585751068690127,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 439c378fd07501a8dead5aed861f13e7,},Annotations:map[string]string{io.kubernetes.container.hash: c5340b9,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a9b4858d44e5afb56269a375649980f786d727b89ff9556550e692636fab82,PodSandboxId:448b875a38262bdf20fb8c0d242c65b5b3bc059b8cc02d268905cafd5eb95bde,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711585751060523840,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db39c19d4710f792ba253c58204b3fd4,},Annotations:map[string]string{io.kubernetes.container.hash: d593ec80,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a59125a9daa4c893fd45d629c12fd055db5fb6cbfc3f0205e1b7c2789a83f8da,PodSandboxId:b9a010a3fb716893ab564b2b70f1f78881ab631cc9bf84f91f59f0235f95422f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711585751027374603,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32db6766711b42ca248c705dc74d448e,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400e13249ff22f6f5fff380a9b09989689add62a554466af727eda89c89d5a8a,PodSandboxId:28401e2f6084f5204fe5c79b6fad4857507ed38493632dcd5793de7ddd053561,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711585440515353167,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-4mbrk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a88eae05-06c4-4a76-9f77-af448b7c0704,},Annotations:map[string]string{io.kubernetes.container.hash: da0006fa,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ed073676e722da08dd05f2af2ceea6db672978bf9a7c2166d8cdd0e68c55cf5,PodSandboxId:ee7f59bd3e83ff182dc2870085e8dde1ac0476f4ed5c212895a2b54a7b1d2145,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711585393172390704,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-g5sdz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0e7db0-a552-4642-825f-da6ee01e6121,},Annotations:map[string]string{io.kubernetes.container.hash: 88796073,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9178b60c4e7c1b7a135a15545d1f6069bd804b1ddc2bb6bb7040925e3401a1,PodSandboxId:ed4a7c41ab819bc8c36d7e2ab25a2a12997d2920f2f2fb3afdbb263c586ec4a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711585393079508402,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 3bd5c912-2288-420d-a7a2-d73f2c34a5ed,},Annotations:map[string]string{io.kubernetes.container.hash: 35ee7de3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe677740910dd5d9206540ecea26aa8ac44bad90beedfeb44189e329f1e7f05,PodSandboxId:ea1011de4d4b6657be5634eb7131426dfaf71adda1e016d49136887bbc0d7a9a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711585391513010402,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncgjv,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 30a4f6bf-5542-476f-b1af-837031a00c50,},Annotations:map[string]string{io.kubernetes.container.hash: d1b55d19,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb82d42c8f86742edc73783b3dfdef5b9cc4658c3ddc375a06b5ea65bd837da5,PodSandboxId:8e1d52aa1272a7dffa0d0477292d339dd15d53962c6c3a485e7ba027258d0cd5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711585391274860794,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p2g9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae4313c7-a926-4da8-bfc
2-7dae1422d958,},Annotations:map[string]string{io.kubernetes.container.hash: b48261f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ae2f434f3deb10aa0a571899142907e2801ba5da048845b37822888969b398,PodSandboxId:fd28139122e292890b1d14a5668c359edc911ba1328ccd09054fbe9ee9099a59,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711585371292676447,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db39c19d4710f792ba253c58204b3fd4,
},Annotations:map[string]string{io.kubernetes.container.hash: d593ec80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e309dc4a326fe31f7f6c1bfc6da7ab8ff22862a35eba9e7aa9b4ff15b9737b6,PodSandboxId:caae122c7bfc36f8d133510bf3080172efebf3bc11889137437f0b9c2716cf79,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711585371277974289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 439c378fd07501a8dead5aed861f13e7,},Annotations:map[string]string{io.kubernetes
.container.hash: c5340b9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fbf200e2f5995f89adef7939641ef941f9f8ac8e54c7d64c2bf74baed49b800,PodSandboxId:5fef10ef65dd3ba44f4c351e380d6247428dbf047e9cca78d17d3928a7e8f0e5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711585371223017912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32db6766711b42ca248c705dc74d448e,},Annotations:map[string]string{io
.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fccdc262ed43923c10e7dc2c2d3f31d086e8cae5e8a1adf35c700e71ac085c6,PodSandboxId:3f002ba61607eb1d8647387d6277f7b6391e7e06d60cc4cc049fed995ef98eb4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711585371223349476,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c865a9c1ae98bee042cbdef78ac1661e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ef33e9d-4439-4b23-9f70-ed0d817232f8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e48d2ff176f07       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      50 seconds ago       Running             busybox                   1                   8bec63c7bb1e5       busybox-7fdf7869d9-4mbrk
	b533e6c726ea6       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   42b54cf59b877       kindnet-ncgjv
	4bc9e1fd5fd94       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   4d9db886d6f8c       coredns-76f75df574-g5sdz
	81447b81bf619       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   b7913ac412fa3       storage-provisioner
	46c3c1988d5ad       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      About a minute ago   Running             kube-proxy                1                   8f3debc7fde11       kube-proxy-p2g9p
	767d65958abd9       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      About a minute ago   Running             kube-scheduler            1                   75462da1806ae       kube-scheduler-multinode-200224
	013303f3ea31f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   026825fe8f8b7       etcd-multinode-200224
	b3a9b4858d44e       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      About a minute ago   Running             kube-apiserver            1                   448b875a38262       kube-apiserver-multinode-200224
	a59125a9daa4c       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      About a minute ago   Running             kube-controller-manager   1                   b9a010a3fb716       kube-controller-manager-multinode-200224
	400e13249ff22       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   28401e2f6084f       busybox-7fdf7869d9-4mbrk
	9ed073676e722       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   ee7f59bd3e83f       coredns-76f75df574-g5sdz
	ce9178b60c4e7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   ed4a7c41ab819       storage-provisioner
	dbe677740910d       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago        Exited              kindnet-cni               0                   ea1011de4d4b6       kindnet-ncgjv
	fb82d42c8f867       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      7 minutes ago        Exited              kube-proxy                0                   8e1d52aa1272a       kube-proxy-p2g9p
	68ae2f434f3de       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      7 minutes ago        Exited              kube-apiserver            0                   fd28139122e29       kube-apiserver-multinode-200224
	0e309dc4a326f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago        Exited              etcd                      0                   caae122c7bfc3       etcd-multinode-200224
	3fccdc262ed43       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      7 minutes ago        Exited              kube-scheduler            0                   3f002ba61607e       kube-scheduler-multinode-200224
	6fbf200e2f599       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      7 minutes ago        Exited              kube-controller-manager   0                   5fef10ef65dd3       kube-controller-manager-multinode-200224
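For reference, the container status table above is CRI-O's view of every container on the primary node, including the exited attempt-0 containers left over from the first start. A roughly equivalent listing can be pulled by hand from the node; the command below is illustrative only and was not part of the test run:

    minikube -p multinode-200224 ssh -- sudo crictl ps -a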
	
	
	==> coredns [4bc9e1fd5fd94b80983e60854d25fe1478e0bc1049691b83957baa96fa04543e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50673 - 49026 "HINFO IN 3489849291402505905.2552829470997845361. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018228535s
	
	
	==> coredns [9ed073676e722da08dd05f2af2ceea6db672978bf9a7c2166d8cdd0e68c55cf5] <==
	[INFO] 10.244.0.3:40165 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001805092s
	[INFO] 10.244.0.3:58145 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095334s
	[INFO] 10.244.0.3:37683 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000038462s
	[INFO] 10.244.0.3:58166 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001392646s
	[INFO] 10.244.0.3:58146 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000142948s
	[INFO] 10.244.0.3:34408 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000057186s
	[INFO] 10.244.0.3:32874 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079804s
	[INFO] 10.244.1.2:48151 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000266274s
	[INFO] 10.244.1.2:59335 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114137s
	[INFO] 10.244.1.2:44040 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158796s
	[INFO] 10.244.1.2:43600 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000166812s
	[INFO] 10.244.0.3:47520 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000081783s
	[INFO] 10.244.0.3:40507 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177821s
	[INFO] 10.244.0.3:45723 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093065s
	[INFO] 10.244.0.3:44022 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114454s
	[INFO] 10.244.1.2:40139 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177278s
	[INFO] 10.244.1.2:53375 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000227157s
	[INFO] 10.244.1.2:59766 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000141481s
	[INFO] 10.244.1.2:56308 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00017968s
	[INFO] 10.244.0.3:57752 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000228663s
	[INFO] 10.244.0.3:47439 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000141543s
	[INFO] 10.244.0.3:55317 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000196663s
	[INFO] 10.244.0.3:60667 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000123004s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-200224
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-200224
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=multinode-200224
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_28T00_22_57_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 00:22:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-200224
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 00:30:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 00:29:14 +0000   Thu, 28 Mar 2024 00:22:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 00:29:14 +0000   Thu, 28 Mar 2024 00:22:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 00:29:14 +0000   Thu, 28 Mar 2024 00:22:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 00:29:14 +0000   Thu, 28 Mar 2024 00:23:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.88
	  Hostname:    multinode-200224
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 20370049e416440389c3dd654a8f9e60
	  System UUID:                20370049-e416-4403-89c3-dd654a8f9e60
	  Boot ID:                    0cc20c18-f5e7-47f6-b7fe-26dd73344a27
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-4mbrk                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 coredns-76f75df574-g5sdz                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m31s
	  kube-system                 etcd-multinode-200224                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m43s
	  kube-system                 kindnet-ncgjv                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m31s
	  kube-system                 kube-apiserver-multinode-200224             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m46s
	  kube-system                 kube-controller-manager-multinode-200224    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m43s
	  kube-system                 kube-proxy-p2g9p                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 kube-scheduler-multinode-200224             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m43s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m28s                  kube-proxy       
	  Normal  Starting                 83s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  7m50s (x8 over 7m50s)  kubelet          Node multinode-200224 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m50s (x8 over 7m50s)  kubelet          Node multinode-200224 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m50s (x7 over 7m50s)  kubelet          Node multinode-200224 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m43s                  kubelet          Node multinode-200224 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m43s                  kubelet          Node multinode-200224 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m43s                  kubelet          Node multinode-200224 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m31s                  node-controller  Node multinode-200224 event: Registered Node multinode-200224 in Controller
	  Normal  NodeReady                7m28s                  kubelet          Node multinode-200224 status is now: NodeReady
	  Normal  Starting                 90s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  90s (x8 over 90s)      kubelet          Node multinode-200224 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    90s (x8 over 90s)      kubelet          Node multinode-200224 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     90s (x7 over 90s)      kubelet          Node multinode-200224 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  90s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           73s                    node-controller  Node multinode-200224 event: Registered Node multinode-200224 in Controller
	
	
	Name:               multinode-200224-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-200224-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=multinode-200224
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_28T00_29_58_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 00:29:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-200224-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 00:30:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 00:30:28 +0000   Thu, 28 Mar 2024 00:29:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 00:30:28 +0000   Thu, 28 Mar 2024 00:29:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 00:30:28 +0000   Thu, 28 Mar 2024 00:29:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 00:30:28 +0000   Thu, 28 Mar 2024 00:30:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.161
	  Hostname:    multinode-200224-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9c96bf1424864470866d5d24da138300
	  System UUID:                9c96bf14-2486-4470-866d-5d24da138300
	  Boot ID:                    352ad25b-71f6-45c7-b2a9-86468eea75fa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-x4g8t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kindnet-fdhcl               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m55s
	  kube-system                 kube-proxy-pgph8            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 38s                    kube-proxy       
	  Normal  Starting                 6m50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m55s (x2 over 6m55s)  kubelet          Node multinode-200224-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m55s (x2 over 6m55s)  kubelet          Node multinode-200224-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m55s (x2 over 6m55s)  kubelet          Node multinode-200224-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m46s                  kubelet          Node multinode-200224-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  43s (x2 over 43s)      kubelet          Node multinode-200224-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x2 over 43s)      kubelet          Node multinode-200224-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x2 over 43s)      kubelet          Node multinode-200224-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  43s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           38s                    node-controller  Node multinode-200224-m02 event: Registered Node multinode-200224-m02 in Controller
	  Normal  NodeReady                33s                    kubelet          Node multinode-200224-m02 status is now: NodeReady
	
	
	Name:               multinode-200224-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-200224-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=multinode-200224
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_28T00_30_28_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 00:30:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-200224-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 00:30:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 00:30:36 +0000   Thu, 28 Mar 2024 00:30:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 00:30:36 +0000   Thu, 28 Mar 2024 00:30:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 00:30:36 +0000   Thu, 28 Mar 2024 00:30:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 00:30:36 +0000   Thu, 28 Mar 2024 00:30:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.22
	  Hostname:    multinode-200224-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 67a56df49b0842f6ba239cb36a6d8b67
	  System UUID:                67a56df4-9b08-42f6-ba23-9cb36a6d8b67
	  Boot ID:                    bc1dbde6-7237-4976-8f20-c4f5924b0842
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-dcqkg       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m6s
	  kube-system                 kube-proxy-5ws9q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m18s                  kube-proxy       
	  Normal  Starting                 6m                     kube-proxy       
	  Normal  Starting                 8s                     kube-proxy       
	  Normal  NodeHasSufficientMemory  6m6s (x2 over 6m6s)    kubelet          Node multinode-200224-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m6s (x2 over 6m6s)    kubelet          Node multinode-200224-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m6s (x2 over 6m6s)    kubelet          Node multinode-200224-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m6s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m56s                  kubelet          Node multinode-200224-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m24s (x2 over 5m24s)  kubelet          Node multinode-200224-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m24s (x2 over 5m24s)  kubelet          Node multinode-200224-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  5m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m24s (x2 over 5m24s)  kubelet          Node multinode-200224-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m24s                  kubelet          Starting kubelet.
	  Normal  NodeReady                5m15s                  kubelet          Node multinode-200224-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  13s (x2 over 13s)      kubelet          Node multinode-200224-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x2 over 13s)      kubelet          Node multinode-200224-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x2 over 13s)      kubelet          Node multinode-200224-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s                     node-controller  Node multinode-200224-m03 event: Registered Node multinode-200224-m03 in Controller
	  Normal  NodeReady                4s                     kubelet          Node multinode-200224-m03 status is now: NodeReady
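The three node descriptions above are standard "kubectl describe nodes" output for the control-plane node and both workers. Assuming the kubeconfig context carries the profile name, as minikube normally configures it, the same view can be reproduced with:

    kubectl --context multinode-200224 describe nodes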
	
	
	==> dmesg <==
	[  +0.071448] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.174544] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.132653] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.273145] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.481088] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +0.066475] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.086709] systemd-fstab-generator[955]: Ignoring "noauto" option for root device
	[  +0.065307] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.701905] systemd-fstab-generator[1289]: Ignoring "noauto" option for root device
	[  +0.081652] kauditd_printk_skb: 69 callbacks suppressed
	[Mar28 00:23] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.128553] systemd-fstab-generator[1477]: Ignoring "noauto" option for root device
	[ +48.394690] kauditd_printk_skb: 82 callbacks suppressed
	[Mar28 00:29] systemd-fstab-generator[2758]: Ignoring "noauto" option for root device
	[  +0.145638] systemd-fstab-generator[2770]: Ignoring "noauto" option for root device
	[  +0.184336] systemd-fstab-generator[2786]: Ignoring "noauto" option for root device
	[  +0.141313] systemd-fstab-generator[2798]: Ignoring "noauto" option for root device
	[  +0.325205] systemd-fstab-generator[2826]: Ignoring "noauto" option for root device
	[  +3.906936] systemd-fstab-generator[2926]: Ignoring "noauto" option for root device
	[  +1.860741] systemd-fstab-generator[3052]: Ignoring "noauto" option for root device
	[  +0.085858] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.714290] kauditd_printk_skb: 52 callbacks suppressed
	[ +11.261728] kauditd_printk_skb: 32 callbacks suppressed
	[  +4.265730] systemd-fstab-generator[3872]: Ignoring "noauto" option for root device
	[ +18.199637] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [013303f3ea31fd8274cb3bf3859cfa6cbc15563590eb945d675b932bac6c3efb] <==
	{"level":"info","ts":"2024-03-28T00:29:11.718317Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-28T00:29:11.718328Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-28T00:29:11.718622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af switched to configuration voters=(12253120571151802799)"}
	{"level":"info","ts":"2024-03-28T00:29:11.718703Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9f9d2ecdb39156b6","local-member-id":"aa0bd43d5988e1af","added-peer-id":"aa0bd43d5988e1af","added-peer-peer-urls":["https://192.168.39.88:2380"]}
	{"level":"info","ts":"2024-03-28T00:29:11.718896Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9f9d2ecdb39156b6","local-member-id":"aa0bd43d5988e1af","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T00:29:11.718947Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T00:29:11.731135Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-28T00:29:11.73144Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aa0bd43d5988e1af","initial-advertise-peer-urls":["https://192.168.39.88:2380"],"listen-peer-urls":["https://192.168.39.88:2380"],"advertise-client-urls":["https://192.168.39.88:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.88:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-28T00:29:11.736871Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-28T00:29:11.737004Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.88:2380"}
	{"level":"info","ts":"2024-03-28T00:29:11.742982Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.88:2380"}
	{"level":"info","ts":"2024-03-28T00:29:13.147197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-28T00:29:13.147314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-28T00:29:13.147381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af received MsgPreVoteResp from aa0bd43d5988e1af at term 2"}
	{"level":"info","ts":"2024-03-28T00:29:13.147435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af became candidate at term 3"}
	{"level":"info","ts":"2024-03-28T00:29:13.147461Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af received MsgVoteResp from aa0bd43d5988e1af at term 3"}
	{"level":"info","ts":"2024-03-28T00:29:13.147488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af became leader at term 3"}
	{"level":"info","ts":"2024-03-28T00:29:13.14752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aa0bd43d5988e1af elected leader aa0bd43d5988e1af at term 3"}
	{"level":"info","ts":"2024-03-28T00:29:13.153379Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aa0bd43d5988e1af","local-member-attributes":"{Name:multinode-200224 ClientURLs:[https://192.168.39.88:2379]}","request-path":"/0/members/aa0bd43d5988e1af/attributes","cluster-id":"9f9d2ecdb39156b6","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-28T00:29:13.153486Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T00:29:13.153741Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-28T00:29:13.15385Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-28T00:29:13.153872Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T00:29:13.155838Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.88:2379"}
	{"level":"info","ts":"2024-03-28T00:29:13.156103Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [0e309dc4a326fe31f7f6c1bfc6da7ab8ff22862a35eba9e7aa9b4ff15b9737b6] <==
	{"level":"warn","ts":"2024-03-28T00:23:45.912875Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.15647ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-200224-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-28T00:23:45.912982Z","caller":"traceutil/trace.go:171","msg":"trace[513155614] range","detail":"{range_begin:/registry/csinodes/multinode-200224-m02; range_end:; response_count:0; response_revision:475; }","duration":"248.302011ms","start":"2024-03-28T00:23:45.66467Z","end":"2024-03-28T00:23:45.912972Z","steps":["trace[513155614] 'agreement among raft nodes before linearized reading'  (duration: 248.170488ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T00:23:45.912916Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.694191ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-28T00:23:45.913175Z","caller":"traceutil/trace.go:171","msg":"trace[2139540438] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:475; }","duration":"196.982746ms","start":"2024-03-28T00:23:45.716183Z","end":"2024-03-28T00:23:45.913166Z","steps":["trace[2139540438] 'agreement among raft nodes before linearized reading'  (duration: 196.702156ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-28T00:24:34.876231Z","caller":"traceutil/trace.go:171","msg":"trace[1149993796] transaction","detail":"{read_only:false; response_revision:602; number_of_response:1; }","duration":"237.964811ms","start":"2024-03-28T00:24:34.638234Z","end":"2024-03-28T00:24:34.876199Z","steps":["trace[1149993796] 'process raft request'  (duration: 237.853876ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-28T00:24:34.877391Z","caller":"traceutil/trace.go:171","msg":"trace[1884719100] linearizableReadLoop","detail":"{readStateIndex:634; appliedIndex:633; }","duration":"161.414976ms","start":"2024-03-28T00:24:34.715964Z","end":"2024-03-28T00:24:34.877379Z","steps":["trace[1884719100] 'read index received'  (duration: 160.552208ms)","trace[1884719100] 'applied index is now lower than readState.Index'  (duration: 862.125µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-28T00:24:34.877631Z","caller":"traceutil/trace.go:171","msg":"trace[1173800584] transaction","detail":"{read_only:false; response_revision:603; number_of_response:1; }","duration":"182.925356ms","start":"2024-03-28T00:24:34.694694Z","end":"2024-03-28T00:24:34.877619Z","steps":["trace[1173800584] 'process raft request'  (duration: 182.622641ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T00:24:34.877964Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.981998ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-28T00:24:34.878039Z","caller":"traceutil/trace.go:171","msg":"trace[2087793377] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:603; }","duration":"162.112455ms","start":"2024-03-28T00:24:34.715904Z","end":"2024-03-28T00:24:34.878017Z","steps":["trace[2087793377] 'agreement among raft nodes before linearized reading'  (duration: 161.960396ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T00:24:37.94996Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.222579ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-28T00:24:37.950075Z","caller":"traceutil/trace.go:171","msg":"trace[1546703697] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:634; }","duration":"153.353017ms","start":"2024-03-28T00:24:37.796711Z","end":"2024-03-28T00:24:37.950064Z","steps":["trace[1546703697] 'range keys from in-memory index tree'  (duration: 153.208739ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T00:24:37.950519Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.805898ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16262373470336570209 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:628 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-28T00:24:37.950599Z","caller":"traceutil/trace.go:171","msg":"trace[977788010] transaction","detail":"{read_only:false; response_revision:635; number_of_response:1; }","duration":"172.63094ms","start":"2024-03-28T00:24:37.777958Z","end":"2024-03-28T00:24:37.950588Z","steps":["trace[977788010] 'process raft request'  (duration: 42.477261ms)","trace[977788010] 'compare'  (duration: 129.480573ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-28T00:24:38.229119Z","caller":"traceutil/trace.go:171","msg":"trace[1517688260] transaction","detail":"{read_only:false; response_revision:636; number_of_response:1; }","duration":"193.900501ms","start":"2024-03-28T00:24:38.035197Z","end":"2024-03-28T00:24:38.229097Z","steps":["trace[1517688260] 'process raft request'  (duration: 123.929714ms)","trace[1517688260] 'compare'  (duration: 69.833581ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-28T00:27:32.205163Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-28T00:27:32.205276Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-200224","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.88:2380"],"advertise-client-urls":["https://192.168.39.88:2379"]}
	{"level":"warn","ts":"2024-03-28T00:27:32.205431Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-28T00:27:32.205521Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/03/28 00:27:32 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-28T00:27:32.248138Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.88:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-28T00:27:32.249531Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.88:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-28T00:27:32.249762Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aa0bd43d5988e1af","current-leader-member-id":"aa0bd43d5988e1af"}
	{"level":"info","ts":"2024-03-28T00:27:32.262043Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.88:2380"}
	{"level":"info","ts":"2024-03-28T00:27:32.262151Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.88:2380"}
	{"level":"info","ts":"2024-03-28T00:27:32.26216Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-200224","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.88:2380"],"advertise-client-urls":["https://192.168.39.88:2379"]}
	
	
	==> kernel <==
	 00:30:40 up 8 min,  0 users,  load average: 0.84, 0.43, 0.20
	Linux multinode-200224 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b533e6c726ea6ed1277d347af17f080ce2afc86504a3b08d2b06b561fdce86e7] <==
	I0328 00:29:57.024162       1 main.go:227] handling current node
	I0328 00:29:57.024239       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0328 00:29:57.024285       1 main.go:250] Node multinode-200224-m03 has CIDR [10.244.3.0/24] 
	I0328 00:30:07.041959       1 main.go:223] Handling node with IPs: map[192.168.39.88:{}]
	I0328 00:30:07.042054       1 main.go:227] handling current node
	I0328 00:30:07.042077       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0328 00:30:07.042096       1 main.go:250] Node multinode-200224-m02 has CIDR [10.244.1.0/24] 
	I0328 00:30:07.042208       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0328 00:30:07.042229       1 main.go:250] Node multinode-200224-m03 has CIDR [10.244.3.0/24] 
	I0328 00:30:17.052218       1 main.go:223] Handling node with IPs: map[192.168.39.88:{}]
	I0328 00:30:17.052504       1 main.go:227] handling current node
	I0328 00:30:17.052541       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0328 00:30:17.052563       1 main.go:250] Node multinode-200224-m02 has CIDR [10.244.1.0/24] 
	I0328 00:30:17.052695       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0328 00:30:17.052721       1 main.go:250] Node multinode-200224-m03 has CIDR [10.244.3.0/24] 
	I0328 00:30:27.065072       1 main.go:223] Handling node with IPs: map[192.168.39.88:{}]
	I0328 00:30:27.065113       1 main.go:227] handling current node
	I0328 00:30:27.065125       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0328 00:30:27.065130       1 main.go:250] Node multinode-200224-m02 has CIDR [10.244.1.0/24] 
	I0328 00:30:37.070907       1 main.go:223] Handling node with IPs: map[192.168.39.88:{}]
	I0328 00:30:37.071049       1 main.go:227] handling current node
	I0328 00:30:37.071078       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0328 00:30:37.071098       1 main.go:250] Node multinode-200224-m02 has CIDR [10.244.1.0/24] 
	I0328 00:30:37.071274       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0328 00:30:37.071314       1 main.go:250] Node multinode-200224-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [dbe677740910dd5d9206540ecea26aa8ac44bad90beedfeb44189e329f1e7f05] <==
	I0328 00:26:42.478828       1 main.go:250] Node multinode-200224-m03 has CIDR [10.244.3.0/24] 
	I0328 00:26:52.485376       1 main.go:223] Handling node with IPs: map[192.168.39.88:{}]
	I0328 00:26:52.485422       1 main.go:227] handling current node
	I0328 00:26:52.485433       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0328 00:26:52.485439       1 main.go:250] Node multinode-200224-m02 has CIDR [10.244.1.0/24] 
	I0328 00:26:52.485564       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0328 00:26:52.485594       1 main.go:250] Node multinode-200224-m03 has CIDR [10.244.3.0/24] 
	I0328 00:27:02.494950       1 main.go:223] Handling node with IPs: map[192.168.39.88:{}]
	I0328 00:27:02.494998       1 main.go:227] handling current node
	I0328 00:27:02.495010       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0328 00:27:02.495016       1 main.go:250] Node multinode-200224-m02 has CIDR [10.244.1.0/24] 
	I0328 00:27:02.495125       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0328 00:27:02.495130       1 main.go:250] Node multinode-200224-m03 has CIDR [10.244.3.0/24] 
	I0328 00:27:12.508506       1 main.go:223] Handling node with IPs: map[192.168.39.88:{}]
	I0328 00:27:12.508595       1 main.go:227] handling current node
	I0328 00:27:12.508606       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0328 00:27:12.508618       1 main.go:250] Node multinode-200224-m02 has CIDR [10.244.1.0/24] 
	I0328 00:27:12.508726       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0328 00:27:12.508752       1 main.go:250] Node multinode-200224-m03 has CIDR [10.244.3.0/24] 
	I0328 00:27:22.517728       1 main.go:223] Handling node with IPs: map[192.168.39.88:{}]
	I0328 00:27:22.517776       1 main.go:227] handling current node
	I0328 00:27:22.517842       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0328 00:27:22.517851       1 main.go:250] Node multinode-200224-m02 has CIDR [10.244.1.0/24] 
	I0328 00:27:22.517977       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0328 00:27:22.518003       1 main.go:250] Node multinode-200224-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [68ae2f434f3deb10aa0a571899142907e2801ba5da048845b37822888969b398] <==
	I0328 00:27:32.234872       1 controller.go:129] Ending legacy_token_tracking_controller
	I0328 00:27:32.234894       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0328 00:27:32.234921       1 available_controller.go:439] Shutting down AvailableConditionController
	W0328 00:27:32.234986       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0328 00:27:32.235126       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0328 00:27:32.235202       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0328 00:27:32.235266       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0328 00:27:32.235300       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0328 00:27:32.235336       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0328 00:27:32.235501       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0328 00:27:32.235587       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0328 00:27:32.235655       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0328 00:27:32.235694       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0328 00:27:32.235898       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0328 00:27:32.236131       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 00:27:32.236208       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 00:27:32.236245       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0328 00:27:32.236275       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0328 00:27:32.236344       1 controller.go:84] Shutting down OpenAPI AggregationController
	W0328 00:27:32.236453       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0328 00:27:32.236551       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0328 00:27:32.236622       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 00:27:32.241073       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	W0328 00:27:32.241698       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0328 00:27:32.242860       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b3a9b4858d44e5afb56269a375649980f786d727b89ff9556550e692636fab82] <==
	I0328 00:29:14.491745       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0328 00:29:14.491843       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 00:29:14.491976       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 00:29:14.499166       1 controller.go:78] Starting OpenAPI AggregationController
	I0328 00:29:14.572715       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0328 00:29:14.585064       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0328 00:29:14.585323       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0328 00:29:14.597974       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0328 00:29:14.598045       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0328 00:29:14.598253       1 shared_informer.go:318] Caches are synced for configmaps
	I0328 00:29:14.598444       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0328 00:29:14.598483       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0328 00:29:14.609031       1 aggregator.go:165] initial CRD sync complete...
	I0328 00:29:14.609066       1 autoregister_controller.go:141] Starting autoregister controller
	I0328 00:29:14.609073       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0328 00:29:14.609079       1 cache.go:39] Caches are synced for autoregister controller
	I0328 00:29:14.623570       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0328 00:29:15.506430       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0328 00:29:17.001758       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0328 00:29:17.129842       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0328 00:29:17.145941       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0328 00:29:17.239084       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0328 00:29:17.247134       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0328 00:29:27.024718       1 controller.go:624] quota admission added evaluator for: endpoints
	I0328 00:29:27.116284       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [6fbf200e2f5995f89adef7939641ef941f9f8ac8e54c7d64c2bf74baed49b800] <==
	I0328 00:24:01.327035       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="5.82092ms"
	I0328 00:24:01.328984       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="61.232µs"
	I0328 00:24:34.883979       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-200224-m03\" does not exist"
	I0328 00:24:34.884903       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-200224-m02"
	I0328 00:24:34.908220       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5ws9q"
	I0328 00:24:34.908290       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-dcqkg"
	I0328 00:24:34.929329       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-200224-m03" podCIDRs=["10.244.2.0/24"]
	I0328 00:24:39.053327       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-200224-m03"
	I0328 00:24:39.053497       1 event.go:376] "Event occurred" object="multinode-200224-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-200224-m03 event: Registered Node multinode-200224-m03 in Controller"
	I0328 00:24:44.911414       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-200224-m02"
	I0328 00:25:15.754987       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-200224-m02"
	I0328 00:25:16.835251       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-200224-m02"
	I0328 00:25:16.835380       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-200224-m03\" does not exist"
	I0328 00:25:16.859189       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-200224-m03" podCIDRs=["10.244.3.0/24"]
	I0328 00:25:25.475972       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-200224-m02"
	I0328 00:26:09.110689       1 event.go:376] "Event occurred" object="multinode-200224-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-200224-m02 status is now: NodeNotReady"
	I0328 00:26:09.111024       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-200224-m03"
	I0328 00:26:09.116595       1 event.go:376] "Event occurred" object="multinode-200224-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-200224-m03 status is now: NodeNotReady"
	I0328 00:26:09.128985       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-pgph8" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 00:26:09.135944       1 event.go:376] "Event occurred" object="kube-system/kindnet-dcqkg" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 00:26:09.149774       1 event.go:376] "Event occurred" object="kube-system/kindnet-fdhcl" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 00:26:09.154197       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-5ws9q" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 00:26:09.169606       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-2h8w6" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 00:26:09.175221       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="9.969083ms"
	I0328 00:26:09.175459       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="103.708µs"
	
	
	==> kube-controller-manager [a59125a9daa4c893fd45d629c12fd055db5fb6cbfc3f0205e1b7c2789a83f8da] <==
	I0328 00:29:57.029399       1 event.go:376] "Event occurred" object="multinode-200224-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-200224-m02 event: Removing Node multinode-200224-m02 from Controller"
	I0328 00:29:57.102028       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-200224-m02\" does not exist"
	I0328 00:29:57.102237       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-2h8w6" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-2h8w6"
	I0328 00:29:57.115122       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-200224-m02" podCIDRs=["10.244.1.0/24"]
	I0328 00:29:58.197632       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="72.771µs"
	I0328 00:29:58.994690       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="295.537µs"
	I0328 00:29:59.009026       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="58.147µs"
	I0328 00:29:59.023774       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="62.379µs"
	I0328 00:29:59.070867       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="147.092µs"
	I0328 00:29:59.083537       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="71.893µs"
	I0328 00:29:59.088234       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="55.919µs"
	I0328 00:30:02.031689       1 event.go:376] "Event occurred" object="multinode-200224-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-200224-m02 event: Registered Node multinode-200224-m02 in Controller"
	I0328 00:30:07.165674       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-200224-m02"
	I0328 00:30:07.186702       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="51.446µs"
	I0328 00:30:07.203459       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="38.529µs"
	I0328 00:30:11.338522       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="6.263906ms"
	I0328 00:30:11.340038       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="78.908µs"
	I0328 00:30:12.044033       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-x4g8t" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-x4g8t"
	I0328 00:30:25.893439       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-200224-m02"
	I0328 00:30:27.047142       1 event.go:376] "Event occurred" object="multinode-200224-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-200224-m03 event: Removing Node multinode-200224-m03 from Controller"
	I0328 00:30:27.100679       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-200224-m02"
	I0328 00:30:27.100889       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-200224-m03\" does not exist"
	I0328 00:30:27.125054       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-200224-m03" podCIDRs=["10.244.2.0/24"]
	I0328 00:30:32.048001       1 event.go:376] "Event occurred" object="multinode-200224-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-200224-m03 event: Registered Node multinode-200224-m03 in Controller"
	I0328 00:30:36.615261       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-200224-m02"
	
	
	==> kube-proxy [46c3c1988d5ad53bff3254f6b5fbcb07e9d341c25390a721f9cb15c63e302898] <==
	I0328 00:29:16.335033       1 server_others.go:72] "Using iptables proxy"
	I0328 00:29:16.362287       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.88"]
	I0328 00:29:16.496180       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 00:29:16.496208       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 00:29:16.496229       1 server_others.go:168] "Using iptables Proxier"
	I0328 00:29:16.501491       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 00:29:16.501903       1 server.go:865] "Version info" version="v1.29.3"
	I0328 00:29:16.501919       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 00:29:16.505390       1 config.go:188] "Starting service config controller"
	I0328 00:29:16.506457       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 00:29:16.508966       1 config.go:97] "Starting endpoint slice config controller"
	I0328 00:29:16.508979       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 00:29:16.510528       1 config.go:315] "Starting node config controller"
	I0328 00:29:16.510543       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 00:29:16.608077       1 shared_informer.go:318] Caches are synced for service config
	I0328 00:29:16.611926       1 shared_informer.go:318] Caches are synced for node config
	I0328 00:29:16.612042       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [fb82d42c8f86742edc73783b3dfdef5b9cc4658c3ddc375a06b5ea65bd837da5] <==
	I0328 00:23:11.406084       1 server_others.go:72] "Using iptables proxy"
	I0328 00:23:11.415672       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.88"]
	I0328 00:23:11.454636       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 00:23:11.454757       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 00:23:11.454837       1 server_others.go:168] "Using iptables Proxier"
	I0328 00:23:11.457597       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 00:23:11.457935       1 server.go:865] "Version info" version="v1.29.3"
	I0328 00:23:11.457965       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 00:23:11.459204       1 config.go:188] "Starting service config controller"
	I0328 00:23:11.459241       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 00:23:11.459259       1 config.go:97] "Starting endpoint slice config controller"
	I0328 00:23:11.459263       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 00:23:11.459549       1 config.go:315] "Starting node config controller"
	I0328 00:23:11.459584       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 00:23:11.560225       1 shared_informer.go:318] Caches are synced for service config
	I0328 00:23:11.560300       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0328 00:23:11.560537       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [3fccdc262ed43923c10e7dc2c2d3f31d086e8cae5e8a1adf35c700e71ac085c6] <==
	W0328 00:22:53.790869       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0328 00:22:53.793062       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0328 00:22:53.790960       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0328 00:22:53.793102       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0328 00:22:53.791017       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0328 00:22:53.793116       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0328 00:22:53.791051       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0328 00:22:53.793128       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0328 00:22:53.791093       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0328 00:22:53.793139       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0328 00:22:54.712602       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0328 00:22:54.713055       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0328 00:22:54.777247       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0328 00:22:54.777297       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0328 00:22:54.851310       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0328 00:22:54.851436       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0328 00:22:54.878581       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0328 00:22:54.879198       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0328 00:22:55.011033       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0328 00:22:55.011156       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0328 00:22:56.672884       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 00:27:32.223729       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0328 00:27:32.223914       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0328 00:27:32.224384       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0328 00:27:32.224624       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [767d65958abd9c1074e5b189fbc2524166f7f1c3a91990207131a500a1efdd7f] <==
	I0328 00:29:12.172356       1 serving.go:380] Generated self-signed cert in-memory
	W0328 00:29:14.535399       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0328 00:29:14.536038       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0328 00:29:14.536173       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0328 00:29:14.536199       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0328 00:29:14.610455       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0328 00:29:14.610596       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 00:29:14.613219       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0328 00:29:14.613700       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0328 00:29:14.616539       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 00:29:14.613919       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 00:29:14.716775       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 28 00:29:15 multinode-200224 kubelet[3059]: I0328 00:29:15.339704    3059 topology_manager.go:215] "Topology Admit Handler" podUID="30a4f6bf-5542-476f-b1af-837031a00c50" podNamespace="kube-system" podName="kindnet-ncgjv"
	Mar 28 00:29:15 multinode-200224 kubelet[3059]: I0328 00:29:15.339911    3059 topology_manager.go:215] "Topology Admit Handler" podUID="3bd5c912-2288-420d-a7a2-d73f2c34a5ed" podNamespace="kube-system" podName="storage-provisioner"
	Mar 28 00:29:15 multinode-200224 kubelet[3059]: I0328 00:29:15.339989    3059 topology_manager.go:215] "Topology Admit Handler" podUID="a88eae05-06c4-4a76-9f77-af448b7c0704" podNamespace="default" podName="busybox-7fdf7869d9-4mbrk"
	Mar 28 00:29:15 multinode-200224 kubelet[3059]: I0328 00:29:15.440761    3059 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 28 00:29:15 multinode-200224 kubelet[3059]: I0328 00:29:15.452426    3059 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30a4f6bf-5542-476f-b1af-837031a00c50-xtables-lock\") pod \"kindnet-ncgjv\" (UID: \"30a4f6bf-5542-476f-b1af-837031a00c50\") " pod="kube-system/kindnet-ncgjv"
	Mar 28 00:29:15 multinode-200224 kubelet[3059]: I0328 00:29:15.452713    3059 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30a4f6bf-5542-476f-b1af-837031a00c50-lib-modules\") pod \"kindnet-ncgjv\" (UID: \"30a4f6bf-5542-476f-b1af-837031a00c50\") " pod="kube-system/kindnet-ncgjv"
	Mar 28 00:29:15 multinode-200224 kubelet[3059]: I0328 00:29:15.452890    3059 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3bd5c912-2288-420d-a7a2-d73f2c34a5ed-tmp\") pod \"storage-provisioner\" (UID: \"3bd5c912-2288-420d-a7a2-d73f2c34a5ed\") " pod="kube-system/storage-provisioner"
	Mar 28 00:29:15 multinode-200224 kubelet[3059]: I0328 00:29:15.453096    3059 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae4313c7-a926-4da8-bfc2-7dae1422d958-lib-modules\") pod \"kube-proxy-p2g9p\" (UID: \"ae4313c7-a926-4da8-bfc2-7dae1422d958\") " pod="kube-system/kube-proxy-p2g9p"
	Mar 28 00:29:15 multinode-200224 kubelet[3059]: I0328 00:29:15.453233    3059 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/30a4f6bf-5542-476f-b1af-837031a00c50-cni-cfg\") pod \"kindnet-ncgjv\" (UID: \"30a4f6bf-5542-476f-b1af-837031a00c50\") " pod="kube-system/kindnet-ncgjv"
	Mar 28 00:29:15 multinode-200224 kubelet[3059]: I0328 00:29:15.453352    3059 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae4313c7-a926-4da8-bfc2-7dae1422d958-xtables-lock\") pod \"kube-proxy-p2g9p\" (UID: \"ae4313c7-a926-4da8-bfc2-7dae1422d958\") " pod="kube-system/kube-proxy-p2g9p"
	Mar 28 00:29:23 multinode-200224 kubelet[3059]: I0328 00:29:23.573314    3059 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Mar 28 00:30:10 multinode-200224 kubelet[3059]: E0328 00:30:10.388493    3059 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 00:30:10 multinode-200224 kubelet[3059]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 00:30:10 multinode-200224 kubelet[3059]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 00:30:10 multinode-200224 kubelet[3059]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 00:30:10 multinode-200224 kubelet[3059]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 00:30:10 multinode-200224 kubelet[3059]: E0328 00:30:10.417340    3059 manager.go:1116] Failed to create existing container: /kubepods/pod30a4f6bf-5542-476f-b1af-837031a00c50/crio-ea1011de4d4b6657be5634eb7131426dfaf71adda1e016d49136887bbc0d7a9a: Error finding container ea1011de4d4b6657be5634eb7131426dfaf71adda1e016d49136887bbc0d7a9a: Status 404 returned error can't find the container with id ea1011de4d4b6657be5634eb7131426dfaf71adda1e016d49136887bbc0d7a9a
	Mar 28 00:30:10 multinode-200224 kubelet[3059]: E0328 00:30:10.417578    3059 manager.go:1116] Failed to create existing container: /kubepods/burstable/poddb39c19d4710f792ba253c58204b3fd4/crio-fd28139122e292890b1d14a5668c359edc911ba1328ccd09054fbe9ee9099a59: Error finding container fd28139122e292890b1d14a5668c359edc911ba1328ccd09054fbe9ee9099a59: Status 404 returned error can't find the container with id fd28139122e292890b1d14a5668c359edc911ba1328ccd09054fbe9ee9099a59
	Mar 28 00:30:10 multinode-200224 kubelet[3059]: E0328 00:30:10.417670    3059 manager.go:1116] Failed to create existing container: /kubepods/besteffort/poda88eae05-06c4-4a76-9f77-af448b7c0704/crio-28401e2f6084f5204fe5c79b6fad4857507ed38493632dcd5793de7ddd053561: Error finding container 28401e2f6084f5204fe5c79b6fad4857507ed38493632dcd5793de7ddd053561: Status 404 returned error can't find the container with id 28401e2f6084f5204fe5c79b6fad4857507ed38493632dcd5793de7ddd053561
	Mar 28 00:30:10 multinode-200224 kubelet[3059]: E0328 00:30:10.418287    3059 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod7a0e7db0-a552-4642-825f-da6ee01e6121/crio-ee7f59bd3e83ff182dc2870085e8dde1ac0476f4ed5c212895a2b54a7b1d2145: Error finding container ee7f59bd3e83ff182dc2870085e8dde1ac0476f4ed5c212895a2b54a7b1d2145: Status 404 returned error can't find the container with id ee7f59bd3e83ff182dc2870085e8dde1ac0476f4ed5c212895a2b54a7b1d2145
	Mar 28 00:30:10 multinode-200224 kubelet[3059]: E0328 00:30:10.418658    3059 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod32db6766711b42ca248c705dc74d448e/crio-5fef10ef65dd3ba44f4c351e380d6247428dbf047e9cca78d17d3928a7e8f0e5: Error finding container 5fef10ef65dd3ba44f4c351e380d6247428dbf047e9cca78d17d3928a7e8f0e5: Status 404 returned error can't find the container with id 5fef10ef65dd3ba44f4c351e380d6247428dbf047e9cca78d17d3928a7e8f0e5
	Mar 28 00:30:10 multinode-200224 kubelet[3059]: E0328 00:30:10.419135    3059 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod439c378fd07501a8dead5aed861f13e7/crio-caae122c7bfc36f8d133510bf3080172efebf3bc11889137437f0b9c2716cf79: Error finding container caae122c7bfc36f8d133510bf3080172efebf3bc11889137437f0b9c2716cf79: Status 404 returned error can't find the container with id caae122c7bfc36f8d133510bf3080172efebf3bc11889137437f0b9c2716cf79
	Mar 28 00:30:10 multinode-200224 kubelet[3059]: E0328 00:30:10.419528    3059 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod3bd5c912-2288-420d-a7a2-d73f2c34a5ed/crio-ed4a7c41ab819bc8c36d7e2ab25a2a12997d2920f2f2fb3afdbb263c586ec4a3: Error finding container ed4a7c41ab819bc8c36d7e2ab25a2a12997d2920f2f2fb3afdbb263c586ec4a3: Status 404 returned error can't find the container with id ed4a7c41ab819bc8c36d7e2ab25a2a12997d2920f2f2fb3afdbb263c586ec4a3
	Mar 28 00:30:10 multinode-200224 kubelet[3059]: E0328 00:30:10.419920    3059 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podae4313c7-a926-4da8-bfc2-7dae1422d958/crio-8e1d52aa1272a7dffa0d0477292d339dd15d53962c6c3a485e7ba027258d0cd5: Error finding container 8e1d52aa1272a7dffa0d0477292d339dd15d53962c6c3a485e7ba027258d0cd5: Status 404 returned error can't find the container with id 8e1d52aa1272a7dffa0d0477292d339dd15d53962c6c3a485e7ba027258d0cd5
	Mar 28 00:30:10 multinode-200224 kubelet[3059]: E0328 00:30:10.420126    3059 manager.go:1116] Failed to create existing container: /kubepods/burstable/podc865a9c1ae98bee042cbdef78ac1661e/crio-3f002ba61607eb1d8647387d6277f7b6391e7e06d60cc4cc049fed995ef98eb4: Error finding container 3f002ba61607eb1d8647387d6277f7b6391e7e06d60cc4cc049fed995ef98eb4: Status 404 returned error can't find the container with id 3f002ba61607eb1d8647387d6277f7b6391e7e06d60cc4cc049fed995ef98eb4
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0328 00:30:39.366022 1104034 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18485-1069254/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-200224 -n multinode-200224
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-200224 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (313.01s)
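The "bufio.Scanner: token too long" error in the stderr above is Go's standard-library scanner hitting its default token limit (bufio.MaxScanTokenSize, 64 KiB) while minikube's logs.go reads lastStart.txt. The sketch below is illustrative only and is not part of the captured test output: it assumes a single line longer than the default limit and shows both the failure and the usual workaround via Scanner.Buffer.

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	func main() {
		// A single "line" longer than bufio.MaxScanTokenSize (64 KiB by default).
		long := strings.Repeat("x", bufio.MaxScanTokenSize+1)

		// Default scanner: Scan returns false and Err reports
		// "bufio.Scanner: token too long" (bufio.ErrTooLong).
		s := bufio.NewScanner(strings.NewReader(long))
		for s.Scan() {
		}
		fmt.Println("default buffer:", s.Err())

		// Raising the limit with Scanner.Buffer lets the same input scan cleanly.
		s = bufio.NewScanner(strings.NewReader(long))
		s.Buffer(make([]byte, 0, 64*1024), 10*1024*1024) // allow tokens up to 10 MiB
		for s.Scan() {
		}
		fmt.Println("larger buffer:", s.Err()) // <nil>
	}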

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 stop
E0328 00:31:14.355920 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
E0328 00:31:21.209189 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-200224 stop: exit status 82 (2m0.490441719s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-200224-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-200224 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-200224 status: exit status 3 (18.795429955s)

                                                
                                                
-- stdout --
	multinode-200224
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-200224-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0328 00:33:03.210588 1104579 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host
	E0328 00:33:03.210650 1104579 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-200224 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-200224 -n multinode-200224
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-200224 logs -n 25: (1.571466693s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| ssh     | multinode-200224 ssh -n                                                                 | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | multinode-200224-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-200224 cp multinode-200224-m02:/home/docker/cp-test.txt                       | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | multinode-200224:/home/docker/cp-test_multinode-200224-m02_multinode-200224.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-200224 ssh -n                                                                 | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | multinode-200224-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-200224 ssh -n multinode-200224 sudo cat                                       | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | /home/docker/cp-test_multinode-200224-m02_multinode-200224.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-200224 cp multinode-200224-m02:/home/docker/cp-test.txt                       | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | multinode-200224-m03:/home/docker/cp-test_multinode-200224-m02_multinode-200224-m03.txt |                  |         |                |                     |                     |
	| ssh     | multinode-200224 ssh -n                                                                 | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | multinode-200224-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-200224 ssh -n multinode-200224-m03 sudo cat                                   | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | /home/docker/cp-test_multinode-200224-m02_multinode-200224-m03.txt                      |                  |         |                |                     |                     |
	| cp      | multinode-200224 cp testdata/cp-test.txt                                                | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | multinode-200224-m03:/home/docker/cp-test.txt                                           |                  |         |                |                     |                     |
	| ssh     | multinode-200224 ssh -n                                                                 | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | multinode-200224-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-200224 cp multinode-200224-m03:/home/docker/cp-test.txt                       | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3842904601/001/cp-test_multinode-200224-m03.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-200224 ssh -n                                                                 | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | multinode-200224-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-200224 cp multinode-200224-m03:/home/docker/cp-test.txt                       | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | multinode-200224:/home/docker/cp-test_multinode-200224-m03_multinode-200224.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-200224 ssh -n                                                                 | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | multinode-200224-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-200224 ssh -n multinode-200224 sudo cat                                       | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | /home/docker/cp-test_multinode-200224-m03_multinode-200224.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-200224 cp multinode-200224-m03:/home/docker/cp-test.txt                       | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | multinode-200224-m02:/home/docker/cp-test_multinode-200224-m03_multinode-200224-m02.txt |                  |         |                |                     |                     |
	| ssh     | multinode-200224 ssh -n                                                                 | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | multinode-200224-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-200224 ssh -n multinode-200224-m02 sudo cat                                   | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | /home/docker/cp-test_multinode-200224-m03_multinode-200224-m02.txt                      |                  |         |                |                     |                     |
	| node    | multinode-200224 node stop m03                                                          | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	| node    | multinode-200224 node start                                                             | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:25 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |                |                     |                     |
	| node    | list -p multinode-200224                                                                | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:25 UTC |                     |
	| stop    | -p multinode-200224                                                                     | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:25 UTC |                     |
	| start   | -p multinode-200224                                                                     | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:27 UTC | 28 Mar 24 00:30 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |                |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |                |                     |                     |
	| node    | list -p multinode-200224                                                                | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:30 UTC |                     |
	| node    | multinode-200224 node delete                                                            | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:30 UTC | 28 Mar 24 00:30 UTC |
	|         | m03                                                                                     |                  |         |                |                     |                     |
	| stop    | multinode-200224 stop                                                                   | multinode-200224 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:30 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 00:27:31
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
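The header above is the standard glog/klog preamble; reading the first entry of the log against that format makes the fields concrete:

	I0328 00:27:31.227138 1103152 out.go:291] Setting OutFile to fd 1 ...
	# I               -> severity (I=info, W=warning, E=error, F=fatal)
	# 0328            -> mmdd (March 28)
	# 00:27:31.227138 -> hh:mm:ss.uuuuuu wall-clock time
	# 1103152         -> threadid (the minikube process, constant for this run)
	# out.go:291      -> source file:line that emitted the entry
	# remainder       -> msg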
	I0328 00:27:31.227138 1103152 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:27:31.227256 1103152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:27:31.227261 1103152 out.go:304] Setting ErrFile to fd 2...
	I0328 00:27:31.227265 1103152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:27:31.227461 1103152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 00:27:31.228032 1103152 out.go:298] Setting JSON to false
	I0328 00:27:31.228992 1103152 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":29348,"bootTime":1711556303,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0328 00:27:31.229069 1103152 start.go:139] virtualization: kvm guest
	I0328 00:27:31.231774 1103152 out.go:177] * [multinode-200224] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0328 00:27:31.233786 1103152 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 00:27:31.233698 1103152 notify.go:220] Checking for updates...
	I0328 00:27:31.235359 1103152 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 00:27:31.237068 1103152 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 00:27:31.238511 1103152 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	I0328 00:27:31.239903 1103152 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0328 00:27:31.241319 1103152 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 00:27:31.242985 1103152 config.go:182] Loaded profile config "multinode-200224": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:27:31.243080 1103152 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 00:27:31.243525 1103152 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:27:31.243574 1103152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:27:31.259623 1103152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41745
	I0328 00:27:31.260093 1103152 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:27:31.260692 1103152 main.go:141] libmachine: Using API Version  1
	I0328 00:27:31.260717 1103152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:27:31.261162 1103152 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:27:31.261420 1103152 main.go:141] libmachine: (multinode-200224) Calling .DriverName
	I0328 00:27:31.298792 1103152 out.go:177] * Using the kvm2 driver based on existing profile
	I0328 00:27:31.300080 1103152 start.go:297] selected driver: kvm2
	I0328 00:27:31.300090 1103152 start.go:901] validating driver "kvm2" against &{Name:multinode-200224 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.29.3 ClusterName:multinode-200224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.88 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.22 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:27:31.300217 1103152 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 00:27:31.300537 1103152 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 00:27:31.300618 1103152 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18485-1069254/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0328 00:27:31.316370 1103152 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0328 00:27:31.317198 1103152 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 00:27:31.317266 1103152 cni.go:84] Creating CNI manager for ""
	I0328 00:27:31.317281 1103152 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0328 00:27:31.317344 1103152 start.go:340] cluster config:
	{Name:multinode-200224 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-200224 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.88 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.22 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:27:31.317469 1103152 iso.go:125] acquiring lock: {Name:mk3da1fa7d63a581c817f327d86a827697458cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 00:27:31.319275 1103152 out.go:177] * Starting "multinode-200224" primary control-plane node in "multinode-200224" cluster
	I0328 00:27:31.320721 1103152 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 00:27:31.320770 1103152 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0328 00:27:31.320784 1103152 cache.go:56] Caching tarball of preloaded images
	I0328 00:27:31.320872 1103152 preload.go:173] Found /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0328 00:27:31.320883 1103152 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0328 00:27:31.320993 1103152 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/multinode-200224/config.json ...
	I0328 00:27:31.321180 1103152 start.go:360] acquireMachinesLock for multinode-200224: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 00:27:31.321220 1103152 start.go:364] duration metric: took 20.939µs to acquireMachinesLock for "multinode-200224"
	I0328 00:27:31.321234 1103152 start.go:96] Skipping create...Using existing machine configuration
	I0328 00:27:31.321243 1103152 fix.go:54] fixHost starting: 
	I0328 00:27:31.321499 1103152 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:27:31.321535 1103152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:27:31.336360 1103152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41547
	I0328 00:27:31.336887 1103152 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:27:31.337332 1103152 main.go:141] libmachine: Using API Version  1
	I0328 00:27:31.337355 1103152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:27:31.337693 1103152 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:27:31.337877 1103152 main.go:141] libmachine: (multinode-200224) Calling .DriverName
	I0328 00:27:31.338040 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetState
	I0328 00:27:31.339784 1103152 fix.go:112] recreateIfNeeded on multinode-200224: state=Running err=<nil>
	W0328 00:27:31.339803 1103152 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 00:27:31.344173 1103152 out.go:177] * Updating the running kvm2 "multinode-200224" VM ...
	I0328 00:27:31.347217 1103152 machine.go:94] provisionDockerMachine start ...
	I0328 00:27:31.347246 1103152 main.go:141] libmachine: (multinode-200224) Calling .DriverName
	I0328 00:27:31.347510 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHHostname
	I0328 00:27:31.350493 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:27:31.351023 1103152 main.go:141] libmachine: (multinode-200224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:1a", ip: ""} in network mk-multinode-200224: {Iface:virbr1 ExpiryTime:2024-03-28 01:22:32 +0000 UTC Type:0 Mac:52:54:00:36:d0:1a Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-200224 Clientid:01:52:54:00:36:d0:1a}
	I0328 00:27:31.351056 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined IP address 192.168.39.88 and MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:27:31.351220 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHPort
	I0328 00:27:31.351494 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHKeyPath
	I0328 00:27:31.351693 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHKeyPath
	I0328 00:27:31.351862 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHUsername
	I0328 00:27:31.352020 1103152 main.go:141] libmachine: Using SSH client type: native
	I0328 00:27:31.352222 1103152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0328 00:27:31.352235 1103152 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 00:27:31.474689 1103152 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-200224
	
	I0328 00:27:31.474730 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetMachineName
	I0328 00:27:31.474979 1103152 buildroot.go:166] provisioning hostname "multinode-200224"
	I0328 00:27:31.475016 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetMachineName
	I0328 00:27:31.475235 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHHostname
	I0328 00:27:31.477936 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:27:31.478397 1103152 main.go:141] libmachine: (multinode-200224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:1a", ip: ""} in network mk-multinode-200224: {Iface:virbr1 ExpiryTime:2024-03-28 01:22:32 +0000 UTC Type:0 Mac:52:54:00:36:d0:1a Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-200224 Clientid:01:52:54:00:36:d0:1a}
	I0328 00:27:31.478430 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined IP address 192.168.39.88 and MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:27:31.478657 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHPort
	I0328 00:27:31.478898 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHKeyPath
	I0328 00:27:31.479058 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHKeyPath
	I0328 00:27:31.479190 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHUsername
	I0328 00:27:31.479382 1103152 main.go:141] libmachine: Using SSH client type: native
	I0328 00:27:31.479554 1103152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0328 00:27:31.479567 1103152 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-200224 && echo "multinode-200224" | sudo tee /etc/hostname
	I0328 00:27:31.615145 1103152 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-200224
	
	I0328 00:27:31.615175 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHHostname
	I0328 00:27:31.617879 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:27:31.618254 1103152 main.go:141] libmachine: (multinode-200224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:1a", ip: ""} in network mk-multinode-200224: {Iface:virbr1 ExpiryTime:2024-03-28 01:22:32 +0000 UTC Type:0 Mac:52:54:00:36:d0:1a Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-200224 Clientid:01:52:54:00:36:d0:1a}
	I0328 00:27:31.618294 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined IP address 192.168.39.88 and MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:27:31.618531 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHPort
	I0328 00:27:31.618752 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHKeyPath
	I0328 00:27:31.618915 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHKeyPath
	I0328 00:27:31.619049 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHUsername
	I0328 00:27:31.619277 1103152 main.go:141] libmachine: Using SSH client type: native
	I0328 00:27:31.619442 1103152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0328 00:27:31.619458 1103152 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-200224' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-200224/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-200224' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 00:27:31.727376 1103152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 00:27:31.727405 1103152 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 00:27:31.727427 1103152 buildroot.go:174] setting up certificates
	I0328 00:27:31.727437 1103152 provision.go:84] configureAuth start
	I0328 00:27:31.727446 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetMachineName
	I0328 00:27:31.727758 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetIP
	I0328 00:27:31.730502 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:27:31.730866 1103152 main.go:141] libmachine: (multinode-200224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:1a", ip: ""} in network mk-multinode-200224: {Iface:virbr1 ExpiryTime:2024-03-28 01:22:32 +0000 UTC Type:0 Mac:52:54:00:36:d0:1a Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-200224 Clientid:01:52:54:00:36:d0:1a}
	I0328 00:27:31.730893 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined IP address 192.168.39.88 and MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:27:31.731061 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHHostname
	I0328 00:27:31.733533 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:27:31.733880 1103152 main.go:141] libmachine: (multinode-200224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:1a", ip: ""} in network mk-multinode-200224: {Iface:virbr1 ExpiryTime:2024-03-28 01:22:32 +0000 UTC Type:0 Mac:52:54:00:36:d0:1a Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-200224 Clientid:01:52:54:00:36:d0:1a}
	I0328 00:27:31.733917 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined IP address 192.168.39.88 and MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:27:31.734177 1103152 provision.go:143] copyHostCerts
	I0328 00:27:31.734211 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 00:27:31.734274 1103152 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 00:27:31.734287 1103152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 00:27:31.734363 1103152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 00:27:31.734442 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 00:27:31.734460 1103152 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 00:27:31.734466 1103152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 00:27:31.734491 1103152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 00:27:31.734531 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 00:27:31.734547 1103152 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 00:27:31.734554 1103152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 00:27:31.734573 1103152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 00:27:31.734619 1103152 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.multinode-200224 san=[127.0.0.1 192.168.39.88 localhost minikube multinode-200224]
	I0328 00:27:31.891932 1103152 provision.go:177] copyRemoteCerts
	I0328 00:27:31.891997 1103152 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 00:27:31.892024 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHHostname
	I0328 00:27:31.895385 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:27:31.895775 1103152 main.go:141] libmachine: (multinode-200224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:1a", ip: ""} in network mk-multinode-200224: {Iface:virbr1 ExpiryTime:2024-03-28 01:22:32 +0000 UTC Type:0 Mac:52:54:00:36:d0:1a Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-200224 Clientid:01:52:54:00:36:d0:1a}
	I0328 00:27:31.895810 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined IP address 192.168.39.88 and MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:27:31.895998 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHPort
	I0328 00:27:31.896234 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHKeyPath
	I0328 00:27:31.896441 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHUsername
	I0328 00:27:31.896651 1103152 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/multinode-200224/id_rsa Username:docker}
	I0328 00:27:31.988327 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0328 00:27:31.988416 1103152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 00:27:32.024701 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0328 00:27:32.024790 1103152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0328 00:27:32.053325 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0328 00:27:32.053413 1103152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0328 00:27:32.082138 1103152 provision.go:87] duration metric: took 354.687044ms to configureAuth
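configureAuth regenerated the server certificate with the SANs listed earlier (127.0.0.1, 192.168.39.88, localhost, minikube, multinode-200224) and pushed it to /etc/docker on the guest. A hypothetical spot-check, not part of the minikube run, to confirm the copied cert carries those SANs:

	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'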
	I0328 00:27:32.082167 1103152 buildroot.go:189] setting minikube options for container-runtime
	I0328 00:27:32.082434 1103152 config.go:182] Loaded profile config "multinode-200224": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:27:32.082525 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHHostname
	I0328 00:27:32.085264 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:27:32.085657 1103152 main.go:141] libmachine: (multinode-200224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:1a", ip: ""} in network mk-multinode-200224: {Iface:virbr1 ExpiryTime:2024-03-28 01:22:32 +0000 UTC Type:0 Mac:52:54:00:36:d0:1a Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-200224 Clientid:01:52:54:00:36:d0:1a}
	I0328 00:27:32.085681 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined IP address 192.168.39.88 and MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:27:32.085866 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHPort
	I0328 00:27:32.086108 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHKeyPath
	I0328 00:27:32.086285 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHKeyPath
	I0328 00:27:32.086449 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHUsername
	I0328 00:27:32.086628 1103152 main.go:141] libmachine: Using SSH client type: native
	I0328 00:27:32.086832 1103152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0328 00:27:32.086849 1103152 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 00:29:02.848182 1103152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 00:29:02.848216 1103152 machine.go:97] duration metric: took 1m31.50097817s to provisionDockerMachine
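The %!s(MISSING) in the logged command above is Go's fmt package reporting a missing argument when the command was written to the log, not part of the command itself; the shell that actually ran substitutes the options echoed back in the output. A sketch of the equivalent command, assuming the CRIO_MINIKUBE_OPTIONS value shown above:

	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio

The jump in the surrounding timestamps from 00:27:32 to 00:29:02 suggests nearly all of the 1m31.5s provisioning time was spent waiting on this crio restart.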
	I0328 00:29:02.848230 1103152 start.go:293] postStartSetup for "multinode-200224" (driver="kvm2")
	I0328 00:29:02.848245 1103152 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 00:29:02.848268 1103152 main.go:141] libmachine: (multinode-200224) Calling .DriverName
	I0328 00:29:02.848735 1103152 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 00:29:02.848768 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHHostname
	I0328 00:29:02.852447 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:29:02.853127 1103152 main.go:141] libmachine: (multinode-200224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:1a", ip: ""} in network mk-multinode-200224: {Iface:virbr1 ExpiryTime:2024-03-28 01:22:32 +0000 UTC Type:0 Mac:52:54:00:36:d0:1a Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-200224 Clientid:01:52:54:00:36:d0:1a}
	I0328 00:29:02.853158 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined IP address 192.168.39.88 and MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:29:02.853319 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHPort
	I0328 00:29:02.853551 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHKeyPath
	I0328 00:29:02.853743 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHUsername
	I0328 00:29:02.853917 1103152 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/multinode-200224/id_rsa Username:docker}
	I0328 00:29:02.943195 1103152 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 00:29:02.947939 1103152 command_runner.go:130] > NAME=Buildroot
	I0328 00:29:02.947967 1103152 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0328 00:29:02.947974 1103152 command_runner.go:130] > ID=buildroot
	I0328 00:29:02.947981 1103152 command_runner.go:130] > VERSION_ID=2023.02.9
	I0328 00:29:02.947989 1103152 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0328 00:29:02.948043 1103152 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 00:29:02.948068 1103152 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 00:29:02.948152 1103152 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 00:29:02.948252 1103152 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 00:29:02.948265 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> /etc/ssl/certs/10765222.pem
	I0328 00:29:02.948368 1103152 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 00:29:02.959641 1103152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 00:29:02.986509 1103152 start.go:296] duration metric: took 138.26107ms for postStartSetup
	I0328 00:29:02.986573 1103152 fix.go:56] duration metric: took 1m31.665330616s for fixHost
	I0328 00:29:02.986598 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHHostname
	I0328 00:29:02.989818 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:29:02.990307 1103152 main.go:141] libmachine: (multinode-200224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:1a", ip: ""} in network mk-multinode-200224: {Iface:virbr1 ExpiryTime:2024-03-28 01:22:32 +0000 UTC Type:0 Mac:52:54:00:36:d0:1a Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-200224 Clientid:01:52:54:00:36:d0:1a}
	I0328 00:29:02.990336 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined IP address 192.168.39.88 and MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:29:02.990590 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHPort
	I0328 00:29:02.990826 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHKeyPath
	I0328 00:29:02.991015 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHKeyPath
	I0328 00:29:02.991179 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHUsername
	I0328 00:29:02.991374 1103152 main.go:141] libmachine: Using SSH client type: native
	I0328 00:29:02.991551 1103152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0328 00:29:02.991562 1103152 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 00:29:03.099724 1103152 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711585743.073157601
	
	I0328 00:29:03.099750 1103152 fix.go:216] guest clock: 1711585743.073157601
	I0328 00:29:03.099757 1103152 fix.go:229] Guest: 2024-03-28 00:29:03.073157601 +0000 UTC Remote: 2024-03-28 00:29:02.986578288 +0000 UTC m=+91.812055347 (delta=86.579313ms)
	I0328 00:29:03.099805 1103152 fix.go:200] guest clock delta is within tolerance: 86.579313ms
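The %!s(MISSING).%!N(MISSING) in the logged probe above is the same fmt artifact; the command run on the guest is a plain seconds.nanoseconds timestamp, compared against the host-side reference captured just before. A sketch, assuming GNU coreutils date:

	date +%s.%N   # guest reply: 1711585743.073157601, i.e. 2024-03-28 00:29:03.073 UTC
	# delta = guest 00:29:03.073157601 - remote 00:29:02.986578288 ≈ 86.58ms, inside the skew tolerance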
	I0328 00:29:03.099811 1103152 start.go:83] releasing machines lock for "multinode-200224", held for 1m31.778582087s
	I0328 00:29:03.099839 1103152 main.go:141] libmachine: (multinode-200224) Calling .DriverName
	I0328 00:29:03.100186 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetIP
	I0328 00:29:03.102829 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:29:03.103316 1103152 main.go:141] libmachine: (multinode-200224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:1a", ip: ""} in network mk-multinode-200224: {Iface:virbr1 ExpiryTime:2024-03-28 01:22:32 +0000 UTC Type:0 Mac:52:54:00:36:d0:1a Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-200224 Clientid:01:52:54:00:36:d0:1a}
	I0328 00:29:03.103349 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined IP address 192.168.39.88 and MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:29:03.103507 1103152 main.go:141] libmachine: (multinode-200224) Calling .DriverName
	I0328 00:29:03.104150 1103152 main.go:141] libmachine: (multinode-200224) Calling .DriverName
	I0328 00:29:03.104350 1103152 main.go:141] libmachine: (multinode-200224) Calling .DriverName
	I0328 00:29:03.104451 1103152 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 00:29:03.104501 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHHostname
	I0328 00:29:03.104631 1103152 ssh_runner.go:195] Run: cat /version.json
	I0328 00:29:03.104665 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHHostname
	I0328 00:29:03.107497 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:29:03.107878 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:29:03.107962 1103152 main.go:141] libmachine: (multinode-200224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:1a", ip: ""} in network mk-multinode-200224: {Iface:virbr1 ExpiryTime:2024-03-28 01:22:32 +0000 UTC Type:0 Mac:52:54:00:36:d0:1a Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-200224 Clientid:01:52:54:00:36:d0:1a}
	I0328 00:29:03.107988 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined IP address 192.168.39.88 and MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:29:03.108169 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHPort
	I0328 00:29:03.108282 1103152 main.go:141] libmachine: (multinode-200224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:1a", ip: ""} in network mk-multinode-200224: {Iface:virbr1 ExpiryTime:2024-03-28 01:22:32 +0000 UTC Type:0 Mac:52:54:00:36:d0:1a Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-200224 Clientid:01:52:54:00:36:d0:1a}
	I0328 00:29:03.108314 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined IP address 192.168.39.88 and MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:29:03.108537 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHPort
	I0328 00:29:03.108556 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHKeyPath
	I0328 00:29:03.108752 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHKeyPath
	I0328 00:29:03.108771 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHUsername
	I0328 00:29:03.108936 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetSSHUsername
	I0328 00:29:03.108952 1103152 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/multinode-200224/id_rsa Username:docker}
	I0328 00:29:03.109144 1103152 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/multinode-200224/id_rsa Username:docker}
	I0328 00:29:03.187854 1103152 command_runner.go:130] > {"iso_version": "v1.33.0-1711559712-18485", "kicbase_version": "v0.0.43-beta.0", "minikube_version": "v1.33.0-beta.0", "commit": "db97f5257476488cfa11a4cd2d95d2aa6fbd9d33"}
	I0328 00:29:03.188255 1103152 ssh_runner.go:195] Run: systemctl --version
	I0328 00:29:03.228744 1103152 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0328 00:29:03.229526 1103152 command_runner.go:130] > systemd 252 (252)
	I0328 00:29:03.229562 1103152 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0328 00:29:03.229637 1103152 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 00:29:03.388301 1103152 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0328 00:29:03.397669 1103152 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0328 00:29:03.397788 1103152 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 00:29:03.397859 1103152 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 00:29:03.408781 1103152 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
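The %!p(MISSING) above is find's %p path placeholder mangled by the logger; the step renames any bridge/podman CNI configs out of the way so the kindnet CNI can own pod networking. A plausible reconstruction of the command as it runs on the guest (parentheses escaped and glob patterns quoted here so it is copy-pasteable):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;

In this run it found nothing to disable, so the pod network configuration was already in the expected state.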
	I0328 00:29:03.408814 1103152 start.go:494] detecting cgroup driver to use...
	I0328 00:29:03.408892 1103152 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 00:29:03.427340 1103152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 00:29:03.442715 1103152 docker.go:217] disabling cri-docker service (if available) ...
	I0328 00:29:03.442798 1103152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 00:29:03.458411 1103152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 00:29:03.473847 1103152 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 00:29:03.622296 1103152 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 00:29:03.765541 1103152 docker.go:233] disabling docker service ...
	I0328 00:29:03.765655 1103152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 00:29:03.785079 1103152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 00:29:03.800720 1103152 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 00:29:03.945380 1103152 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 00:29:04.090196 1103152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 00:29:04.106512 1103152 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 00:29:04.126782 1103152 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
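Same logging artifact here: with the %s restored (and minus the /bin/bash -c wrapper), the command writes the crictl endpoint file whose one-line contents are echoed back above, pointing crictl at the CRI-O socket:

	sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml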
	I0328 00:29:04.126864 1103152 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 00:29:04.126917 1103152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:29:04.138539 1103152 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 00:29:04.138626 1103152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:29:04.150071 1103152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:29:04.161478 1103152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:29:04.173182 1103152 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 00:29:04.184544 1103152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:29:04.196238 1103152 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:29:04.207833 1103152 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
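Taken together, the sed edits above leave a CRI-O drop-in that pins the pause image, switches to the cgroupfs cgroup manager with conmon placed in the pod cgroup, and opens low ports to unprivileged pods. A sketch of the resulting fragment of /etc/crio/crio.conf.d/02-crio.conf, assuming the stock section layout shipped in the minikube ISO (the surrounding keys may differ):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]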
	I0328 00:29:04.228041 1103152 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 00:29:04.254639 1103152 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0328 00:29:04.254743 1103152 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 00:29:04.275161 1103152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:29:04.418353 1103152 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 00:29:07.820833 1103152 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.402429511s)
	I0328 00:29:07.820872 1103152 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 00:29:07.820934 1103152 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 00:29:07.826286 1103152 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0328 00:29:07.826312 1103152 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0328 00:29:07.826319 1103152 command_runner.go:130] > Device: 0,22	Inode: 1336        Links: 1
	I0328 00:29:07.826326 1103152 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0328 00:29:07.826330 1103152 command_runner.go:130] > Access: 2024-03-28 00:29:07.679370955 +0000
	I0328 00:29:07.826339 1103152 command_runner.go:130] > Modify: 2024-03-28 00:29:07.679370955 +0000
	I0328 00:29:07.826347 1103152 command_runner.go:130] > Change: 2024-03-28 00:29:07.679370955 +0000
	I0328 00:29:07.826353 1103152 command_runner.go:130] >  Birth: -
	I0328 00:29:07.826378 1103152 start.go:562] Will wait 60s for crictl version
	I0328 00:29:07.826443 1103152 ssh_runner.go:195] Run: which crictl
	I0328 00:29:07.830790 1103152 command_runner.go:130] > /usr/bin/crictl
	I0328 00:29:07.831153 1103152 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 00:29:07.869588 1103152 command_runner.go:130] > Version:  0.1.0
	I0328 00:29:07.869617 1103152 command_runner.go:130] > RuntimeName:  cri-o
	I0328 00:29:07.870761 1103152 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0328 00:29:07.870798 1103152 command_runner.go:130] > RuntimeApiVersion:  v1
	I0328 00:29:07.872175 1103152 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 00:29:07.872255 1103152 ssh_runner.go:195] Run: crio --version
	I0328 00:29:07.902965 1103152 command_runner.go:130] > crio version 1.29.1
	I0328 00:29:07.902997 1103152 command_runner.go:130] > Version:        1.29.1
	I0328 00:29:07.903007 1103152 command_runner.go:130] > GitCommit:      unknown
	I0328 00:29:07.903014 1103152 command_runner.go:130] > GitCommitDate:  unknown
	I0328 00:29:07.903021 1103152 command_runner.go:130] > GitTreeState:   clean
	I0328 00:29:07.903029 1103152 command_runner.go:130] > BuildDate:      2024-03-27T22:46:22Z
	I0328 00:29:07.903037 1103152 command_runner.go:130] > GoVersion:      go1.21.6
	I0328 00:29:07.903041 1103152 command_runner.go:130] > Compiler:       gc
	I0328 00:29:07.903045 1103152 command_runner.go:130] > Platform:       linux/amd64
	I0328 00:29:07.903049 1103152 command_runner.go:130] > Linkmode:       dynamic
	I0328 00:29:07.903055 1103152 command_runner.go:130] > BuildTags:      
	I0328 00:29:07.903059 1103152 command_runner.go:130] >   containers_image_ostree_stub
	I0328 00:29:07.903066 1103152 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0328 00:29:07.903070 1103152 command_runner.go:130] >   btrfs_noversion
	I0328 00:29:07.903075 1103152 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0328 00:29:07.903079 1103152 command_runner.go:130] >   libdm_no_deferred_remove
	I0328 00:29:07.903087 1103152 command_runner.go:130] >   seccomp
	I0328 00:29:07.903091 1103152 command_runner.go:130] > LDFlags:          unknown
	I0328 00:29:07.903095 1103152 command_runner.go:130] > SeccompEnabled:   true
	I0328 00:29:07.903099 1103152 command_runner.go:130] > AppArmorEnabled:  false
	I0328 00:29:07.903258 1103152 ssh_runner.go:195] Run: crio --version
	I0328 00:29:07.931193 1103152 command_runner.go:130] > crio version 1.29.1
	I0328 00:29:07.931218 1103152 command_runner.go:130] > Version:        1.29.1
	I0328 00:29:07.931224 1103152 command_runner.go:130] > GitCommit:      unknown
	I0328 00:29:07.931228 1103152 command_runner.go:130] > GitCommitDate:  unknown
	I0328 00:29:07.931231 1103152 command_runner.go:130] > GitTreeState:   clean
	I0328 00:29:07.931237 1103152 command_runner.go:130] > BuildDate:      2024-03-27T22:46:22Z
	I0328 00:29:07.931242 1103152 command_runner.go:130] > GoVersion:      go1.21.6
	I0328 00:29:07.931245 1103152 command_runner.go:130] > Compiler:       gc
	I0328 00:29:07.931250 1103152 command_runner.go:130] > Platform:       linux/amd64
	I0328 00:29:07.931254 1103152 command_runner.go:130] > Linkmode:       dynamic
	I0328 00:29:07.931259 1103152 command_runner.go:130] > BuildTags:      
	I0328 00:29:07.931263 1103152 command_runner.go:130] >   containers_image_ostree_stub
	I0328 00:29:07.931268 1103152 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0328 00:29:07.931272 1103152 command_runner.go:130] >   btrfs_noversion
	I0328 00:29:07.931276 1103152 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0328 00:29:07.931282 1103152 command_runner.go:130] >   libdm_no_deferred_remove
	I0328 00:29:07.931286 1103152 command_runner.go:130] >   seccomp
	I0328 00:29:07.931292 1103152 command_runner.go:130] > LDFlags:          unknown
	I0328 00:29:07.931296 1103152 command_runner.go:130] > SeccompEnabled:   true
	I0328 00:29:07.931307 1103152 command_runner.go:130] > AppArmorEnabled:  false
	I0328 00:29:07.934944 1103152 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0328 00:29:07.936316 1103152 main.go:141] libmachine: (multinode-200224) Calling .GetIP
	I0328 00:29:07.939195 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:29:07.939544 1103152 main.go:141] libmachine: (multinode-200224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:1a", ip: ""} in network mk-multinode-200224: {Iface:virbr1 ExpiryTime:2024-03-28 01:22:32 +0000 UTC Type:0 Mac:52:54:00:36:d0:1a Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-200224 Clientid:01:52:54:00:36:d0:1a}
	I0328 00:29:07.939574 1103152 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined IP address 192.168.39.88 and MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:29:07.939760 1103152 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0328 00:29:07.944303 1103152 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0328 00:29:07.944403 1103152 kubeadm.go:877] updating cluster {Name:multinode-200224 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
29.3 ClusterName:multinode-200224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.88 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.22 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fals
e inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 00:29:07.944550 1103152 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 00:29:07.944610 1103152 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 00:29:08.003857 1103152 command_runner.go:130] > {
	I0328 00:29:08.003889 1103152 command_runner.go:130] >   "images": [
	I0328 00:29:08.003895 1103152 command_runner.go:130] >     {
	I0328 00:29:08.003906 1103152 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0328 00:29:08.003913 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.003921 1103152 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0328 00:29:08.003926 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.003932 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.003944 1103152 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0328 00:29:08.003955 1103152 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0328 00:29:08.003965 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.003973 1103152 command_runner.go:130] >       "size": "65291810",
	I0328 00:29:08.003980 1103152 command_runner.go:130] >       "uid": null,
	I0328 00:29:08.003988 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.004002 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.004016 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.004023 1103152 command_runner.go:130] >     },
	I0328 00:29:08.004030 1103152 command_runner.go:130] >     {
	I0328 00:29:08.004040 1103152 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0328 00:29:08.004050 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.004060 1103152 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0328 00:29:08.004069 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.004077 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.004091 1103152 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0328 00:29:08.004104 1103152 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0328 00:29:08.004111 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.004118 1103152 command_runner.go:130] >       "size": "1363676",
	I0328 00:29:08.004126 1103152 command_runner.go:130] >       "uid": null,
	I0328 00:29:08.004136 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.004145 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.004152 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.004161 1103152 command_runner.go:130] >     },
	I0328 00:29:08.004168 1103152 command_runner.go:130] >     {
	I0328 00:29:08.004181 1103152 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0328 00:29:08.004197 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.004209 1103152 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0328 00:29:08.004218 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.004225 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.004241 1103152 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0328 00:29:08.004257 1103152 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0328 00:29:08.004266 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.004274 1103152 command_runner.go:130] >       "size": "31470524",
	I0328 00:29:08.004283 1103152 command_runner.go:130] >       "uid": null,
	I0328 00:29:08.004291 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.004301 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.004309 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.004319 1103152 command_runner.go:130] >     },
	I0328 00:29:08.004324 1103152 command_runner.go:130] >     {
	I0328 00:29:08.004338 1103152 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0328 00:29:08.004349 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.004361 1103152 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0328 00:29:08.004370 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.004380 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.004395 1103152 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0328 00:29:08.004415 1103152 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0328 00:29:08.004424 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.004431 1103152 command_runner.go:130] >       "size": "61245718",
	I0328 00:29:08.004438 1103152 command_runner.go:130] >       "uid": null,
	I0328 00:29:08.004445 1103152 command_runner.go:130] >       "username": "nonroot",
	I0328 00:29:08.004453 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.004461 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.004469 1103152 command_runner.go:130] >     },
	I0328 00:29:08.004475 1103152 command_runner.go:130] >     {
	I0328 00:29:08.004489 1103152 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0328 00:29:08.004498 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.004508 1103152 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0328 00:29:08.004516 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.004524 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.004539 1103152 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0328 00:29:08.004554 1103152 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0328 00:29:08.004567 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.004575 1103152 command_runner.go:130] >       "size": "150779692",
	I0328 00:29:08.004584 1103152 command_runner.go:130] >       "uid": {
	I0328 00:29:08.004591 1103152 command_runner.go:130] >         "value": "0"
	I0328 00:29:08.004600 1103152 command_runner.go:130] >       },
	I0328 00:29:08.004607 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.004615 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.004622 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.004631 1103152 command_runner.go:130] >     },
	I0328 00:29:08.004637 1103152 command_runner.go:130] >     {
	I0328 00:29:08.004650 1103152 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0328 00:29:08.004660 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.004669 1103152 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0328 00:29:08.004678 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.004685 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.004697 1103152 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0328 00:29:08.004713 1103152 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0328 00:29:08.004723 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.004733 1103152 command_runner.go:130] >       "size": "128508878",
	I0328 00:29:08.004743 1103152 command_runner.go:130] >       "uid": {
	I0328 00:29:08.004751 1103152 command_runner.go:130] >         "value": "0"
	I0328 00:29:08.004758 1103152 command_runner.go:130] >       },
	I0328 00:29:08.004768 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.004776 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.004784 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.004791 1103152 command_runner.go:130] >     },
	I0328 00:29:08.004798 1103152 command_runner.go:130] >     {
	I0328 00:29:08.004812 1103152 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0328 00:29:08.004821 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.004831 1103152 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0328 00:29:08.004840 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.004847 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.004863 1103152 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0328 00:29:08.004880 1103152 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0328 00:29:08.004888 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.004895 1103152 command_runner.go:130] >       "size": "123142962",
	I0328 00:29:08.004904 1103152 command_runner.go:130] >       "uid": {
	I0328 00:29:08.004911 1103152 command_runner.go:130] >         "value": "0"
	I0328 00:29:08.004919 1103152 command_runner.go:130] >       },
	I0328 00:29:08.004926 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.004935 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.004943 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.004952 1103152 command_runner.go:130] >     },
	I0328 00:29:08.004958 1103152 command_runner.go:130] >     {
	I0328 00:29:08.004969 1103152 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0328 00:29:08.004978 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.004987 1103152 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0328 00:29:08.004996 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.005006 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.005029 1103152 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0328 00:29:08.005045 1103152 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0328 00:29:08.005054 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.005062 1103152 command_runner.go:130] >       "size": "83634073",
	I0328 00:29:08.005073 1103152 command_runner.go:130] >       "uid": null,
	I0328 00:29:08.005079 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.005084 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.005090 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.005094 1103152 command_runner.go:130] >     },
	I0328 00:29:08.005102 1103152 command_runner.go:130] >     {
	I0328 00:29:08.005111 1103152 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0328 00:29:08.005119 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.005127 1103152 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0328 00:29:08.005134 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.005144 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.005167 1103152 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0328 00:29:08.005184 1103152 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0328 00:29:08.005201 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.005208 1103152 command_runner.go:130] >       "size": "60724018",
	I0328 00:29:08.005218 1103152 command_runner.go:130] >       "uid": {
	I0328 00:29:08.005226 1103152 command_runner.go:130] >         "value": "0"
	I0328 00:29:08.005235 1103152 command_runner.go:130] >       },
	I0328 00:29:08.005244 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.005254 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.005262 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.005268 1103152 command_runner.go:130] >     },
	I0328 00:29:08.005275 1103152 command_runner.go:130] >     {
	I0328 00:29:08.005286 1103152 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0328 00:29:08.005296 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.005305 1103152 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0328 00:29:08.005320 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.005334 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.005349 1103152 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0328 00:29:08.005364 1103152 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0328 00:29:08.005371 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.005381 1103152 command_runner.go:130] >       "size": "750414",
	I0328 00:29:08.005388 1103152 command_runner.go:130] >       "uid": {
	I0328 00:29:08.005399 1103152 command_runner.go:130] >         "value": "65535"
	I0328 00:29:08.005408 1103152 command_runner.go:130] >       },
	I0328 00:29:08.005415 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.005426 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.005436 1103152 command_runner.go:130] >       "pinned": true
	I0328 00:29:08.005443 1103152 command_runner.go:130] >     }
	I0328 00:29:08.005451 1103152 command_runner.go:130] >   ]
	I0328 00:29:08.005457 1103152 command_runner.go:130] > }
	I0328 00:29:08.005667 1103152 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 00:29:08.005682 1103152 crio.go:433] Images already preloaded, skipping extraction
	I0328 00:29:08.005745 1103152 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 00:29:08.047523 1103152 command_runner.go:130] > {
	I0328 00:29:08.047557 1103152 command_runner.go:130] >   "images": [
	I0328 00:29:08.047563 1103152 command_runner.go:130] >     {
	I0328 00:29:08.047576 1103152 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0328 00:29:08.047584 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.047591 1103152 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0328 00:29:08.047595 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.047601 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.047616 1103152 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0328 00:29:08.047627 1103152 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0328 00:29:08.047643 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.047652 1103152 command_runner.go:130] >       "size": "65291810",
	I0328 00:29:08.047659 1103152 command_runner.go:130] >       "uid": null,
	I0328 00:29:08.047666 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.047686 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.047699 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.047705 1103152 command_runner.go:130] >     },
	I0328 00:29:08.047710 1103152 command_runner.go:130] >     {
	I0328 00:29:08.047721 1103152 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0328 00:29:08.047731 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.047740 1103152 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0328 00:29:08.047749 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.047757 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.047770 1103152 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0328 00:29:08.047785 1103152 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0328 00:29:08.047792 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.047800 1103152 command_runner.go:130] >       "size": "1363676",
	I0328 00:29:08.047806 1103152 command_runner.go:130] >       "uid": null,
	I0328 00:29:08.047821 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.047831 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.047839 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.047845 1103152 command_runner.go:130] >     },
	I0328 00:29:08.047851 1103152 command_runner.go:130] >     {
	I0328 00:29:08.047870 1103152 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0328 00:29:08.047878 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.047888 1103152 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0328 00:29:08.047897 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.047905 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.047922 1103152 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0328 00:29:08.047938 1103152 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0328 00:29:08.047947 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.047955 1103152 command_runner.go:130] >       "size": "31470524",
	I0328 00:29:08.047965 1103152 command_runner.go:130] >       "uid": null,
	I0328 00:29:08.047972 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.047985 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.047992 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.048001 1103152 command_runner.go:130] >     },
	I0328 00:29:08.048007 1103152 command_runner.go:130] >     {
	I0328 00:29:08.048020 1103152 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0328 00:29:08.048030 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.048038 1103152 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0328 00:29:08.048059 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.048065 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.048078 1103152 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0328 00:29:08.048111 1103152 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0328 00:29:08.048123 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.048131 1103152 command_runner.go:130] >       "size": "61245718",
	I0328 00:29:08.048140 1103152 command_runner.go:130] >       "uid": null,
	I0328 00:29:08.048150 1103152 command_runner.go:130] >       "username": "nonroot",
	I0328 00:29:08.048162 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.048173 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.048181 1103152 command_runner.go:130] >     },
	I0328 00:29:08.048190 1103152 command_runner.go:130] >     {
	I0328 00:29:08.048200 1103152 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0328 00:29:08.048210 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.048220 1103152 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0328 00:29:08.048229 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.048237 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.048252 1103152 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0328 00:29:08.048269 1103152 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0328 00:29:08.048279 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.048289 1103152 command_runner.go:130] >       "size": "150779692",
	I0328 00:29:08.048298 1103152 command_runner.go:130] >       "uid": {
	I0328 00:29:08.048305 1103152 command_runner.go:130] >         "value": "0"
	I0328 00:29:08.048314 1103152 command_runner.go:130] >       },
	I0328 00:29:08.048321 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.048329 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.048339 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.048345 1103152 command_runner.go:130] >     },
	I0328 00:29:08.048354 1103152 command_runner.go:130] >     {
	I0328 00:29:08.048366 1103152 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0328 00:29:08.048376 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.048385 1103152 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0328 00:29:08.048393 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.048401 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.048417 1103152 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0328 00:29:08.048432 1103152 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0328 00:29:08.048441 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.048449 1103152 command_runner.go:130] >       "size": "128508878",
	I0328 00:29:08.048458 1103152 command_runner.go:130] >       "uid": {
	I0328 00:29:08.048465 1103152 command_runner.go:130] >         "value": "0"
	I0328 00:29:08.048474 1103152 command_runner.go:130] >       },
	I0328 00:29:08.048482 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.048492 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.048501 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.048507 1103152 command_runner.go:130] >     },
	I0328 00:29:08.048516 1103152 command_runner.go:130] >     {
	I0328 00:29:08.048526 1103152 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0328 00:29:08.048537 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.048549 1103152 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0328 00:29:08.048555 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.048563 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.048581 1103152 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0328 00:29:08.048597 1103152 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0328 00:29:08.048610 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.048623 1103152 command_runner.go:130] >       "size": "123142962",
	I0328 00:29:08.048632 1103152 command_runner.go:130] >       "uid": {
	I0328 00:29:08.048640 1103152 command_runner.go:130] >         "value": "0"
	I0328 00:29:08.048649 1103152 command_runner.go:130] >       },
	I0328 00:29:08.048656 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.048667 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.048677 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.048683 1103152 command_runner.go:130] >     },
	I0328 00:29:08.048692 1103152 command_runner.go:130] >     {
	I0328 00:29:08.048702 1103152 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0328 00:29:08.048712 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.048722 1103152 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0328 00:29:08.048731 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.048740 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.048760 1103152 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0328 00:29:08.048775 1103152 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0328 00:29:08.048784 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.048799 1103152 command_runner.go:130] >       "size": "83634073",
	I0328 00:29:08.048810 1103152 command_runner.go:130] >       "uid": null,
	I0328 00:29:08.048821 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.048831 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.048839 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.048848 1103152 command_runner.go:130] >     },
	I0328 00:29:08.048854 1103152 command_runner.go:130] >     {
	I0328 00:29:08.048865 1103152 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0328 00:29:08.048874 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.048884 1103152 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0328 00:29:08.048892 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.048900 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.048915 1103152 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0328 00:29:08.048937 1103152 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0328 00:29:08.048946 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.048957 1103152 command_runner.go:130] >       "size": "60724018",
	I0328 00:29:08.048967 1103152 command_runner.go:130] >       "uid": {
	I0328 00:29:08.048976 1103152 command_runner.go:130] >         "value": "0"
	I0328 00:29:08.048985 1103152 command_runner.go:130] >       },
	I0328 00:29:08.048994 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.049003 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.049010 1103152 command_runner.go:130] >       "pinned": false
	I0328 00:29:08.049023 1103152 command_runner.go:130] >     },
	I0328 00:29:08.049030 1103152 command_runner.go:130] >     {
	I0328 00:29:08.049072 1103152 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0328 00:29:08.049082 1103152 command_runner.go:130] >       "repoTags": [
	I0328 00:29:08.049091 1103152 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0328 00:29:08.049099 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.049107 1103152 command_runner.go:130] >       "repoDigests": [
	I0328 00:29:08.049122 1103152 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0328 00:29:08.049141 1103152 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0328 00:29:08.049151 1103152 command_runner.go:130] >       ],
	I0328 00:29:08.049160 1103152 command_runner.go:130] >       "size": "750414",
	I0328 00:29:08.049170 1103152 command_runner.go:130] >       "uid": {
	I0328 00:29:08.049178 1103152 command_runner.go:130] >         "value": "65535"
	I0328 00:29:08.049186 1103152 command_runner.go:130] >       },
	I0328 00:29:08.049193 1103152 command_runner.go:130] >       "username": "",
	I0328 00:29:08.049201 1103152 command_runner.go:130] >       "spec": null,
	I0328 00:29:08.049211 1103152 command_runner.go:130] >       "pinned": true
	I0328 00:29:08.049217 1103152 command_runner.go:130] >     }
	I0328 00:29:08.049222 1103152 command_runner.go:130] >   ]
	I0328 00:29:08.049228 1103152 command_runner.go:130] > }
	I0328 00:29:08.050301 1103152 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 00:29:08.050328 1103152 cache_images.go:84] Images are preloaded, skipping loading
	I0328 00:29:08.050337 1103152 kubeadm.go:928] updating node { 192.168.39.88 8443 v1.29.3 crio true true} ...
	I0328 00:29:08.050458 1103152 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-200224 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.88
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-200224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 00:29:08.050529 1103152 ssh_runner.go:195] Run: crio config
	I0328 00:29:08.093056 1103152 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0328 00:29:08.093092 1103152 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0328 00:29:08.093102 1103152 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0328 00:29:08.093107 1103152 command_runner.go:130] > #
	I0328 00:29:08.093117 1103152 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0328 00:29:08.093126 1103152 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0328 00:29:08.093136 1103152 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0328 00:29:08.093155 1103152 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0328 00:29:08.093160 1103152 command_runner.go:130] > # reload'.
	I0328 00:29:08.093166 1103152 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0328 00:29:08.093172 1103152 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0328 00:29:08.093179 1103152 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0328 00:29:08.093185 1103152 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0328 00:29:08.093195 1103152 command_runner.go:130] > [crio]
	I0328 00:29:08.093206 1103152 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0328 00:29:08.093222 1103152 command_runner.go:130] > # containers images, in this directory.
	I0328 00:29:08.093230 1103152 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0328 00:29:08.093266 1103152 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0328 00:29:08.093275 1103152 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0328 00:29:08.093284 1103152 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0328 00:29:08.093291 1103152 command_runner.go:130] > # imagestore = ""
	I0328 00:29:08.093300 1103152 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0328 00:29:08.093307 1103152 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0328 00:29:08.093311 1103152 command_runner.go:130] > storage_driver = "overlay"
	I0328 00:29:08.093316 1103152 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0328 00:29:08.093325 1103152 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0328 00:29:08.093331 1103152 command_runner.go:130] > storage_option = [
	I0328 00:29:08.093340 1103152 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0328 00:29:08.093346 1103152 command_runner.go:130] > ]
	I0328 00:29:08.093357 1103152 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0328 00:29:08.093370 1103152 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0328 00:29:08.093378 1103152 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0328 00:29:08.093388 1103152 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0328 00:29:08.093396 1103152 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0328 00:29:08.093401 1103152 command_runner.go:130] > # always happen on a node reboot
	I0328 00:29:08.093405 1103152 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0328 00:29:08.093462 1103152 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0328 00:29:08.093481 1103152 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0328 00:29:08.093490 1103152 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0328 00:29:08.093497 1103152 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0328 00:29:08.093509 1103152 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0328 00:29:08.093525 1103152 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0328 00:29:08.093533 1103152 command_runner.go:130] > # internal_wipe = true
	I0328 00:29:08.093545 1103152 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0328 00:29:08.093556 1103152 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0328 00:29:08.093564 1103152 command_runner.go:130] > # internal_repair = false
	I0328 00:29:08.093572 1103152 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0328 00:29:08.093581 1103152 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0328 00:29:08.093592 1103152 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0328 00:29:08.093601 1103152 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0328 00:29:08.093614 1103152 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0328 00:29:08.093620 1103152 command_runner.go:130] > [crio.api]
	I0328 00:29:08.093629 1103152 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0328 00:29:08.093637 1103152 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0328 00:29:08.093652 1103152 command_runner.go:130] > # IP address on which the stream server will listen.
	I0328 00:29:08.093659 1103152 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0328 00:29:08.093669 1103152 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0328 00:29:08.093680 1103152 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0328 00:29:08.093694 1103152 command_runner.go:130] > # stream_port = "0"
	I0328 00:29:08.093707 1103152 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0328 00:29:08.093714 1103152 command_runner.go:130] > # stream_enable_tls = false
	I0328 00:29:08.093727 1103152 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0328 00:29:08.093735 1103152 command_runner.go:130] > # stream_idle_timeout = ""
	I0328 00:29:08.093747 1103152 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0328 00:29:08.093756 1103152 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0328 00:29:08.093762 1103152 command_runner.go:130] > # minutes.
	I0328 00:29:08.093776 1103152 command_runner.go:130] > # stream_tls_cert = ""
	I0328 00:29:08.093787 1103152 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0328 00:29:08.093800 1103152 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0328 00:29:08.093809 1103152 command_runner.go:130] > # stream_tls_key = ""
	I0328 00:29:08.093821 1103152 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0328 00:29:08.093833 1103152 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0328 00:29:08.093851 1103152 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0328 00:29:08.093860 1103152 command_runner.go:130] > # stream_tls_ca = ""
	I0328 00:29:08.093873 1103152 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0328 00:29:08.093884 1103152 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0328 00:29:08.093896 1103152 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0328 00:29:08.093906 1103152 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0328 00:29:08.093917 1103152 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0328 00:29:08.093927 1103152 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0328 00:29:08.093931 1103152 command_runner.go:130] > [crio.runtime]
	I0328 00:29:08.093939 1103152 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0328 00:29:08.093951 1103152 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0328 00:29:08.093959 1103152 command_runner.go:130] > # "nofile=1024:2048"
	I0328 00:29:08.093973 1103152 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0328 00:29:08.093983 1103152 command_runner.go:130] > # default_ulimits = [
	I0328 00:29:08.093989 1103152 command_runner.go:130] > # ]
	I0328 00:29:08.093999 1103152 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0328 00:29:08.094009 1103152 command_runner.go:130] > # no_pivot = false
	I0328 00:29:08.094017 1103152 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0328 00:29:08.094031 1103152 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0328 00:29:08.094041 1103152 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0328 00:29:08.094055 1103152 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0328 00:29:08.094070 1103152 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0328 00:29:08.094083 1103152 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0328 00:29:08.094093 1103152 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0328 00:29:08.094100 1103152 command_runner.go:130] > # Cgroup setting for conmon
	I0328 00:29:08.094113 1103152 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0328 00:29:08.094121 1103152 command_runner.go:130] > conmon_cgroup = "pod"
	I0328 00:29:08.094131 1103152 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0328 00:29:08.094142 1103152 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0328 00:29:08.094154 1103152 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0328 00:29:08.094163 1103152 command_runner.go:130] > conmon_env = [
	I0328 00:29:08.094173 1103152 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0328 00:29:08.094180 1103152 command_runner.go:130] > ]
	I0328 00:29:08.094188 1103152 command_runner.go:130] > # Additional environment variables to set for all the
	I0328 00:29:08.094199 1103152 command_runner.go:130] > # containers. These are overridden if set in the
	I0328 00:29:08.094209 1103152 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0328 00:29:08.094219 1103152 command_runner.go:130] > # default_env = [
	I0328 00:29:08.094225 1103152 command_runner.go:130] > # ]
	I0328 00:29:08.094255 1103152 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0328 00:29:08.094270 1103152 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0328 00:29:08.094279 1103152 command_runner.go:130] > # selinux = false
	I0328 00:29:08.094290 1103152 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0328 00:29:08.094303 1103152 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0328 00:29:08.094312 1103152 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0328 00:29:08.094316 1103152 command_runner.go:130] > # seccomp_profile = ""
	I0328 00:29:08.094327 1103152 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0328 00:29:08.094340 1103152 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0328 00:29:08.094350 1103152 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0328 00:29:08.094361 1103152 command_runner.go:130] > # which might increase security.
	I0328 00:29:08.094369 1103152 command_runner.go:130] > # This option is currently deprecated,
	I0328 00:29:08.094381 1103152 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0328 00:29:08.094392 1103152 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0328 00:29:08.094405 1103152 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0328 00:29:08.094418 1103152 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0328 00:29:08.094431 1103152 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0328 00:29:08.094445 1103152 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0328 00:29:08.094457 1103152 command_runner.go:130] > # This option supports live configuration reload.
	I0328 00:29:08.094474 1103152 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0328 00:29:08.094486 1103152 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0328 00:29:08.094495 1103152 command_runner.go:130] > # the cgroup blockio controller.
	I0328 00:29:08.094500 1103152 command_runner.go:130] > # blockio_config_file = ""
	I0328 00:29:08.094512 1103152 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0328 00:29:08.094521 1103152 command_runner.go:130] > # blockio parameters.
	I0328 00:29:08.094528 1103152 command_runner.go:130] > # blockio_reload = false
	I0328 00:29:08.094541 1103152 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0328 00:29:08.094551 1103152 command_runner.go:130] > # irqbalance daemon.
	I0328 00:29:08.094559 1103152 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0328 00:29:08.094572 1103152 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0328 00:29:08.094584 1103152 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0328 00:29:08.094596 1103152 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0328 00:29:08.094608 1103152 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0328 00:29:08.094622 1103152 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0328 00:29:08.094631 1103152 command_runner.go:130] > # This option supports live configuration reload.
	I0328 00:29:08.094641 1103152 command_runner.go:130] > # rdt_config_file = ""
	I0328 00:29:08.094649 1103152 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0328 00:29:08.094659 1103152 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0328 00:29:08.094680 1103152 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0328 00:29:08.094693 1103152 command_runner.go:130] > # separate_pull_cgroup = ""
	I0328 00:29:08.094705 1103152 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0328 00:29:08.094718 1103152 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0328 00:29:08.094725 1103152 command_runner.go:130] > # will be added.
	I0328 00:29:08.094733 1103152 command_runner.go:130] > # default_capabilities = [
	I0328 00:29:08.094742 1103152 command_runner.go:130] > # 	"CHOWN",
	I0328 00:29:08.094749 1103152 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0328 00:29:08.094758 1103152 command_runner.go:130] > # 	"FSETID",
	I0328 00:29:08.094764 1103152 command_runner.go:130] > # 	"FOWNER",
	I0328 00:29:08.094773 1103152 command_runner.go:130] > # 	"SETGID",
	I0328 00:29:08.094779 1103152 command_runner.go:130] > # 	"SETUID",
	I0328 00:29:08.094786 1103152 command_runner.go:130] > # 	"SETPCAP",
	I0328 00:29:08.094790 1103152 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0328 00:29:08.094798 1103152 command_runner.go:130] > # 	"KILL",
	I0328 00:29:08.094804 1103152 command_runner.go:130] > # ]
	I0328 00:29:08.094820 1103152 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0328 00:29:08.094835 1103152 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0328 00:29:08.094846 1103152 command_runner.go:130] > # add_inheritable_capabilities = false
	I0328 00:29:08.094855 1103152 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0328 00:29:08.094868 1103152 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0328 00:29:08.094877 1103152 command_runner.go:130] > default_sysctls = [
	I0328 00:29:08.094886 1103152 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0328 00:29:08.094892 1103152 command_runner.go:130] > ]
	I0328 00:29:08.094899 1103152 command_runner.go:130] > # List of devices on the host that a
	I0328 00:29:08.094912 1103152 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0328 00:29:08.094919 1103152 command_runner.go:130] > # allowed_devices = [
	I0328 00:29:08.094929 1103152 command_runner.go:130] > # 	"/dev/fuse",
	I0328 00:29:08.094935 1103152 command_runner.go:130] > # ]
	I0328 00:29:08.094943 1103152 command_runner.go:130] > # List of additional devices. specified as
	I0328 00:29:08.094957 1103152 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0328 00:29:08.094968 1103152 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0328 00:29:08.094979 1103152 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0328 00:29:08.094986 1103152 command_runner.go:130] > # additional_devices = [
	I0328 00:29:08.094991 1103152 command_runner.go:130] > # ]
	I0328 00:29:08.095003 1103152 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0328 00:29:08.095015 1103152 command_runner.go:130] > # cdi_spec_dirs = [
	I0328 00:29:08.095021 1103152 command_runner.go:130] > # 	"/etc/cdi",
	I0328 00:29:08.095032 1103152 command_runner.go:130] > # 	"/var/run/cdi",
	I0328 00:29:08.095037 1103152 command_runner.go:130] > # ]
	I0328 00:29:08.095048 1103152 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0328 00:29:08.095060 1103152 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0328 00:29:08.095070 1103152 command_runner.go:130] > # Defaults to false.
	I0328 00:29:08.095078 1103152 command_runner.go:130] > # device_ownership_from_security_context = false
	I0328 00:29:08.095088 1103152 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0328 00:29:08.095095 1103152 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0328 00:29:08.095104 1103152 command_runner.go:130] > # hooks_dir = [
	I0328 00:29:08.095113 1103152 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0328 00:29:08.095122 1103152 command_runner.go:130] > # ]
	I0328 00:29:08.095131 1103152 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0328 00:29:08.095145 1103152 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0328 00:29:08.095156 1103152 command_runner.go:130] > # its default mounts from the following two files:
	I0328 00:29:08.095165 1103152 command_runner.go:130] > #
	I0328 00:29:08.095173 1103152 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0328 00:29:08.095184 1103152 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0328 00:29:08.095197 1103152 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0328 00:29:08.095206 1103152 command_runner.go:130] > #
	I0328 00:29:08.095217 1103152 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0328 00:29:08.095230 1103152 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0328 00:29:08.095244 1103152 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0328 00:29:08.095255 1103152 command_runner.go:130] > #      only add mounts it finds in this file.
	I0328 00:29:08.095261 1103152 command_runner.go:130] > #
	I0328 00:29:08.095265 1103152 command_runner.go:130] > # default_mounts_file = ""
	I0328 00:29:08.095276 1103152 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0328 00:29:08.095295 1103152 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0328 00:29:08.095305 1103152 command_runner.go:130] > pids_limit = 1024
	I0328 00:29:08.095315 1103152 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0328 00:29:08.095328 1103152 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0328 00:29:08.095341 1103152 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0328 00:29:08.095356 1103152 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0328 00:29:08.095362 1103152 command_runner.go:130] > # log_size_max = -1
	I0328 00:29:08.095371 1103152 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0328 00:29:08.095381 1103152 command_runner.go:130] > # log_to_journald = false
	I0328 00:29:08.095392 1103152 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0328 00:29:08.095403 1103152 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0328 00:29:08.095414 1103152 command_runner.go:130] > # Path to directory for container attach sockets.
	I0328 00:29:08.095422 1103152 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0328 00:29:08.095433 1103152 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0328 00:29:08.095443 1103152 command_runner.go:130] > # bind_mount_prefix = ""
	I0328 00:29:08.095449 1103152 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0328 00:29:08.095457 1103152 command_runner.go:130] > # read_only = false
	I0328 00:29:08.095469 1103152 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0328 00:29:08.095482 1103152 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0328 00:29:08.095493 1103152 command_runner.go:130] > # live configuration reload.
	I0328 00:29:08.095503 1103152 command_runner.go:130] > # log_level = "info"
	I0328 00:29:08.095512 1103152 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0328 00:29:08.095523 1103152 command_runner.go:130] > # This option supports live configuration reload.
	I0328 00:29:08.095532 1103152 command_runner.go:130] > # log_filter = ""
	I0328 00:29:08.095542 1103152 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0328 00:29:08.095552 1103152 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0328 00:29:08.095556 1103152 command_runner.go:130] > # separated by comma.
	I0328 00:29:08.095571 1103152 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0328 00:29:08.095581 1103152 command_runner.go:130] > # uid_mappings = ""
	I0328 00:29:08.095590 1103152 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0328 00:29:08.095603 1103152 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0328 00:29:08.095613 1103152 command_runner.go:130] > # separated by comma.
	I0328 00:29:08.095625 1103152 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0328 00:29:08.095635 1103152 command_runner.go:130] > # gid_mappings = ""
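	(As a hedged illustration of the containerUID:HostUID:Size syntax described above; the ID ranges are hypothetical, and the options themselves are deprecated in favor of Kubernetes user namespace support.)
	[crio.runtime]
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536,60000:200000:1000"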
	I0328 00:29:08.095645 1103152 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0328 00:29:08.095655 1103152 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0328 00:29:08.095666 1103152 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0328 00:29:08.095682 1103152 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0328 00:29:08.095694 1103152 command_runner.go:130] > # minimum_mappable_uid = -1
	I0328 00:29:08.095708 1103152 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0328 00:29:08.095720 1103152 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0328 00:29:08.095733 1103152 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0328 00:29:08.095748 1103152 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0328 00:29:08.095756 1103152 command_runner.go:130] > # minimum_mappable_gid = -1
	I0328 00:29:08.095763 1103152 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0328 00:29:08.095776 1103152 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0328 00:29:08.095789 1103152 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0328 00:29:08.095799 1103152 command_runner.go:130] > # ctr_stop_timeout = 30
	I0328 00:29:08.095809 1103152 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0328 00:29:08.095821 1103152 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0328 00:29:08.095832 1103152 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0328 00:29:08.095840 1103152 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0328 00:29:08.095845 1103152 command_runner.go:130] > drop_infra_ctr = false
	I0328 00:29:08.095856 1103152 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0328 00:29:08.095866 1103152 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0328 00:29:08.095882 1103152 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0328 00:29:08.095891 1103152 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0328 00:29:08.095902 1103152 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0328 00:29:08.095915 1103152 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0328 00:29:08.095923 1103152 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0328 00:29:08.095929 1103152 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0328 00:29:08.095935 1103152 command_runner.go:130] > # shared_cpuset = ""
	I0328 00:29:08.095947 1103152 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0328 00:29:08.095959 1103152 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0328 00:29:08.095967 1103152 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0328 00:29:08.095981 1103152 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0328 00:29:08.095991 1103152 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0328 00:29:08.096000 1103152 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0328 00:29:08.096013 1103152 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0328 00:29:08.096023 1103152 command_runner.go:130] > # enable_criu_support = false
	I0328 00:29:08.096029 1103152 command_runner.go:130] > # Enable/disable the generation of the container,
	I0328 00:29:08.096043 1103152 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0328 00:29:08.096054 1103152 command_runner.go:130] > # enable_pod_events = false
	I0328 00:29:08.096064 1103152 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0328 00:29:08.096088 1103152 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0328 00:29:08.096098 1103152 command_runner.go:130] > # default_runtime = "runc"
	I0328 00:29:08.096109 1103152 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0328 00:29:08.096121 1103152 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0328 00:29:08.096136 1103152 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0328 00:29:08.096148 1103152 command_runner.go:130] > # creation as a file is not desired either.
	I0328 00:29:08.096161 1103152 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0328 00:29:08.096172 1103152 command_runner.go:130] > # the hostname is being managed dynamically.
	I0328 00:29:08.096183 1103152 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0328 00:29:08.096191 1103152 command_runner.go:130] > # ]
	I0328 00:29:08.096204 1103152 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0328 00:29:08.096215 1103152 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0328 00:29:08.096225 1103152 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0328 00:29:08.096238 1103152 command_runner.go:130] > # Each entry in the table should follow the format:
	I0328 00:29:08.096247 1103152 command_runner.go:130] > #
	I0328 00:29:08.096255 1103152 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0328 00:29:08.096266 1103152 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0328 00:29:08.096293 1103152 command_runner.go:130] > # runtime_type = "oci"
	I0328 00:29:08.096304 1103152 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0328 00:29:08.096314 1103152 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0328 00:29:08.096322 1103152 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0328 00:29:08.096328 1103152 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0328 00:29:08.096352 1103152 command_runner.go:130] > # monitor_env = []
	I0328 00:29:08.096364 1103152 command_runner.go:130] > # privileged_without_host_devices = false
	I0328 00:29:08.096375 1103152 command_runner.go:130] > # allowed_annotations = []
	I0328 00:29:08.096387 1103152 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0328 00:29:08.096396 1103152 command_runner.go:130] > # Where:
	I0328 00:29:08.096407 1103152 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0328 00:29:08.096421 1103152 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0328 00:29:08.096432 1103152 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0328 00:29:08.096442 1103152 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0328 00:29:08.096452 1103152 command_runner.go:130] > #   in $PATH.
	I0328 00:29:08.096466 1103152 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0328 00:29:08.096477 1103152 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0328 00:29:08.096493 1103152 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0328 00:29:08.096502 1103152 command_runner.go:130] > #   state.
	I0328 00:29:08.096515 1103152 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0328 00:29:08.096526 1103152 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0328 00:29:08.096535 1103152 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0328 00:29:08.096546 1103152 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0328 00:29:08.096559 1103152 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0328 00:29:08.096573 1103152 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0328 00:29:08.096584 1103152 command_runner.go:130] > #   The currently recognized values are:
	I0328 00:29:08.096598 1103152 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0328 00:29:08.096612 1103152 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0328 00:29:08.096624 1103152 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0328 00:29:08.096632 1103152 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0328 00:29:08.096647 1103152 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0328 00:29:08.096662 1103152 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0328 00:29:08.096676 1103152 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0328 00:29:08.096694 1103152 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0328 00:29:08.096707 1103152 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0328 00:29:08.096719 1103152 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0328 00:29:08.096727 1103152 command_runner.go:130] > #   deprecated option "conmon".
	I0328 00:29:08.096738 1103152 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0328 00:29:08.096750 1103152 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0328 00:29:08.096764 1103152 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0328 00:29:08.096776 1103152 command_runner.go:130] > #   should be moved to the container's cgroup
	I0328 00:29:08.096792 1103152 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0328 00:29:08.096803 1103152 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0328 00:29:08.096816 1103152 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0328 00:29:08.096827 1103152 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0328 00:29:08.096833 1103152 command_runner.go:130] > #
	I0328 00:29:08.096839 1103152 command_runner.go:130] > # Using the seccomp notifier feature:
	I0328 00:29:08.096847 1103152 command_runner.go:130] > #
	I0328 00:29:08.096858 1103152 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0328 00:29:08.096872 1103152 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0328 00:29:08.096880 1103152 command_runner.go:130] > #
	I0328 00:29:08.096893 1103152 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0328 00:29:08.096905 1103152 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0328 00:29:08.096912 1103152 command_runner.go:130] > #
	I0328 00:29:08.096918 1103152 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0328 00:29:08.096926 1103152 command_runner.go:130] > # feature.
	I0328 00:29:08.096933 1103152 command_runner.go:130] > #
	I0328 00:29:08.096946 1103152 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0328 00:29:08.096960 1103152 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0328 00:29:08.096973 1103152 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0328 00:29:08.096985 1103152 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0328 00:29:08.096998 1103152 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0328 00:29:08.097006 1103152 command_runner.go:130] > #
	I0328 00:29:08.097013 1103152 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0328 00:29:08.097025 1103152 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0328 00:29:08.097033 1103152 command_runner.go:130] > #
	I0328 00:29:08.097043 1103152 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0328 00:29:08.097055 1103152 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0328 00:29:08.097064 1103152 command_runner.go:130] > #
	I0328 00:29:08.097075 1103152 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0328 00:29:08.097087 1103152 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0328 00:29:08.097096 1103152 command_runner.go:130] > # limitation.
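	(A minimal sketch of a runtime handler that opts into the seccomp notifier described above; the handler name "runc-notify" and its paths are assumptions for illustration, not part of this cluster's configuration.)
	[crio.runtime.runtimes.runc-notify]
	runtime_path = "/usr/bin/runc"   # notifier support requires at least runc 1.1.0 or crun 0.19
	runtime_type = "oci"
	runtime_root = "/run/runc-notify"
	allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
	A pod opting in would then carry the annotation io.kubernetes.cri-o.seccompNotifierAction=stop and set restartPolicy: Never, as the comments above require.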
	I0328 00:29:08.097104 1103152 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0328 00:29:08.097113 1103152 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0328 00:29:08.097123 1103152 command_runner.go:130] > runtime_type = "oci"
	I0328 00:29:08.097133 1103152 command_runner.go:130] > runtime_root = "/run/runc"
	I0328 00:29:08.097141 1103152 command_runner.go:130] > runtime_config_path = ""
	I0328 00:29:08.097153 1103152 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0328 00:29:08.097163 1103152 command_runner.go:130] > monitor_cgroup = "pod"
	I0328 00:29:08.097172 1103152 command_runner.go:130] > monitor_exec_cgroup = ""
	I0328 00:29:08.097181 1103152 command_runner.go:130] > monitor_env = [
	I0328 00:29:08.097194 1103152 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0328 00:29:08.097201 1103152 command_runner.go:130] > ]
	I0328 00:29:08.097205 1103152 command_runner.go:130] > privileged_without_host_devices = false
	I0328 00:29:08.097217 1103152 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0328 00:29:08.097229 1103152 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0328 00:29:08.097239 1103152 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0328 00:29:08.097255 1103152 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0328 00:29:08.097271 1103152 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0328 00:29:08.097284 1103152 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0328 00:29:08.097299 1103152 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0328 00:29:08.097315 1103152 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0328 00:29:08.097328 1103152 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0328 00:29:08.097340 1103152 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0328 00:29:08.097349 1103152 command_runner.go:130] > # Example:
	I0328 00:29:08.097356 1103152 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0328 00:29:08.097368 1103152 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0328 00:29:08.097379 1103152 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0328 00:29:08.097387 1103152 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0328 00:29:08.097394 1103152 command_runner.go:130] > # cpuset = 0
	I0328 00:29:08.097398 1103152 command_runner.go:130] > # cpushares = "0-1"
	I0328 00:29:08.097402 1103152 command_runner.go:130] > # Where:
	I0328 00:29:08.097409 1103152 command_runner.go:130] > # The workload name is workload-type.
	I0328 00:29:08.097424 1103152 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0328 00:29:08.097432 1103152 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0328 00:29:08.097444 1103152 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0328 00:29:08.097462 1103152 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0328 00:29:08.097474 1103152 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0328 00:29:08.097485 1103152 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0328 00:29:08.097495 1103152 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0328 00:29:08.097504 1103152 command_runner.go:130] > # Default value is set to true
	I0328 00:29:08.097515 1103152 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0328 00:29:08.097529 1103152 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0328 00:29:08.097541 1103152 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0328 00:29:08.097551 1103152 command_runner.go:130] > # Default value is set to 'false'
	I0328 00:29:08.097562 1103152 command_runner.go:130] > # disable_hostport_mapping = false
	I0328 00:29:08.097575 1103152 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0328 00:29:08.097581 1103152 command_runner.go:130] > #
	I0328 00:29:08.097588 1103152 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0328 00:29:08.097603 1103152 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0328 00:29:08.097614 1103152 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0328 00:29:08.097624 1103152 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0328 00:29:08.097633 1103152 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0328 00:29:08.097639 1103152 command_runner.go:130] > [crio.image]
	I0328 00:29:08.097649 1103152 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0328 00:29:08.097657 1103152 command_runner.go:130] > # default_transport = "docker://"
	I0328 00:29:08.097666 1103152 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0328 00:29:08.097673 1103152 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0328 00:29:08.097680 1103152 command_runner.go:130] > # global_auth_file = ""
	I0328 00:29:08.097694 1103152 command_runner.go:130] > # The image used to instantiate infra containers.
	I0328 00:29:08.097702 1103152 command_runner.go:130] > # This option supports live configuration reload.
	I0328 00:29:08.097711 1103152 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0328 00:29:08.097723 1103152 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0328 00:29:08.097732 1103152 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0328 00:29:08.097740 1103152 command_runner.go:130] > # This option supports live configuration reload.
	I0328 00:29:08.097746 1103152 command_runner.go:130] > # pause_image_auth_file = ""
	I0328 00:29:08.097752 1103152 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0328 00:29:08.097758 1103152 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0328 00:29:08.097771 1103152 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0328 00:29:08.097780 1103152 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0328 00:29:08.097788 1103152 command_runner.go:130] > # pause_command = "/pause"
	I0328 00:29:08.097798 1103152 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0328 00:29:08.097807 1103152 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0328 00:29:08.097817 1103152 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0328 00:29:08.097827 1103152 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0328 00:29:08.097836 1103152 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0328 00:29:08.097843 1103152 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0328 00:29:08.097853 1103152 command_runner.go:130] > # pinned_images = [
	I0328 00:29:08.097859 1103152 command_runner.go:130] > # ]
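	(A sketch of the three pattern styles described above for pinned_images; the image names are placeholders, not images actually pinned in this run.)
	[crio.image]
	pinned_images = [
	    "registry.k8s.io/pause:3.9",   # exact: must match the entire name
	    "registry.k8s.io/kube-*",      # glob: wildcard only at the end
	    "*coredns*",                   # keyword: wildcards on both ends
	]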
	I0328 00:29:08.097871 1103152 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0328 00:29:08.097885 1103152 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0328 00:29:08.097897 1103152 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0328 00:29:08.097910 1103152 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0328 00:29:08.097919 1103152 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0328 00:29:08.097923 1103152 command_runner.go:130] > # signature_policy = ""
	I0328 00:29:08.097929 1103152 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0328 00:29:08.097943 1103152 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0328 00:29:08.097956 1103152 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0328 00:29:08.097967 1103152 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0328 00:29:08.097980 1103152 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0328 00:29:08.097990 1103152 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0328 00:29:08.098005 1103152 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0328 00:29:08.098018 1103152 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0328 00:29:08.098026 1103152 command_runner.go:130] > # changing them here.
	I0328 00:29:08.098030 1103152 command_runner.go:130] > # insecure_registries = [
	I0328 00:29:08.098037 1103152 command_runner.go:130] > # ]
	I0328 00:29:08.098046 1103152 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0328 00:29:08.098058 1103152 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0328 00:29:08.098069 1103152 command_runner.go:130] > # image_volumes = "mkdir"
	I0328 00:29:08.098080 1103152 command_runner.go:130] > # Temporary directory to use for storing big files
	I0328 00:29:08.098090 1103152 command_runner.go:130] > # big_files_temporary_dir = ""
	I0328 00:29:08.098102 1103152 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0328 00:29:08.098111 1103152 command_runner.go:130] > # CNI plugins.
	I0328 00:29:08.098120 1103152 command_runner.go:130] > [crio.network]
	I0328 00:29:08.098130 1103152 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0328 00:29:08.098140 1103152 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0328 00:29:08.098150 1103152 command_runner.go:130] > # cni_default_network = ""
	I0328 00:29:08.098163 1103152 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0328 00:29:08.098173 1103152 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0328 00:29:08.098185 1103152 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0328 00:29:08.098194 1103152 command_runner.go:130] > # plugin_dirs = [
	I0328 00:29:08.098204 1103152 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0328 00:29:08.098211 1103152 command_runner.go:130] > # ]
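	(For illustration, a [crio.network] sketch using the default paths listed above; the network name "kindnet" is an assumption based on the CNI recommended later in this log, not a value read from this node.)
	[crio.network]
	cni_default_network = "kindnet"
	network_dir = "/etc/cni/net.d/"
	plugin_dirs = [
	    "/opt/cni/bin/",
	]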
	I0328 00:29:08.098217 1103152 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0328 00:29:08.098225 1103152 command_runner.go:130] > [crio.metrics]
	I0328 00:29:08.098245 1103152 command_runner.go:130] > # Globally enable or disable metrics support.
	I0328 00:29:08.098255 1103152 command_runner.go:130] > enable_metrics = true
	I0328 00:29:08.098266 1103152 command_runner.go:130] > # Specify enabled metrics collectors.
	I0328 00:29:08.098277 1103152 command_runner.go:130] > # Per default all metrics are enabled.
	I0328 00:29:08.098289 1103152 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0328 00:29:08.098302 1103152 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0328 00:29:08.098313 1103152 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0328 00:29:08.098320 1103152 command_runner.go:130] > # metrics_collectors = [
	I0328 00:29:08.098326 1103152 command_runner.go:130] > # 	"operations",
	I0328 00:29:08.098337 1103152 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0328 00:29:08.098349 1103152 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0328 00:29:08.098358 1103152 command_runner.go:130] > # 	"operations_errors",
	I0328 00:29:08.098367 1103152 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0328 00:29:08.098378 1103152 command_runner.go:130] > # 	"image_pulls_by_name",
	I0328 00:29:08.098387 1103152 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0328 00:29:08.098395 1103152 command_runner.go:130] > # 	"image_pulls_failures",
	I0328 00:29:08.098403 1103152 command_runner.go:130] > # 	"image_pulls_successes",
	I0328 00:29:08.098408 1103152 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0328 00:29:08.098418 1103152 command_runner.go:130] > # 	"image_layer_reuse",
	I0328 00:29:08.098429 1103152 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0328 00:29:08.098442 1103152 command_runner.go:130] > # 	"containers_oom_total",
	I0328 00:29:08.098451 1103152 command_runner.go:130] > # 	"containers_oom",
	I0328 00:29:08.098460 1103152 command_runner.go:130] > # 	"processes_defunct",
	I0328 00:29:08.098470 1103152 command_runner.go:130] > # 	"operations_total",
	I0328 00:29:08.098480 1103152 command_runner.go:130] > # 	"operations_latency_seconds",
	I0328 00:29:08.098491 1103152 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0328 00:29:08.098498 1103152 command_runner.go:130] > # 	"operations_errors_total",
	I0328 00:29:08.098502 1103152 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0328 00:29:08.098515 1103152 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0328 00:29:08.098527 1103152 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0328 00:29:08.098534 1103152 command_runner.go:130] > # 	"image_pulls_success_total",
	I0328 00:29:08.098544 1103152 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0328 00:29:08.098554 1103152 command_runner.go:130] > # 	"containers_oom_count_total",
	I0328 00:29:08.098566 1103152 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0328 00:29:08.098576 1103152 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0328 00:29:08.098585 1103152 command_runner.go:130] > # ]
	I0328 00:29:08.098595 1103152 command_runner.go:130] > # The port on which the metrics server will listen.
	I0328 00:29:08.098602 1103152 command_runner.go:130] > # metrics_port = 9090
	I0328 00:29:08.098610 1103152 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0328 00:29:08.098620 1103152 command_runner.go:130] > # metrics_socket = ""
	I0328 00:29:08.098630 1103152 command_runner.go:130] > # The certificate for the secure metrics server.
	I0328 00:29:08.098643 1103152 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0328 00:29:08.098655 1103152 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0328 00:29:08.098666 1103152 command_runner.go:130] > # certificate on any modification event.
	I0328 00:29:08.098676 1103152 command_runner.go:130] > # metrics_cert = ""
	I0328 00:29:08.098684 1103152 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0328 00:29:08.098696 1103152 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0328 00:29:08.098705 1103152 command_runner.go:130] > # metrics_key = ""
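	(A hedged sketch of a [crio.metrics] block that enables only a subset of the collectors listed above; the collector names are copied from that list and the port is the documented default, not necessarily what this node exposes.)
	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	metrics_collectors = [
	    "operations",
	    "image_pulls_failure_total",
	    "containers_oom_count_total",
	]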
	I0328 00:29:08.098718 1103152 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0328 00:29:08.098728 1103152 command_runner.go:130] > [crio.tracing]
	I0328 00:29:08.098740 1103152 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0328 00:29:08.098750 1103152 command_runner.go:130] > # enable_tracing = false
	I0328 00:29:08.098762 1103152 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0328 00:29:08.098772 1103152 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0328 00:29:08.098784 1103152 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0328 00:29:08.098791 1103152 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0328 00:29:08.098798 1103152 command_runner.go:130] > # CRI-O NRI configuration.
	I0328 00:29:08.098807 1103152 command_runner.go:130] > [crio.nri]
	I0328 00:29:08.098817 1103152 command_runner.go:130] > # Globally enable or disable NRI.
	I0328 00:29:08.098824 1103152 command_runner.go:130] > # enable_nri = false
	I0328 00:29:08.098834 1103152 command_runner.go:130] > # NRI socket to listen on.
	I0328 00:29:08.098845 1103152 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0328 00:29:08.098853 1103152 command_runner.go:130] > # NRI plugin directory to use.
	I0328 00:29:08.098863 1103152 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0328 00:29:08.098877 1103152 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0328 00:29:08.098886 1103152 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0328 00:29:08.098892 1103152 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0328 00:29:08.098901 1103152 command_runner.go:130] > # nri_disable_connections = false
	I0328 00:29:08.098912 1103152 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0328 00:29:08.098923 1103152 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0328 00:29:08.098935 1103152 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0328 00:29:08.098945 1103152 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
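	(A minimal [crio.nri] sketch that turns NRI on while keeping the defaults shown above; purely illustrative, since NRI remains at its disabled default in this run's configuration.)
	[crio.nri]
	enable_nri = true
	nri_listen = "/var/run/nri/nri.sock"
	nri_plugin_dir = "/opt/nri/plugins"
	nri_plugin_registration_timeout = "5s"
	nri_plugin_request_timeout = "2s"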
	I0328 00:29:08.098960 1103152 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0328 00:29:08.098968 1103152 command_runner.go:130] > [crio.stats]
	I0328 00:29:08.098977 1103152 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0328 00:29:08.098987 1103152 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0328 00:29:08.098998 1103152 command_runner.go:130] > # stats_collection_period = 0
	I0328 00:29:08.099031 1103152 command_runner.go:130] ! time="2024-03-28 00:29:08.057689871Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0328 00:29:08.099052 1103152 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0328 00:29:08.099200 1103152 cni.go:84] Creating CNI manager for ""
	I0328 00:29:08.099219 1103152 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0328 00:29:08.099227 1103152 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 00:29:08.099255 1103152 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.88 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-200224 NodeName:multinode-200224 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.88"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.88 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 00:29:08.099427 1103152 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.88
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-200224"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.88
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.88"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 00:29:08.099508 1103152 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 00:29:08.110930 1103152 command_runner.go:130] > kubeadm
	I0328 00:29:08.110959 1103152 command_runner.go:130] > kubectl
	I0328 00:29:08.110963 1103152 command_runner.go:130] > kubelet
	I0328 00:29:08.110998 1103152 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 00:29:08.111061 1103152 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 00:29:08.121519 1103152 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0328 00:29:08.139092 1103152 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 00:29:08.156952 1103152 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0328 00:29:08.174345 1103152 ssh_runner.go:195] Run: grep 192.168.39.88	control-plane.minikube.internal$ /etc/hosts
	I0328 00:29:08.178261 1103152 command_runner.go:130] > 192.168.39.88	control-plane.minikube.internal
	I0328 00:29:08.178458 1103152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:29:08.327719 1103152 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 00:29:08.343089 1103152 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/multinode-200224 for IP: 192.168.39.88
	I0328 00:29:08.343121 1103152 certs.go:194] generating shared ca certs ...
	I0328 00:29:08.343139 1103152 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:29:08.343294 1103152 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 00:29:08.343329 1103152 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 00:29:08.343339 1103152 certs.go:256] generating profile certs ...
	I0328 00:29:08.343422 1103152 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/multinode-200224/client.key
	I0328 00:29:08.343475 1103152 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/multinode-200224/apiserver.key.99234e7d
	I0328 00:29:08.343509 1103152 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/multinode-200224/proxy-client.key
	I0328 00:29:08.343521 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0328 00:29:08.343545 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0328 00:29:08.343560 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0328 00:29:08.343570 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0328 00:29:08.343580 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/multinode-200224/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0328 00:29:08.343600 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/multinode-200224/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0328 00:29:08.343615 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/multinode-200224/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0328 00:29:08.343629 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/multinode-200224/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0328 00:29:08.343680 1103152 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 00:29:08.343708 1103152 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 00:29:08.343720 1103152 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 00:29:08.343743 1103152 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 00:29:08.343766 1103152 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 00:29:08.343788 1103152 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 00:29:08.343825 1103152 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 00:29:08.343854 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:29:08.343869 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem -> /usr/share/ca-certificates/1076522.pem
	I0328 00:29:08.343882 1103152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> /usr/share/ca-certificates/10765222.pem
	I0328 00:29:08.344628 1103152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 00:29:08.371570 1103152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 00:29:08.398189 1103152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 00:29:08.423638 1103152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 00:29:08.448999 1103152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/multinode-200224/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0328 00:29:08.475074 1103152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/multinode-200224/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0328 00:29:08.499467 1103152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/multinode-200224/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 00:29:08.523715 1103152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/multinode-200224/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0328 00:29:08.548018 1103152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 00:29:08.573481 1103152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 00:29:08.598469 1103152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 00:29:08.623258 1103152 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 00:29:08.639726 1103152 ssh_runner.go:195] Run: openssl version
	I0328 00:29:08.645983 1103152 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0328 00:29:08.646078 1103152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 00:29:08.657621 1103152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:29:08.662303 1103152 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:29:08.662334 1103152 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:29:08.662384 1103152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:29:08.668270 1103152 command_runner.go:130] > b5213941
	I0328 00:29:08.668359 1103152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 00:29:08.677933 1103152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 00:29:08.688670 1103152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 00:29:08.693094 1103152 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 00:29:08.693388 1103152 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 00:29:08.693436 1103152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 00:29:08.698917 1103152 command_runner.go:130] > 51391683
	I0328 00:29:08.699229 1103152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 00:29:08.708717 1103152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 00:29:08.720335 1103152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 00:29:08.725120 1103152 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 00:29:08.725157 1103152 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 00:29:08.725196 1103152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 00:29:08.731180 1103152 command_runner.go:130] > 3ec20f2e
	I0328 00:29:08.731243 1103152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 00:29:08.741487 1103152 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 00:29:08.746114 1103152 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 00:29:08.746135 1103152 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0328 00:29:08.746141 1103152 command_runner.go:130] > Device: 253,1	Inode: 7339526     Links: 1
	I0328 00:29:08.746147 1103152 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0328 00:29:08.746153 1103152 command_runner.go:130] > Access: 2024-03-28 00:22:46.667068132 +0000
	I0328 00:29:08.746165 1103152 command_runner.go:130] > Modify: 2024-03-28 00:22:46.667068132 +0000
	I0328 00:29:08.746175 1103152 command_runner.go:130] > Change: 2024-03-28 00:22:46.667068132 +0000
	I0328 00:29:08.746182 1103152 command_runner.go:130] >  Birth: 2024-03-28 00:22:46.667068132 +0000
	I0328 00:29:08.746272 1103152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 00:29:08.751887 1103152 command_runner.go:130] > Certificate will not expire
	I0328 00:29:08.752123 1103152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 00:29:08.757900 1103152 command_runner.go:130] > Certificate will not expire
	I0328 00:29:08.758150 1103152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 00:29:08.763857 1103152 command_runner.go:130] > Certificate will not expire
	I0328 00:29:08.763950 1103152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 00:29:08.769597 1103152 command_runner.go:130] > Certificate will not expire
	I0328 00:29:08.769654 1103152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 00:29:08.775143 1103152 command_runner.go:130] > Certificate will not expire
	I0328 00:29:08.775303 1103152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0328 00:29:08.780763 1103152 command_runner.go:130] > Certificate will not expire
	I0328 00:29:08.780986 1103152 kubeadm.go:391] StartCluster: {Name:multinode-200224 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-200224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.88 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.22 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:29:08.781137 1103152 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 00:29:08.781193 1103152 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 00:29:08.820322 1103152 command_runner.go:130] > 9ed073676e722da08dd05f2af2ceea6db672978bf9a7c2166d8cdd0e68c55cf5
	I0328 00:29:08.820346 1103152 command_runner.go:130] > ce9178b60c4e7c1b7a135a15545d1f6069bd804b1ddc2bb6bb7040925e3401a1
	I0328 00:29:08.820352 1103152 command_runner.go:130] > dbe677740910dd5d9206540ecea26aa8ac44bad90beedfeb44189e329f1e7f05
	I0328 00:29:08.820358 1103152 command_runner.go:130] > fb82d42c8f86742edc73783b3dfdef5b9cc4658c3ddc375a06b5ea65bd837da5
	I0328 00:29:08.820363 1103152 command_runner.go:130] > 68ae2f434f3deb10aa0a571899142907e2801ba5da048845b37822888969b398
	I0328 00:29:08.820368 1103152 command_runner.go:130] > 0e309dc4a326fe31f7f6c1bfc6da7ab8ff22862a35eba9e7aa9b4ff15b9737b6
	I0328 00:29:08.820374 1103152 command_runner.go:130] > 3fccdc262ed43923c10e7dc2c2d3f31d086e8cae5e8a1adf35c700e71ac085c6
	I0328 00:29:08.820382 1103152 command_runner.go:130] > 6fbf200e2f5995f89adef7939641ef941f9f8ac8e54c7d64c2bf74baed49b800
	I0328 00:29:08.821777 1103152 cri.go:89] found id: "9ed073676e722da08dd05f2af2ceea6db672978bf9a7c2166d8cdd0e68c55cf5"
	I0328 00:29:08.821792 1103152 cri.go:89] found id: "ce9178b60c4e7c1b7a135a15545d1f6069bd804b1ddc2bb6bb7040925e3401a1"
	I0328 00:29:08.821796 1103152 cri.go:89] found id: "dbe677740910dd5d9206540ecea26aa8ac44bad90beedfeb44189e329f1e7f05"
	I0328 00:29:08.821800 1103152 cri.go:89] found id: "fb82d42c8f86742edc73783b3dfdef5b9cc4658c3ddc375a06b5ea65bd837da5"
	I0328 00:29:08.821802 1103152 cri.go:89] found id: "68ae2f434f3deb10aa0a571899142907e2801ba5da048845b37822888969b398"
	I0328 00:29:08.821805 1103152 cri.go:89] found id: "0e309dc4a326fe31f7f6c1bfc6da7ab8ff22862a35eba9e7aa9b4ff15b9737b6"
	I0328 00:29:08.821808 1103152 cri.go:89] found id: "3fccdc262ed43923c10e7dc2c2d3f31d086e8cae5e8a1adf35c700e71ac085c6"
	I0328 00:29:08.821811 1103152 cri.go:89] found id: "6fbf200e2f5995f89adef7939641ef941f9f8ac8e54c7d64c2bf74baed49b800"
	I0328 00:29:08.821813 1103152 cri.go:89] found id: ""
	I0328 00:29:08.821854 1103152 ssh_runner.go:195] Run: sudo runc list -f json
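	(Note for readers reproducing this step: the container discovery logged above by cri.go can be approximated directly on the node. Below is a minimal Go sketch, assuming it is run on the minikube node itself with crictl on PATH and passwordless sudo available; it only illustrates the same crictl invocation seen in the log and is not minikube's actual cri.go implementation.)

	// list_kube_system.go - approximate the container listing seen above.
	// Assumptions: run on the minikube node, crictl on PATH, passwordless sudo.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same filter minikube used: all containers (-a), IDs only (--quiet),
		// restricted to pods in kube-system via the CRI pod-namespace label.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		// One container ID per line of output, mirroring the "found id:" lines.
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}

	(Run on the control-plane node, this should print the same eight kube-system container IDs that appear in the "cri.go:89] found id:" lines above.)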
	
	
	==> CRI-O <==
	Mar 28 00:33:03 multinode-200224 crio[2841]: time="2024-03-28 00:33:03.888409867Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711585983888385858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13f4d36f-30e7-4e75-b5c0-03955d29b52c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:33:03 multinode-200224 crio[2841]: time="2024-03-28 00:33:03.889308105Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e88e5fed-1d89-4775-ae4c-beaf943825a9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:33:03 multinode-200224 crio[2841]: time="2024-03-28 00:33:03.889385138Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e88e5fed-1d89-4775-ae4c-beaf943825a9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:33:03 multinode-200224 crio[2841]: time="2024-03-28 00:33:03.889741808Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e48d2ff176f073d38e6ad1fd628ec8a718efffa03851bab7a22dae283a54b95d,PodSandboxId:8bec63c7bb1e58c7d00d3098dd7a3bfabcb829e07757ccc519a45f98731340f4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711585789658607598,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-4mbrk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a88eae05-06c4-4a76-9f77-af448b7c0704,},Annotations:map[string]string{io.kubernetes.container.hash: da0006fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b533e6c726ea6ed1277d347af17f080ce2afc86504a3b08d2b06b561fdce86e7,PodSandboxId:42b54cf59b877cf418cfe9dd5e8db1794aefa4223d9548fda611f535a8836600,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711585756181887042,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncgjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30a4f6bf-5542-476f-b1af-837031a00c50,},Annotations:map[string]string{io.kubernetes.container.hash: d1b55d19,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc9e1fd5fd94b80983e60854d25fe1478e0bc1049691b83957baa96fa04543e,PodSandboxId:4d9db886d6f8c3b35a2feca27f26ed2f26f2ee746d53e25ab202e8185c320a84,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711585756108419330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-g5sdz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0e7db0-a552-4642-825f-da6ee01e6121,},Annotations:map[string]string{io.kubernetes.container.hash: 88796073,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81447b81bf61942708a9748573bd5dd9ca1d2c1891deb60362255634ed437cf1,PodSandboxId:b7913ac412fa3a662b3f635255c1a467a739f5f07d13fe47e08df1a004f99d25,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711585756033916246,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd5c912-2288-420d-a7a2-d73f2c34a5ed,},A
nnotations:map[string]string{io.kubernetes.container.hash: 35ee7de3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c3c1988d5ad53bff3254f6b5fbcb07e9d341c25390a721f9cb15c63e302898,PodSandboxId:8f3debc7fde1148605dc96dad0bf3d49f7387b139ed1b2fe10f0b4d0e9f3d68e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711585755993777083,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p2g9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae4313c7-a926-4da8-bfc2-7dae1422d958,},Annotations:map[string]string{io.k
ubernetes.container.hash: b48261f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:767d65958abd9c1074e5b189fbc2524166f7f1c3a91990207131a500a1efdd7f,PodSandboxId:75462da1806aef166c05438ba0310317dddeb940660721bebc5d8066293c6ba3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711585751111073258,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c865a9c1ae98bee042cbdef78ac1661e,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013303f3ea31fd8274cb3bf3859cfa6cbc15563590eb945d675b932bac6c3efb,PodSandboxId:026825fe8f8b7e6c6ef6d86c33a20708c10c88489447078a1bec9e86518d24c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711585751068690127,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 439c378fd07501a8dead5aed861f13e7,},Annotations:map[string]string{io.kubernetes.container.hash: c5340b9,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a9b4858d44e5afb56269a375649980f786d727b89ff9556550e692636fab82,PodSandboxId:448b875a38262bdf20fb8c0d242c65b5b3bc059b8cc02d268905cafd5eb95bde,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711585751060523840,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db39c19d4710f792ba253c58204b3fd4,},Annotations:map[string]string{io.kubernetes.container.hash: d593ec80,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a59125a9daa4c893fd45d629c12fd055db5fb6cbfc3f0205e1b7c2789a83f8da,PodSandboxId:b9a010a3fb716893ab564b2b70f1f78881ab631cc9bf84f91f59f0235f95422f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711585751027374603,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32db6766711b42ca248c705dc74d448e,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400e13249ff22f6f5fff380a9b09989689add62a554466af727eda89c89d5a8a,PodSandboxId:28401e2f6084f5204fe5c79b6fad4857507ed38493632dcd5793de7ddd053561,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711585440515353167,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-4mbrk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a88eae05-06c4-4a76-9f77-af448b7c0704,},Annotations:map[string]string{io.kubernetes.container.hash: da0006fa,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ed073676e722da08dd05f2af2ceea6db672978bf9a7c2166d8cdd0e68c55cf5,PodSandboxId:ee7f59bd3e83ff182dc2870085e8dde1ac0476f4ed5c212895a2b54a7b1d2145,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711585393172390704,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-g5sdz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0e7db0-a552-4642-825f-da6ee01e6121,},Annotations:map[string]string{io.kubernetes.container.hash: 88796073,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9178b60c4e7c1b7a135a15545d1f6069bd804b1ddc2bb6bb7040925e3401a1,PodSandboxId:ed4a7c41ab819bc8c36d7e2ab25a2a12997d2920f2f2fb3afdbb263c586ec4a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711585393079508402,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 3bd5c912-2288-420d-a7a2-d73f2c34a5ed,},Annotations:map[string]string{io.kubernetes.container.hash: 35ee7de3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe677740910dd5d9206540ecea26aa8ac44bad90beedfeb44189e329f1e7f05,PodSandboxId:ea1011de4d4b6657be5634eb7131426dfaf71adda1e016d49136887bbc0d7a9a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711585391513010402,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncgjv,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 30a4f6bf-5542-476f-b1af-837031a00c50,},Annotations:map[string]string{io.kubernetes.container.hash: d1b55d19,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb82d42c8f86742edc73783b3dfdef5b9cc4658c3ddc375a06b5ea65bd837da5,PodSandboxId:8e1d52aa1272a7dffa0d0477292d339dd15d53962c6c3a485e7ba027258d0cd5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711585391274860794,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p2g9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae4313c7-a926-4da8-bfc
2-7dae1422d958,},Annotations:map[string]string{io.kubernetes.container.hash: b48261f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ae2f434f3deb10aa0a571899142907e2801ba5da048845b37822888969b398,PodSandboxId:fd28139122e292890b1d14a5668c359edc911ba1328ccd09054fbe9ee9099a59,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711585371292676447,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db39c19d4710f792ba253c58204b3fd4,
},Annotations:map[string]string{io.kubernetes.container.hash: d593ec80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e309dc4a326fe31f7f6c1bfc6da7ab8ff22862a35eba9e7aa9b4ff15b9737b6,PodSandboxId:caae122c7bfc36f8d133510bf3080172efebf3bc11889137437f0b9c2716cf79,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711585371277974289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 439c378fd07501a8dead5aed861f13e7,},Annotations:map[string]string{io.kubernetes
.container.hash: c5340b9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fbf200e2f5995f89adef7939641ef941f9f8ac8e54c7d64c2bf74baed49b800,PodSandboxId:5fef10ef65dd3ba44f4c351e380d6247428dbf047e9cca78d17d3928a7e8f0e5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711585371223017912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32db6766711b42ca248c705dc74d448e,},Annotations:map[string]string{io
.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fccdc262ed43923c10e7dc2c2d3f31d086e8cae5e8a1adf35c700e71ac085c6,PodSandboxId:3f002ba61607eb1d8647387d6277f7b6391e7e06d60cc4cc049fed995ef98eb4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711585371223349476,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c865a9c1ae98bee042cbdef78ac1661e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e88e5fed-1d89-4775-ae4c-beaf943825a9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:33:03 multinode-200224 crio[2841]: time="2024-03-28 00:33:03.933925181Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cfa6477e-260a-459d-b0d5-aa1d58bb7feb name=/runtime.v1.RuntimeService/Version
	Mar 28 00:33:03 multinode-200224 crio[2841]: time="2024-03-28 00:33:03.934024913Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cfa6477e-260a-459d-b0d5-aa1d58bb7feb name=/runtime.v1.RuntimeService/Version
	Mar 28 00:33:03 multinode-200224 crio[2841]: time="2024-03-28 00:33:03.935557925Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2baaf777-2dbe-40f1-b577-37825085bcb5 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:33:03 multinode-200224 crio[2841]: time="2024-03-28 00:33:03.936227006Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711585983936198993,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2baaf777-2dbe-40f1-b577-37825085bcb5 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:33:03 multinode-200224 crio[2841]: time="2024-03-28 00:33:03.936827092Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=51372f70-6131-40aa-a170-24518fa60568 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:33:03 multinode-200224 crio[2841]: time="2024-03-28 00:33:03.936906405Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=51372f70-6131-40aa-a170-24518fa60568 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:33:03 multinode-200224 crio[2841]: time="2024-03-28 00:33:03.937640879Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e48d2ff176f073d38e6ad1fd628ec8a718efffa03851bab7a22dae283a54b95d,PodSandboxId:8bec63c7bb1e58c7d00d3098dd7a3bfabcb829e07757ccc519a45f98731340f4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711585789658607598,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-4mbrk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a88eae05-06c4-4a76-9f77-af448b7c0704,},Annotations:map[string]string{io.kubernetes.container.hash: da0006fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b533e6c726ea6ed1277d347af17f080ce2afc86504a3b08d2b06b561fdce86e7,PodSandboxId:42b54cf59b877cf418cfe9dd5e8db1794aefa4223d9548fda611f535a8836600,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711585756181887042,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncgjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30a4f6bf-5542-476f-b1af-837031a00c50,},Annotations:map[string]string{io.kubernetes.container.hash: d1b55d19,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc9e1fd5fd94b80983e60854d25fe1478e0bc1049691b83957baa96fa04543e,PodSandboxId:4d9db886d6f8c3b35a2feca27f26ed2f26f2ee746d53e25ab202e8185c320a84,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711585756108419330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-g5sdz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0e7db0-a552-4642-825f-da6ee01e6121,},Annotations:map[string]string{io.kubernetes.container.hash: 88796073,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81447b81bf61942708a9748573bd5dd9ca1d2c1891deb60362255634ed437cf1,PodSandboxId:b7913ac412fa3a662b3f635255c1a467a739f5f07d13fe47e08df1a004f99d25,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711585756033916246,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd5c912-2288-420d-a7a2-d73f2c34a5ed,},A
nnotations:map[string]string{io.kubernetes.container.hash: 35ee7de3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c3c1988d5ad53bff3254f6b5fbcb07e9d341c25390a721f9cb15c63e302898,PodSandboxId:8f3debc7fde1148605dc96dad0bf3d49f7387b139ed1b2fe10f0b4d0e9f3d68e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711585755993777083,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p2g9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae4313c7-a926-4da8-bfc2-7dae1422d958,},Annotations:map[string]string{io.k
ubernetes.container.hash: b48261f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:767d65958abd9c1074e5b189fbc2524166f7f1c3a91990207131a500a1efdd7f,PodSandboxId:75462da1806aef166c05438ba0310317dddeb940660721bebc5d8066293c6ba3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711585751111073258,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c865a9c1ae98bee042cbdef78ac1661e,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013303f3ea31fd8274cb3bf3859cfa6cbc15563590eb945d675b932bac6c3efb,PodSandboxId:026825fe8f8b7e6c6ef6d86c33a20708c10c88489447078a1bec9e86518d24c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711585751068690127,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 439c378fd07501a8dead5aed861f13e7,},Annotations:map[string]string{io.kubernetes.container.hash: c5340b9,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a9b4858d44e5afb56269a375649980f786d727b89ff9556550e692636fab82,PodSandboxId:448b875a38262bdf20fb8c0d242c65b5b3bc059b8cc02d268905cafd5eb95bde,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711585751060523840,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db39c19d4710f792ba253c58204b3fd4,},Annotations:map[string]string{io.kubernetes.container.hash: d593ec80,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a59125a9daa4c893fd45d629c12fd055db5fb6cbfc3f0205e1b7c2789a83f8da,PodSandboxId:b9a010a3fb716893ab564b2b70f1f78881ab631cc9bf84f91f59f0235f95422f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711585751027374603,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32db6766711b42ca248c705dc74d448e,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400e13249ff22f6f5fff380a9b09989689add62a554466af727eda89c89d5a8a,PodSandboxId:28401e2f6084f5204fe5c79b6fad4857507ed38493632dcd5793de7ddd053561,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711585440515353167,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-4mbrk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a88eae05-06c4-4a76-9f77-af448b7c0704,},Annotations:map[string]string{io.kubernetes.container.hash: da0006fa,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ed073676e722da08dd05f2af2ceea6db672978bf9a7c2166d8cdd0e68c55cf5,PodSandboxId:ee7f59bd3e83ff182dc2870085e8dde1ac0476f4ed5c212895a2b54a7b1d2145,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711585393172390704,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-g5sdz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0e7db0-a552-4642-825f-da6ee01e6121,},Annotations:map[string]string{io.kubernetes.container.hash: 88796073,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9178b60c4e7c1b7a135a15545d1f6069bd804b1ddc2bb6bb7040925e3401a1,PodSandboxId:ed4a7c41ab819bc8c36d7e2ab25a2a12997d2920f2f2fb3afdbb263c586ec4a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711585393079508402,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 3bd5c912-2288-420d-a7a2-d73f2c34a5ed,},Annotations:map[string]string{io.kubernetes.container.hash: 35ee7de3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe677740910dd5d9206540ecea26aa8ac44bad90beedfeb44189e329f1e7f05,PodSandboxId:ea1011de4d4b6657be5634eb7131426dfaf71adda1e016d49136887bbc0d7a9a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711585391513010402,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncgjv,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 30a4f6bf-5542-476f-b1af-837031a00c50,},Annotations:map[string]string{io.kubernetes.container.hash: d1b55d19,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb82d42c8f86742edc73783b3dfdef5b9cc4658c3ddc375a06b5ea65bd837da5,PodSandboxId:8e1d52aa1272a7dffa0d0477292d339dd15d53962c6c3a485e7ba027258d0cd5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711585391274860794,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p2g9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae4313c7-a926-4da8-bfc
2-7dae1422d958,},Annotations:map[string]string{io.kubernetes.container.hash: b48261f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ae2f434f3deb10aa0a571899142907e2801ba5da048845b37822888969b398,PodSandboxId:fd28139122e292890b1d14a5668c359edc911ba1328ccd09054fbe9ee9099a59,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711585371292676447,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db39c19d4710f792ba253c58204b3fd4,
},Annotations:map[string]string{io.kubernetes.container.hash: d593ec80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e309dc4a326fe31f7f6c1bfc6da7ab8ff22862a35eba9e7aa9b4ff15b9737b6,PodSandboxId:caae122c7bfc36f8d133510bf3080172efebf3bc11889137437f0b9c2716cf79,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711585371277974289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 439c378fd07501a8dead5aed861f13e7,},Annotations:map[string]string{io.kubernetes
.container.hash: c5340b9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fbf200e2f5995f89adef7939641ef941f9f8ac8e54c7d64c2bf74baed49b800,PodSandboxId:5fef10ef65dd3ba44f4c351e380d6247428dbf047e9cca78d17d3928a7e8f0e5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711585371223017912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32db6766711b42ca248c705dc74d448e,},Annotations:map[string]string{io
.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fccdc262ed43923c10e7dc2c2d3f31d086e8cae5e8a1adf35c700e71ac085c6,PodSandboxId:3f002ba61607eb1d8647387d6277f7b6391e7e06d60cc4cc049fed995ef98eb4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711585371223349476,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c865a9c1ae98bee042cbdef78ac1661e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=51372f70-6131-40aa-a170-24518fa60568 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:33:03 multinode-200224 crio[2841]: time="2024-03-28 00:33:03.985466137Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0352995c-65eb-4c49-8118-7847a80de73a name=/runtime.v1.RuntimeService/Version
	Mar 28 00:33:03 multinode-200224 crio[2841]: time="2024-03-28 00:33:03.985559461Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0352995c-65eb-4c49-8118-7847a80de73a name=/runtime.v1.RuntimeService/Version
	Mar 28 00:33:03 multinode-200224 crio[2841]: time="2024-03-28 00:33:03.987310066Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e6a67c50-2e62-4f82-805c-3d7b86edf35e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:33:03 multinode-200224 crio[2841]: time="2024-03-28 00:33:03.987739457Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711585983987716216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e6a67c50-2e62-4f82-805c-3d7b86edf35e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:33:03 multinode-200224 crio[2841]: time="2024-03-28 00:33:03.988546635Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=205280b9-04f5-41b8-a9d8-082c0da8b1b4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:33:03 multinode-200224 crio[2841]: time="2024-03-28 00:33:03.988867523Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=205280b9-04f5-41b8-a9d8-082c0da8b1b4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:33:03 multinode-200224 crio[2841]: time="2024-03-28 00:33:03.989874155Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e48d2ff176f073d38e6ad1fd628ec8a718efffa03851bab7a22dae283a54b95d,PodSandboxId:8bec63c7bb1e58c7d00d3098dd7a3bfabcb829e07757ccc519a45f98731340f4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711585789658607598,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-4mbrk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a88eae05-06c4-4a76-9f77-af448b7c0704,},Annotations:map[string]string{io.kubernetes.container.hash: da0006fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b533e6c726ea6ed1277d347af17f080ce2afc86504a3b08d2b06b561fdce86e7,PodSandboxId:42b54cf59b877cf418cfe9dd5e8db1794aefa4223d9548fda611f535a8836600,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711585756181887042,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncgjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30a4f6bf-5542-476f-b1af-837031a00c50,},Annotations:map[string]string{io.kubernetes.container.hash: d1b55d19,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc9e1fd5fd94b80983e60854d25fe1478e0bc1049691b83957baa96fa04543e,PodSandboxId:4d9db886d6f8c3b35a2feca27f26ed2f26f2ee746d53e25ab202e8185c320a84,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711585756108419330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-g5sdz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0e7db0-a552-4642-825f-da6ee01e6121,},Annotations:map[string]string{io.kubernetes.container.hash: 88796073,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81447b81bf61942708a9748573bd5dd9ca1d2c1891deb60362255634ed437cf1,PodSandboxId:b7913ac412fa3a662b3f635255c1a467a739f5f07d13fe47e08df1a004f99d25,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711585756033916246,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd5c912-2288-420d-a7a2-d73f2c34a5ed,},A
nnotations:map[string]string{io.kubernetes.container.hash: 35ee7de3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c3c1988d5ad53bff3254f6b5fbcb07e9d341c25390a721f9cb15c63e302898,PodSandboxId:8f3debc7fde1148605dc96dad0bf3d49f7387b139ed1b2fe10f0b4d0e9f3d68e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711585755993777083,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p2g9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae4313c7-a926-4da8-bfc2-7dae1422d958,},Annotations:map[string]string{io.k
ubernetes.container.hash: b48261f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:767d65958abd9c1074e5b189fbc2524166f7f1c3a91990207131a500a1efdd7f,PodSandboxId:75462da1806aef166c05438ba0310317dddeb940660721bebc5d8066293c6ba3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711585751111073258,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c865a9c1ae98bee042cbdef78ac1661e,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013303f3ea31fd8274cb3bf3859cfa6cbc15563590eb945d675b932bac6c3efb,PodSandboxId:026825fe8f8b7e6c6ef6d86c33a20708c10c88489447078a1bec9e86518d24c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711585751068690127,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 439c378fd07501a8dead5aed861f13e7,},Annotations:map[string]string{io.kubernetes.container.hash: c5340b9,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a9b4858d44e5afb56269a375649980f786d727b89ff9556550e692636fab82,PodSandboxId:448b875a38262bdf20fb8c0d242c65b5b3bc059b8cc02d268905cafd5eb95bde,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711585751060523840,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db39c19d4710f792ba253c58204b3fd4,},Annotations:map[string]string{io.kubernetes.container.hash: d593ec80,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a59125a9daa4c893fd45d629c12fd055db5fb6cbfc3f0205e1b7c2789a83f8da,PodSandboxId:b9a010a3fb716893ab564b2b70f1f78881ab631cc9bf84f91f59f0235f95422f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711585751027374603,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32db6766711b42ca248c705dc74d448e,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400e13249ff22f6f5fff380a9b09989689add62a554466af727eda89c89d5a8a,PodSandboxId:28401e2f6084f5204fe5c79b6fad4857507ed38493632dcd5793de7ddd053561,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711585440515353167,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-4mbrk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a88eae05-06c4-4a76-9f77-af448b7c0704,},Annotations:map[string]string{io.kubernetes.container.hash: da0006fa,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ed073676e722da08dd05f2af2ceea6db672978bf9a7c2166d8cdd0e68c55cf5,PodSandboxId:ee7f59bd3e83ff182dc2870085e8dde1ac0476f4ed5c212895a2b54a7b1d2145,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711585393172390704,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-g5sdz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0e7db0-a552-4642-825f-da6ee01e6121,},Annotations:map[string]string{io.kubernetes.container.hash: 88796073,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9178b60c4e7c1b7a135a15545d1f6069bd804b1ddc2bb6bb7040925e3401a1,PodSandboxId:ed4a7c41ab819bc8c36d7e2ab25a2a12997d2920f2f2fb3afdbb263c586ec4a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711585393079508402,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 3bd5c912-2288-420d-a7a2-d73f2c34a5ed,},Annotations:map[string]string{io.kubernetes.container.hash: 35ee7de3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe677740910dd5d9206540ecea26aa8ac44bad90beedfeb44189e329f1e7f05,PodSandboxId:ea1011de4d4b6657be5634eb7131426dfaf71adda1e016d49136887bbc0d7a9a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711585391513010402,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncgjv,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 30a4f6bf-5542-476f-b1af-837031a00c50,},Annotations:map[string]string{io.kubernetes.container.hash: d1b55d19,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb82d42c8f86742edc73783b3dfdef5b9cc4658c3ddc375a06b5ea65bd837da5,PodSandboxId:8e1d52aa1272a7dffa0d0477292d339dd15d53962c6c3a485e7ba027258d0cd5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711585391274860794,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p2g9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae4313c7-a926-4da8-bfc
2-7dae1422d958,},Annotations:map[string]string{io.kubernetes.container.hash: b48261f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ae2f434f3deb10aa0a571899142907e2801ba5da048845b37822888969b398,PodSandboxId:fd28139122e292890b1d14a5668c359edc911ba1328ccd09054fbe9ee9099a59,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711585371292676447,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db39c19d4710f792ba253c58204b3fd4,
},Annotations:map[string]string{io.kubernetes.container.hash: d593ec80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e309dc4a326fe31f7f6c1bfc6da7ab8ff22862a35eba9e7aa9b4ff15b9737b6,PodSandboxId:caae122c7bfc36f8d133510bf3080172efebf3bc11889137437f0b9c2716cf79,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711585371277974289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 439c378fd07501a8dead5aed861f13e7,},Annotations:map[string]string{io.kubernetes
.container.hash: c5340b9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fbf200e2f5995f89adef7939641ef941f9f8ac8e54c7d64c2bf74baed49b800,PodSandboxId:5fef10ef65dd3ba44f4c351e380d6247428dbf047e9cca78d17d3928a7e8f0e5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711585371223017912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32db6766711b42ca248c705dc74d448e,},Annotations:map[string]string{io
.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fccdc262ed43923c10e7dc2c2d3f31d086e8cae5e8a1adf35c700e71ac085c6,PodSandboxId:3f002ba61607eb1d8647387d6277f7b6391e7e06d60cc4cc049fed995ef98eb4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711585371223349476,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c865a9c1ae98bee042cbdef78ac1661e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=205280b9-04f5-41b8-a9d8-082c0da8b1b4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:33:04 multinode-200224 crio[2841]: time="2024-03-28 00:33:04.033961223Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9086aa4a-4733-45e8-97d8-ed42422c7c47 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:33:04 multinode-200224 crio[2841]: time="2024-03-28 00:33:04.034055346Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9086aa4a-4733-45e8-97d8-ed42422c7c47 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:33:04 multinode-200224 crio[2841]: time="2024-03-28 00:33:04.038969267Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dfe76b10-ec67-4762-b7f9-5f9f3c3c7a7d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:33:04 multinode-200224 crio[2841]: time="2024-03-28 00:33:04.039489436Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711585984039465246,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dfe76b10-ec67-4762-b7f9-5f9f3c3c7a7d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:33:04 multinode-200224 crio[2841]: time="2024-03-28 00:33:04.040131871Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1003fbf1-fdd5-48be-95f8-53b5c4cdaded name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:33:04 multinode-200224 crio[2841]: time="2024-03-28 00:33:04.040189890Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1003fbf1-fdd5-48be-95f8-53b5c4cdaded name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:33:04 multinode-200224 crio[2841]: time="2024-03-28 00:33:04.040910684Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e48d2ff176f073d38e6ad1fd628ec8a718efffa03851bab7a22dae283a54b95d,PodSandboxId:8bec63c7bb1e58c7d00d3098dd7a3bfabcb829e07757ccc519a45f98731340f4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711585789658607598,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-4mbrk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a88eae05-06c4-4a76-9f77-af448b7c0704,},Annotations:map[string]string{io.kubernetes.container.hash: da0006fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b533e6c726ea6ed1277d347af17f080ce2afc86504a3b08d2b06b561fdce86e7,PodSandboxId:42b54cf59b877cf418cfe9dd5e8db1794aefa4223d9548fda611f535a8836600,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711585756181887042,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncgjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30a4f6bf-5542-476f-b1af-837031a00c50,},Annotations:map[string]string{io.kubernetes.container.hash: d1b55d19,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc9e1fd5fd94b80983e60854d25fe1478e0bc1049691b83957baa96fa04543e,PodSandboxId:4d9db886d6f8c3b35a2feca27f26ed2f26f2ee746d53e25ab202e8185c320a84,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711585756108419330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-g5sdz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0e7db0-a552-4642-825f-da6ee01e6121,},Annotations:map[string]string{io.kubernetes.container.hash: 88796073,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81447b81bf61942708a9748573bd5dd9ca1d2c1891deb60362255634ed437cf1,PodSandboxId:b7913ac412fa3a662b3f635255c1a467a739f5f07d13fe47e08df1a004f99d25,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711585756033916246,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd5c912-2288-420d-a7a2-d73f2c34a5ed,},A
nnotations:map[string]string{io.kubernetes.container.hash: 35ee7de3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c3c1988d5ad53bff3254f6b5fbcb07e9d341c25390a721f9cb15c63e302898,PodSandboxId:8f3debc7fde1148605dc96dad0bf3d49f7387b139ed1b2fe10f0b4d0e9f3d68e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711585755993777083,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p2g9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae4313c7-a926-4da8-bfc2-7dae1422d958,},Annotations:map[string]string{io.k
ubernetes.container.hash: b48261f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:767d65958abd9c1074e5b189fbc2524166f7f1c3a91990207131a500a1efdd7f,PodSandboxId:75462da1806aef166c05438ba0310317dddeb940660721bebc5d8066293c6ba3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711585751111073258,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c865a9c1ae98bee042cbdef78ac1661e,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013303f3ea31fd8274cb3bf3859cfa6cbc15563590eb945d675b932bac6c3efb,PodSandboxId:026825fe8f8b7e6c6ef6d86c33a20708c10c88489447078a1bec9e86518d24c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711585751068690127,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 439c378fd07501a8dead5aed861f13e7,},Annotations:map[string]string{io.kubernetes.container.hash: c5340b9,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a9b4858d44e5afb56269a375649980f786d727b89ff9556550e692636fab82,PodSandboxId:448b875a38262bdf20fb8c0d242c65b5b3bc059b8cc02d268905cafd5eb95bde,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711585751060523840,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db39c19d4710f792ba253c58204b3fd4,},Annotations:map[string]string{io.kubernetes.container.hash: d593ec80,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a59125a9daa4c893fd45d629c12fd055db5fb6cbfc3f0205e1b7c2789a83f8da,PodSandboxId:b9a010a3fb716893ab564b2b70f1f78881ab631cc9bf84f91f59f0235f95422f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711585751027374603,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32db6766711b42ca248c705dc74d448e,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:400e13249ff22f6f5fff380a9b09989689add62a554466af727eda89c89d5a8a,PodSandboxId:28401e2f6084f5204fe5c79b6fad4857507ed38493632dcd5793de7ddd053561,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711585440515353167,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-4mbrk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a88eae05-06c4-4a76-9f77-af448b7c0704,},Annotations:map[string]string{io.kubernetes.container.hash: da0006fa,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ed073676e722da08dd05f2af2ceea6db672978bf9a7c2166d8cdd0e68c55cf5,PodSandboxId:ee7f59bd3e83ff182dc2870085e8dde1ac0476f4ed5c212895a2b54a7b1d2145,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711585393172390704,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-g5sdz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0e7db0-a552-4642-825f-da6ee01e6121,},Annotations:map[string]string{io.kubernetes.container.hash: 88796073,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9178b60c4e7c1b7a135a15545d1f6069bd804b1ddc2bb6bb7040925e3401a1,PodSandboxId:ed4a7c41ab819bc8c36d7e2ab25a2a12997d2920f2f2fb3afdbb263c586ec4a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711585393079508402,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 3bd5c912-2288-420d-a7a2-d73f2c34a5ed,},Annotations:map[string]string{io.kubernetes.container.hash: 35ee7de3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe677740910dd5d9206540ecea26aa8ac44bad90beedfeb44189e329f1e7f05,PodSandboxId:ea1011de4d4b6657be5634eb7131426dfaf71adda1e016d49136887bbc0d7a9a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711585391513010402,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncgjv,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 30a4f6bf-5542-476f-b1af-837031a00c50,},Annotations:map[string]string{io.kubernetes.container.hash: d1b55d19,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb82d42c8f86742edc73783b3dfdef5b9cc4658c3ddc375a06b5ea65bd837da5,PodSandboxId:8e1d52aa1272a7dffa0d0477292d339dd15d53962c6c3a485e7ba027258d0cd5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711585391274860794,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p2g9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae4313c7-a926-4da8-bfc
2-7dae1422d958,},Annotations:map[string]string{io.kubernetes.container.hash: b48261f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ae2f434f3deb10aa0a571899142907e2801ba5da048845b37822888969b398,PodSandboxId:fd28139122e292890b1d14a5668c359edc911ba1328ccd09054fbe9ee9099a59,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711585371292676447,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db39c19d4710f792ba253c58204b3fd4,
},Annotations:map[string]string{io.kubernetes.container.hash: d593ec80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e309dc4a326fe31f7f6c1bfc6da7ab8ff22862a35eba9e7aa9b4ff15b9737b6,PodSandboxId:caae122c7bfc36f8d133510bf3080172efebf3bc11889137437f0b9c2716cf79,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711585371277974289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 439c378fd07501a8dead5aed861f13e7,},Annotations:map[string]string{io.kubernetes
.container.hash: c5340b9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fbf200e2f5995f89adef7939641ef941f9f8ac8e54c7d64c2bf74baed49b800,PodSandboxId:5fef10ef65dd3ba44f4c351e380d6247428dbf047e9cca78d17d3928a7e8f0e5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711585371223017912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32db6766711b42ca248c705dc74d448e,},Annotations:map[string]string{io
.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fccdc262ed43923c10e7dc2c2d3f31d086e8cae5e8a1adf35c700e71ac085c6,PodSandboxId:3f002ba61607eb1d8647387d6277f7b6391e7e06d60cc4cc049fed995ef98eb4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711585371223349476,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-200224,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c865a9c1ae98bee042cbdef78ac1661e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1003fbf1-fdd5-48be-95f8-53b5c4cdaded name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e48d2ff176f07       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   8bec63c7bb1e5       busybox-7fdf7869d9-4mbrk
	b533e6c726ea6       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   42b54cf59b877       kindnet-ncgjv
	4bc9e1fd5fd94       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   4d9db886d6f8c       coredns-76f75df574-g5sdz
	81447b81bf619       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   b7913ac412fa3       storage-provisioner
	46c3c1988d5ad       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      3 minutes ago       Running             kube-proxy                1                   8f3debc7fde11       kube-proxy-p2g9p
	767d65958abd9       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      3 minutes ago       Running             kube-scheduler            1                   75462da1806ae       kube-scheduler-multinode-200224
	013303f3ea31f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   026825fe8f8b7       etcd-multinode-200224
	b3a9b4858d44e       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      3 minutes ago       Running             kube-apiserver            1                   448b875a38262       kube-apiserver-multinode-200224
	a59125a9daa4c       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      3 minutes ago       Running             kube-controller-manager   1                   b9a010a3fb716       kube-controller-manager-multinode-200224
	400e13249ff22       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   28401e2f6084f       busybox-7fdf7869d9-4mbrk
	9ed073676e722       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   ee7f59bd3e83f       coredns-76f75df574-g5sdz
	ce9178b60c4e7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   ed4a7c41ab819       storage-provisioner
	dbe677740910d       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      9 minutes ago       Exited              kindnet-cni               0                   ea1011de4d4b6       kindnet-ncgjv
	fb82d42c8f867       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      9 minutes ago       Exited              kube-proxy                0                   8e1d52aa1272a       kube-proxy-p2g9p
	68ae2f434f3de       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      10 minutes ago      Exited              kube-apiserver            0                   fd28139122e29       kube-apiserver-multinode-200224
	0e309dc4a326f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   caae122c7bfc3       etcd-multinode-200224
	3fccdc262ed43       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      10 minutes ago      Exited              kube-scheduler            0                   3f002ba61607e       kube-scheduler-multinode-200224
	6fbf200e2f599       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      10 minutes ago      Exited              kube-controller-manager   0                   5fef10ef65dd3       kube-controller-manager-multinode-200224
	
	
	==> coredns [4bc9e1fd5fd94b80983e60854d25fe1478e0bc1049691b83957baa96fa04543e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50673 - 49026 "HINFO IN 3489849291402505905.2552829470997845361. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018228535s
	
	
	==> coredns [9ed073676e722da08dd05f2af2ceea6db672978bf9a7c2166d8cdd0e68c55cf5] <==
	[INFO] 10.244.0.3:40165 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001805092s
	[INFO] 10.244.0.3:58145 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095334s
	[INFO] 10.244.0.3:37683 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000038462s
	[INFO] 10.244.0.3:58166 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001392646s
	[INFO] 10.244.0.3:58146 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000142948s
	[INFO] 10.244.0.3:34408 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000057186s
	[INFO] 10.244.0.3:32874 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079804s
	[INFO] 10.244.1.2:48151 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000266274s
	[INFO] 10.244.1.2:59335 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114137s
	[INFO] 10.244.1.2:44040 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158796s
	[INFO] 10.244.1.2:43600 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000166812s
	[INFO] 10.244.0.3:47520 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000081783s
	[INFO] 10.244.0.3:40507 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177821s
	[INFO] 10.244.0.3:45723 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093065s
	[INFO] 10.244.0.3:44022 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114454s
	[INFO] 10.244.1.2:40139 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177278s
	[INFO] 10.244.1.2:53375 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000227157s
	[INFO] 10.244.1.2:59766 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000141481s
	[INFO] 10.244.1.2:56308 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00017968s
	[INFO] 10.244.0.3:57752 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000228663s
	[INFO] 10.244.0.3:47439 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000141543s
	[INFO] 10.244.0.3:55317 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000196663s
	[INFO] 10.244.0.3:60667 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000123004s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-200224
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-200224
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=multinode-200224
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_28T00_22_57_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 00:22:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-200224
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 00:32:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 00:29:14 +0000   Thu, 28 Mar 2024 00:22:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 00:29:14 +0000   Thu, 28 Mar 2024 00:22:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 00:29:14 +0000   Thu, 28 Mar 2024 00:22:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 00:29:14 +0000   Thu, 28 Mar 2024 00:23:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.88
	  Hostname:    multinode-200224
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 20370049e416440389c3dd654a8f9e60
	  System UUID:                20370049-e416-4403-89c3-dd654a8f9e60
	  Boot ID:                    0cc20c18-f5e7-47f6-b7fe-26dd73344a27
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-4mbrk                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 coredns-76f75df574-g5sdz                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m55s
	  kube-system                 etcd-multinode-200224                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-ncgjv                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m55s
	  kube-system                 kube-apiserver-multinode-200224             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-200224    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-p2g9p                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 kube-scheduler-multinode-200224             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m52s                  kube-proxy       
	  Normal  Starting                 3m47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node multinode-200224 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node multinode-200224 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node multinode-200224 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-200224 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-200224 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-200224 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m55s                  node-controller  Node multinode-200224 event: Registered Node multinode-200224 in Controller
	  Normal  NodeReady                9m52s                  kubelet          Node multinode-200224 status is now: NodeReady
	  Normal  Starting                 3m54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m54s (x8 over 3m54s)  kubelet          Node multinode-200224 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m54s (x8 over 3m54s)  kubelet          Node multinode-200224 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m54s (x7 over 3m54s)  kubelet          Node multinode-200224 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m37s                  node-controller  Node multinode-200224 event: Registered Node multinode-200224 in Controller
	
	
	Name:               multinode-200224-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-200224-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=multinode-200224
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_28T00_29_58_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 00:29:57 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-200224-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 00:30:38 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 28 Mar 2024 00:30:28 +0000   Thu, 28 Mar 2024 00:31:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 28 Mar 2024 00:30:28 +0000   Thu, 28 Mar 2024 00:31:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 28 Mar 2024 00:30:28 +0000   Thu, 28 Mar 2024 00:31:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 28 Mar 2024 00:30:28 +0000   Thu, 28 Mar 2024 00:31:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.161
	  Hostname:    multinode-200224-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9c96bf1424864470866d5d24da138300
	  System UUID:                9c96bf14-2486-4470-866d-5d24da138300
	  Boot ID:                    352ad25b-71f6-45c7-b2a9-86468eea75fa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-x4g8t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	  kube-system                 kindnet-fdhcl               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m19s
	  kube-system                 kube-proxy-pgph8            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m14s                  kube-proxy       
	  Normal  Starting                 3m2s                   kube-proxy       
	  Normal  NodeHasNoDiskPressure    9m19s (x2 over 9m19s)  kubelet          Node multinode-200224-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s (x2 over 9m19s)  kubelet          Node multinode-200224-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m19s (x2 over 9m19s)  kubelet          Node multinode-200224-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                9m10s                  kubelet          Node multinode-200224-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m7s (x2 over 3m7s)    kubelet          Node multinode-200224-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m7s (x2 over 3m7s)    kubelet          Node multinode-200224-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m7s (x2 over 3m7s)    kubelet          Node multinode-200224-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m2s                   node-controller  Node multinode-200224-m02 event: Registered Node multinode-200224-m02 in Controller
	  Normal  NodeReady                2m57s                  kubelet          Node multinode-200224-m02 status is now: NodeReady
	  Normal  NodeNotReady             102s                   node-controller  Node multinode-200224-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.071448] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.174544] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.132653] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.273145] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.481088] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +0.066475] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.086709] systemd-fstab-generator[955]: Ignoring "noauto" option for root device
	[  +0.065307] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.701905] systemd-fstab-generator[1289]: Ignoring "noauto" option for root device
	[  +0.081652] kauditd_printk_skb: 69 callbacks suppressed
	[Mar28 00:23] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.128553] systemd-fstab-generator[1477]: Ignoring "noauto" option for root device
	[ +48.394690] kauditd_printk_skb: 82 callbacks suppressed
	[Mar28 00:29] systemd-fstab-generator[2758]: Ignoring "noauto" option for root device
	[  +0.145638] systemd-fstab-generator[2770]: Ignoring "noauto" option for root device
	[  +0.184336] systemd-fstab-generator[2786]: Ignoring "noauto" option for root device
	[  +0.141313] systemd-fstab-generator[2798]: Ignoring "noauto" option for root device
	[  +0.325205] systemd-fstab-generator[2826]: Ignoring "noauto" option for root device
	[  +3.906936] systemd-fstab-generator[2926]: Ignoring "noauto" option for root device
	[  +1.860741] systemd-fstab-generator[3052]: Ignoring "noauto" option for root device
	[  +0.085858] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.714290] kauditd_printk_skb: 52 callbacks suppressed
	[ +11.261728] kauditd_printk_skb: 32 callbacks suppressed
	[  +4.265730] systemd-fstab-generator[3872]: Ignoring "noauto" option for root device
	[ +18.199637] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [013303f3ea31fd8274cb3bf3859cfa6cbc15563590eb945d675b932bac6c3efb] <==
	{"level":"info","ts":"2024-03-28T00:29:11.718317Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-28T00:29:11.718328Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-28T00:29:11.718622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af switched to configuration voters=(12253120571151802799)"}
	{"level":"info","ts":"2024-03-28T00:29:11.718703Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9f9d2ecdb39156b6","local-member-id":"aa0bd43d5988e1af","added-peer-id":"aa0bd43d5988e1af","added-peer-peer-urls":["https://192.168.39.88:2380"]}
	{"level":"info","ts":"2024-03-28T00:29:11.718896Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9f9d2ecdb39156b6","local-member-id":"aa0bd43d5988e1af","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T00:29:11.718947Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T00:29:11.731135Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-28T00:29:11.73144Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aa0bd43d5988e1af","initial-advertise-peer-urls":["https://192.168.39.88:2380"],"listen-peer-urls":["https://192.168.39.88:2380"],"advertise-client-urls":["https://192.168.39.88:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.88:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-28T00:29:11.736871Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-28T00:29:11.737004Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.88:2380"}
	{"level":"info","ts":"2024-03-28T00:29:11.742982Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.88:2380"}
	{"level":"info","ts":"2024-03-28T00:29:13.147197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-28T00:29:13.147314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-28T00:29:13.147381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af received MsgPreVoteResp from aa0bd43d5988e1af at term 2"}
	{"level":"info","ts":"2024-03-28T00:29:13.147435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af became candidate at term 3"}
	{"level":"info","ts":"2024-03-28T00:29:13.147461Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af received MsgVoteResp from aa0bd43d5988e1af at term 3"}
	{"level":"info","ts":"2024-03-28T00:29:13.147488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aa0bd43d5988e1af became leader at term 3"}
	{"level":"info","ts":"2024-03-28T00:29:13.14752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aa0bd43d5988e1af elected leader aa0bd43d5988e1af at term 3"}
	{"level":"info","ts":"2024-03-28T00:29:13.153379Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aa0bd43d5988e1af","local-member-attributes":"{Name:multinode-200224 ClientURLs:[https://192.168.39.88:2379]}","request-path":"/0/members/aa0bd43d5988e1af/attributes","cluster-id":"9f9d2ecdb39156b6","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-28T00:29:13.153486Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T00:29:13.153741Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-28T00:29:13.15385Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-28T00:29:13.153872Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T00:29:13.155838Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.88:2379"}
	{"level":"info","ts":"2024-03-28T00:29:13.156103Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [0e309dc4a326fe31f7f6c1bfc6da7ab8ff22862a35eba9e7aa9b4ff15b9737b6] <==
	{"level":"warn","ts":"2024-03-28T00:23:45.912875Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.15647ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-200224-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-28T00:23:45.912982Z","caller":"traceutil/trace.go:171","msg":"trace[513155614] range","detail":"{range_begin:/registry/csinodes/multinode-200224-m02; range_end:; response_count:0; response_revision:475; }","duration":"248.302011ms","start":"2024-03-28T00:23:45.66467Z","end":"2024-03-28T00:23:45.912972Z","steps":["trace[513155614] 'agreement among raft nodes before linearized reading'  (duration: 248.170488ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T00:23:45.912916Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.694191ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-28T00:23:45.913175Z","caller":"traceutil/trace.go:171","msg":"trace[2139540438] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:475; }","duration":"196.982746ms","start":"2024-03-28T00:23:45.716183Z","end":"2024-03-28T00:23:45.913166Z","steps":["trace[2139540438] 'agreement among raft nodes before linearized reading'  (duration: 196.702156ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-28T00:24:34.876231Z","caller":"traceutil/trace.go:171","msg":"trace[1149993796] transaction","detail":"{read_only:false; response_revision:602; number_of_response:1; }","duration":"237.964811ms","start":"2024-03-28T00:24:34.638234Z","end":"2024-03-28T00:24:34.876199Z","steps":["trace[1149993796] 'process raft request'  (duration: 237.853876ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-28T00:24:34.877391Z","caller":"traceutil/trace.go:171","msg":"trace[1884719100] linearizableReadLoop","detail":"{readStateIndex:634; appliedIndex:633; }","duration":"161.414976ms","start":"2024-03-28T00:24:34.715964Z","end":"2024-03-28T00:24:34.877379Z","steps":["trace[1884719100] 'read index received'  (duration: 160.552208ms)","trace[1884719100] 'applied index is now lower than readState.Index'  (duration: 862.125µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-28T00:24:34.877631Z","caller":"traceutil/trace.go:171","msg":"trace[1173800584] transaction","detail":"{read_only:false; response_revision:603; number_of_response:1; }","duration":"182.925356ms","start":"2024-03-28T00:24:34.694694Z","end":"2024-03-28T00:24:34.877619Z","steps":["trace[1173800584] 'process raft request'  (duration: 182.622641ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T00:24:34.877964Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.981998ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-28T00:24:34.878039Z","caller":"traceutil/trace.go:171","msg":"trace[2087793377] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:603; }","duration":"162.112455ms","start":"2024-03-28T00:24:34.715904Z","end":"2024-03-28T00:24:34.878017Z","steps":["trace[2087793377] 'agreement among raft nodes before linearized reading'  (duration: 161.960396ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T00:24:37.94996Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.222579ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-28T00:24:37.950075Z","caller":"traceutil/trace.go:171","msg":"trace[1546703697] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:634; }","duration":"153.353017ms","start":"2024-03-28T00:24:37.796711Z","end":"2024-03-28T00:24:37.950064Z","steps":["trace[1546703697] 'range keys from in-memory index tree'  (duration: 153.208739ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T00:24:37.950519Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.805898ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16262373470336570209 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:628 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-28T00:24:37.950599Z","caller":"traceutil/trace.go:171","msg":"trace[977788010] transaction","detail":"{read_only:false; response_revision:635; number_of_response:1; }","duration":"172.63094ms","start":"2024-03-28T00:24:37.777958Z","end":"2024-03-28T00:24:37.950588Z","steps":["trace[977788010] 'process raft request'  (duration: 42.477261ms)","trace[977788010] 'compare'  (duration: 129.480573ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-28T00:24:38.229119Z","caller":"traceutil/trace.go:171","msg":"trace[1517688260] transaction","detail":"{read_only:false; response_revision:636; number_of_response:1; }","duration":"193.900501ms","start":"2024-03-28T00:24:38.035197Z","end":"2024-03-28T00:24:38.229097Z","steps":["trace[1517688260] 'process raft request'  (duration: 123.929714ms)","trace[1517688260] 'compare'  (duration: 69.833581ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-28T00:27:32.205163Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-28T00:27:32.205276Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-200224","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.88:2380"],"advertise-client-urls":["https://192.168.39.88:2379"]}
	{"level":"warn","ts":"2024-03-28T00:27:32.205431Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-28T00:27:32.205521Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/03/28 00:27:32 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-28T00:27:32.248138Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.88:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-28T00:27:32.249531Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.88:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-28T00:27:32.249762Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aa0bd43d5988e1af","current-leader-member-id":"aa0bd43d5988e1af"}
	{"level":"info","ts":"2024-03-28T00:27:32.262043Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.88:2380"}
	{"level":"info","ts":"2024-03-28T00:27:32.262151Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.88:2380"}
	{"level":"info","ts":"2024-03-28T00:27:32.26216Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-200224","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.88:2380"],"advertise-client-urls":["https://192.168.39.88:2379"]}
	
	
	==> kernel <==
	 00:33:04 up 10 min,  0 users,  load average: 0.61, 0.43, 0.22
	Linux multinode-200224 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b533e6c726ea6ed1277d347af17f080ce2afc86504a3b08d2b06b561fdce86e7] <==
	I0328 00:31:57.162110       1 main.go:250] Node multinode-200224-m02 has CIDR [10.244.1.0/24] 
	I0328 00:32:07.172291       1 main.go:223] Handling node with IPs: map[192.168.39.88:{}]
	I0328 00:32:07.172341       1 main.go:227] handling current node
	I0328 00:32:07.172352       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0328 00:32:07.172358       1 main.go:250] Node multinode-200224-m02 has CIDR [10.244.1.0/24] 
	I0328 00:32:17.179722       1 main.go:223] Handling node with IPs: map[192.168.39.88:{}]
	I0328 00:32:17.179764       1 main.go:227] handling current node
	I0328 00:32:17.179775       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0328 00:32:17.179781       1 main.go:250] Node multinode-200224-m02 has CIDR [10.244.1.0/24] 
	I0328 00:32:27.196972       1 main.go:223] Handling node with IPs: map[192.168.39.88:{}]
	I0328 00:32:27.197080       1 main.go:227] handling current node
	I0328 00:32:27.197108       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0328 00:32:27.197127       1 main.go:250] Node multinode-200224-m02 has CIDR [10.244.1.0/24] 
	I0328 00:32:37.202388       1 main.go:223] Handling node with IPs: map[192.168.39.88:{}]
	I0328 00:32:37.202432       1 main.go:227] handling current node
	I0328 00:32:37.202443       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0328 00:32:37.202449       1 main.go:250] Node multinode-200224-m02 has CIDR [10.244.1.0/24] 
	I0328 00:32:47.208172       1 main.go:223] Handling node with IPs: map[192.168.39.88:{}]
	I0328 00:32:47.208270       1 main.go:227] handling current node
	I0328 00:32:47.208303       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0328 00:32:47.208322       1 main.go:250] Node multinode-200224-m02 has CIDR [10.244.1.0/24] 
	I0328 00:32:57.221419       1 main.go:223] Handling node with IPs: map[192.168.39.88:{}]
	I0328 00:32:57.221460       1 main.go:227] handling current node
	I0328 00:32:57.221471       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0328 00:32:57.221476       1 main.go:250] Node multinode-200224-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [dbe677740910dd5d9206540ecea26aa8ac44bad90beedfeb44189e329f1e7f05] <==
	I0328 00:26:42.478828       1 main.go:250] Node multinode-200224-m03 has CIDR [10.244.3.0/24] 
	I0328 00:26:52.485376       1 main.go:223] Handling node with IPs: map[192.168.39.88:{}]
	I0328 00:26:52.485422       1 main.go:227] handling current node
	I0328 00:26:52.485433       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0328 00:26:52.485439       1 main.go:250] Node multinode-200224-m02 has CIDR [10.244.1.0/24] 
	I0328 00:26:52.485564       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0328 00:26:52.485594       1 main.go:250] Node multinode-200224-m03 has CIDR [10.244.3.0/24] 
	I0328 00:27:02.494950       1 main.go:223] Handling node with IPs: map[192.168.39.88:{}]
	I0328 00:27:02.494998       1 main.go:227] handling current node
	I0328 00:27:02.495010       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0328 00:27:02.495016       1 main.go:250] Node multinode-200224-m02 has CIDR [10.244.1.0/24] 
	I0328 00:27:02.495125       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0328 00:27:02.495130       1 main.go:250] Node multinode-200224-m03 has CIDR [10.244.3.0/24] 
	I0328 00:27:12.508506       1 main.go:223] Handling node with IPs: map[192.168.39.88:{}]
	I0328 00:27:12.508595       1 main.go:227] handling current node
	I0328 00:27:12.508606       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0328 00:27:12.508618       1 main.go:250] Node multinode-200224-m02 has CIDR [10.244.1.0/24] 
	I0328 00:27:12.508726       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0328 00:27:12.508752       1 main.go:250] Node multinode-200224-m03 has CIDR [10.244.3.0/24] 
	I0328 00:27:22.517728       1 main.go:223] Handling node with IPs: map[192.168.39.88:{}]
	I0328 00:27:22.517776       1 main.go:227] handling current node
	I0328 00:27:22.517842       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0328 00:27:22.517851       1 main.go:250] Node multinode-200224-m02 has CIDR [10.244.1.0/24] 
	I0328 00:27:22.517977       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0328 00:27:22.518003       1 main.go:250] Node multinode-200224-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [68ae2f434f3deb10aa0a571899142907e2801ba5da048845b37822888969b398] <==
	I0328 00:27:32.234872       1 controller.go:129] Ending legacy_token_tracking_controller
	I0328 00:27:32.234894       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0328 00:27:32.234921       1 available_controller.go:439] Shutting down AvailableConditionController
	W0328 00:27:32.234986       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0328 00:27:32.235126       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0328 00:27:32.235202       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0328 00:27:32.235266       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0328 00:27:32.235300       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0328 00:27:32.235336       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0328 00:27:32.235501       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0328 00:27:32.235587       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0328 00:27:32.235655       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0328 00:27:32.235694       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0328 00:27:32.235898       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0328 00:27:32.236131       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 00:27:32.236208       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 00:27:32.236245       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0328 00:27:32.236275       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0328 00:27:32.236344       1 controller.go:84] Shutting down OpenAPI AggregationController
	W0328 00:27:32.236453       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0328 00:27:32.236551       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0328 00:27:32.236622       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 00:27:32.241073       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	W0328 00:27:32.241698       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0328 00:27:32.242860       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b3a9b4858d44e5afb56269a375649980f786d727b89ff9556550e692636fab82] <==
	I0328 00:29:14.491745       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0328 00:29:14.491843       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 00:29:14.491976       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 00:29:14.499166       1 controller.go:78] Starting OpenAPI AggregationController
	I0328 00:29:14.572715       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0328 00:29:14.585064       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0328 00:29:14.585323       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0328 00:29:14.597974       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0328 00:29:14.598045       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0328 00:29:14.598253       1 shared_informer.go:318] Caches are synced for configmaps
	I0328 00:29:14.598444       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0328 00:29:14.598483       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0328 00:29:14.609031       1 aggregator.go:165] initial CRD sync complete...
	I0328 00:29:14.609066       1 autoregister_controller.go:141] Starting autoregister controller
	I0328 00:29:14.609073       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0328 00:29:14.609079       1 cache.go:39] Caches are synced for autoregister controller
	I0328 00:29:14.623570       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0328 00:29:15.506430       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0328 00:29:17.001758       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0328 00:29:17.129842       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0328 00:29:17.145941       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0328 00:29:17.239084       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0328 00:29:17.247134       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0328 00:29:27.024718       1 controller.go:624] quota admission added evaluator for: endpoints
	I0328 00:29:27.116284       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [6fbf200e2f5995f89adef7939641ef941f9f8ac8e54c7d64c2bf74baed49b800] <==
	I0328 00:24:01.327035       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="5.82092ms"
	I0328 00:24:01.328984       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="61.232µs"
	I0328 00:24:34.883979       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-200224-m03\" does not exist"
	I0328 00:24:34.884903       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-200224-m02"
	I0328 00:24:34.908220       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5ws9q"
	I0328 00:24:34.908290       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-dcqkg"
	I0328 00:24:34.929329       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-200224-m03" podCIDRs=["10.244.2.0/24"]
	I0328 00:24:39.053327       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-200224-m03"
	I0328 00:24:39.053497       1 event.go:376] "Event occurred" object="multinode-200224-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-200224-m03 event: Registered Node multinode-200224-m03 in Controller"
	I0328 00:24:44.911414       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-200224-m02"
	I0328 00:25:15.754987       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-200224-m02"
	I0328 00:25:16.835251       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-200224-m02"
	I0328 00:25:16.835380       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-200224-m03\" does not exist"
	I0328 00:25:16.859189       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-200224-m03" podCIDRs=["10.244.3.0/24"]
	I0328 00:25:25.475972       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-200224-m02"
	I0328 00:26:09.110689       1 event.go:376] "Event occurred" object="multinode-200224-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-200224-m02 status is now: NodeNotReady"
	I0328 00:26:09.111024       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-200224-m03"
	I0328 00:26:09.116595       1 event.go:376] "Event occurred" object="multinode-200224-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-200224-m03 status is now: NodeNotReady"
	I0328 00:26:09.128985       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-pgph8" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 00:26:09.135944       1 event.go:376] "Event occurred" object="kube-system/kindnet-dcqkg" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 00:26:09.149774       1 event.go:376] "Event occurred" object="kube-system/kindnet-fdhcl" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 00:26:09.154197       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-5ws9q" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 00:26:09.169606       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-2h8w6" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 00:26:09.175221       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="9.969083ms"
	I0328 00:26:09.175459       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="103.708µs"
	
	
	==> kube-controller-manager [a59125a9daa4c893fd45d629c12fd055db5fb6cbfc3f0205e1b7c2789a83f8da] <==
	I0328 00:30:07.165674       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-200224-m02"
	I0328 00:30:07.186702       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="51.446µs"
	I0328 00:30:07.203459       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="38.529µs"
	I0328 00:30:11.338522       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="6.263906ms"
	I0328 00:30:11.340038       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="78.908µs"
	I0328 00:30:12.044033       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-x4g8t" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-x4g8t"
	I0328 00:30:25.893439       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-200224-m02"
	I0328 00:30:27.047142       1 event.go:376] "Event occurred" object="multinode-200224-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-200224-m03 event: Removing Node multinode-200224-m03 from Controller"
	I0328 00:30:27.100679       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-200224-m02"
	I0328 00:30:27.100889       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-200224-m03\" does not exist"
	I0328 00:30:27.125054       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-200224-m03" podCIDRs=["10.244.2.0/24"]
	I0328 00:30:32.048001       1 event.go:376] "Event occurred" object="multinode-200224-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-200224-m03 event: Registered Node multinode-200224-m03 in Controller"
	I0328 00:30:36.615261       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-200224-m02"
	I0328 00:30:42.554642       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-200224-m02"
	I0328 00:30:47.066408       1 event.go:376] "Event occurred" object="multinode-200224-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-200224-m03 event: Removing Node multinode-200224-m03 from Controller"
	I0328 00:31:07.012100       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kindnet-dcqkg"
	I0328 00:31:07.054093       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-dcqkg"
	I0328 00:31:07.054219       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-5ws9q"
	I0328 00:31:07.080860       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-5ws9q"
	I0328 00:31:22.085863       1 event.go:376] "Event occurred" object="multinode-200224-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-200224-m02 status is now: NodeNotReady"
	I0328 00:31:22.102868       1 event.go:376] "Event occurred" object="kube-system/kindnet-fdhcl" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 00:31:22.116559       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-pgph8" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 00:31:22.131157       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-x4g8t" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 00:31:22.142294       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="11.885893ms"
	I0328 00:31:22.142539       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="128.14µs"
	
	
	==> kube-proxy [46c3c1988d5ad53bff3254f6b5fbcb07e9d341c25390a721f9cb15c63e302898] <==
	I0328 00:29:16.335033       1 server_others.go:72] "Using iptables proxy"
	I0328 00:29:16.362287       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.88"]
	I0328 00:29:16.496180       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 00:29:16.496208       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 00:29:16.496229       1 server_others.go:168] "Using iptables Proxier"
	I0328 00:29:16.501491       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 00:29:16.501903       1 server.go:865] "Version info" version="v1.29.3"
	I0328 00:29:16.501919       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 00:29:16.505390       1 config.go:188] "Starting service config controller"
	I0328 00:29:16.506457       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 00:29:16.508966       1 config.go:97] "Starting endpoint slice config controller"
	I0328 00:29:16.508979       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 00:29:16.510528       1 config.go:315] "Starting node config controller"
	I0328 00:29:16.510543       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 00:29:16.608077       1 shared_informer.go:318] Caches are synced for service config
	I0328 00:29:16.611926       1 shared_informer.go:318] Caches are synced for node config
	I0328 00:29:16.612042       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [fb82d42c8f86742edc73783b3dfdef5b9cc4658c3ddc375a06b5ea65bd837da5] <==
	I0328 00:23:11.406084       1 server_others.go:72] "Using iptables proxy"
	I0328 00:23:11.415672       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.88"]
	I0328 00:23:11.454636       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 00:23:11.454757       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 00:23:11.454837       1 server_others.go:168] "Using iptables Proxier"
	I0328 00:23:11.457597       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 00:23:11.457935       1 server.go:865] "Version info" version="v1.29.3"
	I0328 00:23:11.457965       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 00:23:11.459204       1 config.go:188] "Starting service config controller"
	I0328 00:23:11.459241       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 00:23:11.459259       1 config.go:97] "Starting endpoint slice config controller"
	I0328 00:23:11.459263       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 00:23:11.459549       1 config.go:315] "Starting node config controller"
	I0328 00:23:11.459584       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 00:23:11.560225       1 shared_informer.go:318] Caches are synced for service config
	I0328 00:23:11.560300       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0328 00:23:11.560537       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [3fccdc262ed43923c10e7dc2c2d3f31d086e8cae5e8a1adf35c700e71ac085c6] <==
	W0328 00:22:53.790869       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0328 00:22:53.793062       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0328 00:22:53.790960       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0328 00:22:53.793102       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0328 00:22:53.791017       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0328 00:22:53.793116       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0328 00:22:53.791051       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0328 00:22:53.793128       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0328 00:22:53.791093       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0328 00:22:53.793139       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0328 00:22:54.712602       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0328 00:22:54.713055       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0328 00:22:54.777247       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0328 00:22:54.777297       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0328 00:22:54.851310       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0328 00:22:54.851436       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0328 00:22:54.878581       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0328 00:22:54.879198       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0328 00:22:55.011033       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0328 00:22:55.011156       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0328 00:22:56.672884       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 00:27:32.223729       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0328 00:27:32.223914       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0328 00:27:32.224384       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0328 00:27:32.224624       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [767d65958abd9c1074e5b189fbc2524166f7f1c3a91990207131a500a1efdd7f] <==
	I0328 00:29:12.172356       1 serving.go:380] Generated self-signed cert in-memory
	W0328 00:29:14.535399       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0328 00:29:14.536038       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0328 00:29:14.536173       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0328 00:29:14.536199       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0328 00:29:14.610455       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0328 00:29:14.610596       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 00:29:14.613219       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0328 00:29:14.613700       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0328 00:29:14.616539       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 00:29:14.613919       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 00:29:14.716775       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 28 00:31:10 multinode-200224 kubelet[3059]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 00:31:10 multinode-200224 kubelet[3059]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 00:31:10 multinode-200224 kubelet[3059]: E0328 00:31:10.417515    3059 manager.go:1116] Failed to create existing container: /kubepods/pod30a4f6bf-5542-476f-b1af-837031a00c50/crio-ea1011de4d4b6657be5634eb7131426dfaf71adda1e016d49136887bbc0d7a9a: Error finding container ea1011de4d4b6657be5634eb7131426dfaf71adda1e016d49136887bbc0d7a9a: Status 404 returned error can't find the container with id ea1011de4d4b6657be5634eb7131426dfaf71adda1e016d49136887bbc0d7a9a
	Mar 28 00:31:10 multinode-200224 kubelet[3059]: E0328 00:31:10.417912    3059 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod439c378fd07501a8dead5aed861f13e7/crio-caae122c7bfc36f8d133510bf3080172efebf3bc11889137437f0b9c2716cf79: Error finding container caae122c7bfc36f8d133510bf3080172efebf3bc11889137437f0b9c2716cf79: Status 404 returned error can't find the container with id caae122c7bfc36f8d133510bf3080172efebf3bc11889137437f0b9c2716cf79
	Mar 28 00:31:10 multinode-200224 kubelet[3059]: E0328 00:31:10.418400    3059 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod7a0e7db0-a552-4642-825f-da6ee01e6121/crio-ee7f59bd3e83ff182dc2870085e8dde1ac0476f4ed5c212895a2b54a7b1d2145: Error finding container ee7f59bd3e83ff182dc2870085e8dde1ac0476f4ed5c212895a2b54a7b1d2145: Status 404 returned error can't find the container with id ee7f59bd3e83ff182dc2870085e8dde1ac0476f4ed5c212895a2b54a7b1d2145
	Mar 28 00:31:10 multinode-200224 kubelet[3059]: E0328 00:31:10.418622    3059 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod3bd5c912-2288-420d-a7a2-d73f2c34a5ed/crio-ed4a7c41ab819bc8c36d7e2ab25a2a12997d2920f2f2fb3afdbb263c586ec4a3: Error finding container ed4a7c41ab819bc8c36d7e2ab25a2a12997d2920f2f2fb3afdbb263c586ec4a3: Status 404 returned error can't find the container with id ed4a7c41ab819bc8c36d7e2ab25a2a12997d2920f2f2fb3afdbb263c586ec4a3
	Mar 28 00:31:10 multinode-200224 kubelet[3059]: E0328 00:31:10.418967    3059 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podae4313c7-a926-4da8-bfc2-7dae1422d958/crio-8e1d52aa1272a7dffa0d0477292d339dd15d53962c6c3a485e7ba027258d0cd5: Error finding container 8e1d52aa1272a7dffa0d0477292d339dd15d53962c6c3a485e7ba027258d0cd5: Status 404 returned error can't find the container with id 8e1d52aa1272a7dffa0d0477292d339dd15d53962c6c3a485e7ba027258d0cd5
	Mar 28 00:31:10 multinode-200224 kubelet[3059]: E0328 00:31:10.419200    3059 manager.go:1116] Failed to create existing container: /kubepods/burstable/podc865a9c1ae98bee042cbdef78ac1661e/crio-3f002ba61607eb1d8647387d6277f7b6391e7e06d60cc4cc049fed995ef98eb4: Error finding container 3f002ba61607eb1d8647387d6277f7b6391e7e06d60cc4cc049fed995ef98eb4: Status 404 returned error can't find the container with id 3f002ba61607eb1d8647387d6277f7b6391e7e06d60cc4cc049fed995ef98eb4
	Mar 28 00:31:10 multinode-200224 kubelet[3059]: E0328 00:31:10.419373    3059 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod32db6766711b42ca248c705dc74d448e/crio-5fef10ef65dd3ba44f4c351e380d6247428dbf047e9cca78d17d3928a7e8f0e5: Error finding container 5fef10ef65dd3ba44f4c351e380d6247428dbf047e9cca78d17d3928a7e8f0e5: Status 404 returned error can't find the container with id 5fef10ef65dd3ba44f4c351e380d6247428dbf047e9cca78d17d3928a7e8f0e5
	Mar 28 00:31:10 multinode-200224 kubelet[3059]: E0328 00:31:10.419575    3059 manager.go:1116] Failed to create existing container: /kubepods/burstable/poddb39c19d4710f792ba253c58204b3fd4/crio-fd28139122e292890b1d14a5668c359edc911ba1328ccd09054fbe9ee9099a59: Error finding container fd28139122e292890b1d14a5668c359edc911ba1328ccd09054fbe9ee9099a59: Status 404 returned error can't find the container with id fd28139122e292890b1d14a5668c359edc911ba1328ccd09054fbe9ee9099a59
	Mar 28 00:31:10 multinode-200224 kubelet[3059]: E0328 00:31:10.419951    3059 manager.go:1116] Failed to create existing container: /kubepods/besteffort/poda88eae05-06c4-4a76-9f77-af448b7c0704/crio-28401e2f6084f5204fe5c79b6fad4857507ed38493632dcd5793de7ddd053561: Error finding container 28401e2f6084f5204fe5c79b6fad4857507ed38493632dcd5793de7ddd053561: Status 404 returned error can't find the container with id 28401e2f6084f5204fe5c79b6fad4857507ed38493632dcd5793de7ddd053561
	Mar 28 00:32:10 multinode-200224 kubelet[3059]: E0328 00:32:10.390318    3059 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 00:32:10 multinode-200224 kubelet[3059]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 00:32:10 multinode-200224 kubelet[3059]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 00:32:10 multinode-200224 kubelet[3059]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 00:32:10 multinode-200224 kubelet[3059]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 00:32:10 multinode-200224 kubelet[3059]: E0328 00:32:10.417438    3059 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod3bd5c912-2288-420d-a7a2-d73f2c34a5ed/crio-ed4a7c41ab819bc8c36d7e2ab25a2a12997d2920f2f2fb3afdbb263c586ec4a3: Error finding container ed4a7c41ab819bc8c36d7e2ab25a2a12997d2920f2f2fb3afdbb263c586ec4a3: Status 404 returned error can't find the container with id ed4a7c41ab819bc8c36d7e2ab25a2a12997d2920f2f2fb3afdbb263c586ec4a3
	Mar 28 00:32:10 multinode-200224 kubelet[3059]: E0328 00:32:10.417748    3059 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podae4313c7-a926-4da8-bfc2-7dae1422d958/crio-8e1d52aa1272a7dffa0d0477292d339dd15d53962c6c3a485e7ba027258d0cd5: Error finding container 8e1d52aa1272a7dffa0d0477292d339dd15d53962c6c3a485e7ba027258d0cd5: Status 404 returned error can't find the container with id 8e1d52aa1272a7dffa0d0477292d339dd15d53962c6c3a485e7ba027258d0cd5
	Mar 28 00:32:10 multinode-200224 kubelet[3059]: E0328 00:32:10.418297    3059 manager.go:1116] Failed to create existing container: /kubepods/burstable/podc865a9c1ae98bee042cbdef78ac1661e/crio-3f002ba61607eb1d8647387d6277f7b6391e7e06d60cc4cc049fed995ef98eb4: Error finding container 3f002ba61607eb1d8647387d6277f7b6391e7e06d60cc4cc049fed995ef98eb4: Status 404 returned error can't find the container with id 3f002ba61607eb1d8647387d6277f7b6391e7e06d60cc4cc049fed995ef98eb4
	Mar 28 00:32:10 multinode-200224 kubelet[3059]: E0328 00:32:10.418518    3059 manager.go:1116] Failed to create existing container: /kubepods/burstable/poddb39c19d4710f792ba253c58204b3fd4/crio-fd28139122e292890b1d14a5668c359edc911ba1328ccd09054fbe9ee9099a59: Error finding container fd28139122e292890b1d14a5668c359edc911ba1328ccd09054fbe9ee9099a59: Status 404 returned error can't find the container with id fd28139122e292890b1d14a5668c359edc911ba1328ccd09054fbe9ee9099a59
	Mar 28 00:32:10 multinode-200224 kubelet[3059]: E0328 00:32:10.418725    3059 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod32db6766711b42ca248c705dc74d448e/crio-5fef10ef65dd3ba44f4c351e380d6247428dbf047e9cca78d17d3928a7e8f0e5: Error finding container 5fef10ef65dd3ba44f4c351e380d6247428dbf047e9cca78d17d3928a7e8f0e5: Status 404 returned error can't find the container with id 5fef10ef65dd3ba44f4c351e380d6247428dbf047e9cca78d17d3928a7e8f0e5
	Mar 28 00:32:10 multinode-200224 kubelet[3059]: E0328 00:32:10.418921    3059 manager.go:1116] Failed to create existing container: /kubepods/pod30a4f6bf-5542-476f-b1af-837031a00c50/crio-ea1011de4d4b6657be5634eb7131426dfaf71adda1e016d49136887bbc0d7a9a: Error finding container ea1011de4d4b6657be5634eb7131426dfaf71adda1e016d49136887bbc0d7a9a: Status 404 returned error can't find the container with id ea1011de4d4b6657be5634eb7131426dfaf71adda1e016d49136887bbc0d7a9a
	Mar 28 00:32:10 multinode-200224 kubelet[3059]: E0328 00:32:10.419132    3059 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod439c378fd07501a8dead5aed861f13e7/crio-caae122c7bfc36f8d133510bf3080172efebf3bc11889137437f0b9c2716cf79: Error finding container caae122c7bfc36f8d133510bf3080172efebf3bc11889137437f0b9c2716cf79: Status 404 returned error can't find the container with id caae122c7bfc36f8d133510bf3080172efebf3bc11889137437f0b9c2716cf79
	Mar 28 00:32:10 multinode-200224 kubelet[3059]: E0328 00:32:10.419450    3059 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod7a0e7db0-a552-4642-825f-da6ee01e6121/crio-ee7f59bd3e83ff182dc2870085e8dde1ac0476f4ed5c212895a2b54a7b1d2145: Error finding container ee7f59bd3e83ff182dc2870085e8dde1ac0476f4ed5c212895a2b54a7b1d2145: Status 404 returned error can't find the container with id ee7f59bd3e83ff182dc2870085e8dde1ac0476f4ed5c212895a2b54a7b1d2145
	Mar 28 00:32:10 multinode-200224 kubelet[3059]: E0328 00:32:10.419641    3059 manager.go:1116] Failed to create existing container: /kubepods/besteffort/poda88eae05-06c4-4a76-9f77-af448b7c0704/crio-28401e2f6084f5204fe5c79b6fad4857507ed38493632dcd5793de7ddd053561: Error finding container 28401e2f6084f5204fe5c79b6fad4857507ed38493632dcd5793de7ddd053561: Status 404 returned error can't find the container with id 28401e2f6084f5204fe5c79b6fad4857507ed38493632dcd5793de7ddd053561
	

-- /stdout --
** stderr ** 
	E0328 00:33:03.606285 1104702 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18485-1069254/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-200224 -n multinode-200224
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-200224 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.58s)

x
+
TestPreload (220.73s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-700024 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-700024 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m18.332900753s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-700024 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-700024 image pull gcr.io/k8s-minikube/busybox: (2.665517178s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-700024
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-700024: (7.30856637s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-700024 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-700024 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m9.025042617s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-700024 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-03-28 00:40:23.376669532 +0000 UTC m=+4054.628148462
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-700024 -n test-preload-700024
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-700024 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-700024 logs -n 25: (1.297076891s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| ssh     | multinode-200224 ssh -n                                                                 | multinode-200224     | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | multinode-200224-m03 sudo cat                                                           |                      |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |                |                     |                     |
	| ssh     | multinode-200224 ssh -n multinode-200224 sudo cat                                       | multinode-200224     | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | /home/docker/cp-test_multinode-200224-m03_multinode-200224.txt                          |                      |         |                |                     |                     |
	| cp      | multinode-200224 cp multinode-200224-m03:/home/docker/cp-test.txt                       | multinode-200224     | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | multinode-200224-m02:/home/docker/cp-test_multinode-200224-m03_multinode-200224-m02.txt |                      |         |                |                     |                     |
	| ssh     | multinode-200224 ssh -n                                                                 | multinode-200224     | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | multinode-200224-m03 sudo cat                                                           |                      |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |                |                     |                     |
	| ssh     | multinode-200224 ssh -n multinode-200224-m02 sudo cat                                   | multinode-200224     | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	|         | /home/docker/cp-test_multinode-200224-m03_multinode-200224-m02.txt                      |                      |         |                |                     |                     |
	| node    | multinode-200224 node stop m03                                                          | multinode-200224     | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:24 UTC |
	| node    | multinode-200224 node start                                                             | multinode-200224     | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:24 UTC | 28 Mar 24 00:25 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |                |                     |                     |
	| node    | list -p multinode-200224                                                                | multinode-200224     | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:25 UTC |                     |
	| stop    | -p multinode-200224                                                                     | multinode-200224     | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:25 UTC |                     |
	| start   | -p multinode-200224                                                                     | multinode-200224     | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:27 UTC | 28 Mar 24 00:30 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |                |                     |                     |
	| node    | list -p multinode-200224                                                                | multinode-200224     | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:30 UTC |                     |
	| node    | multinode-200224 node delete                                                            | multinode-200224     | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:30 UTC | 28 Mar 24 00:30 UTC |
	|         | m03                                                                                     |                      |         |                |                     |                     |
	| stop    | multinode-200224 stop                                                                   | multinode-200224     | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:30 UTC |                     |
	| start   | -p multinode-200224                                                                     | multinode-200224     | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:33 UTC | 28 Mar 24 00:35 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |                |                     |                     |
	| node    | list -p multinode-200224                                                                | multinode-200224     | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:35 UTC |                     |
	| start   | -p multinode-200224-m02                                                                 | multinode-200224-m02 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:35 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |                |                     |                     |
	| start   | -p multinode-200224-m03                                                                 | multinode-200224-m03 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:35 UTC | 28 Mar 24 00:36 UTC |
	|         | --driver=kvm2                                                                           |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |                |                     |                     |
	| node    | add -p multinode-200224                                                                 | multinode-200224     | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:36 UTC |                     |
	| delete  | -p multinode-200224-m03                                                                 | multinode-200224-m03 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:36 UTC | 28 Mar 24 00:36 UTC |
	| delete  | -p multinode-200224                                                                     | multinode-200224     | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:36 UTC | 28 Mar 24 00:36 UTC |
	| start   | -p test-preload-700024                                                                  | test-preload-700024  | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:36 UTC | 28 Mar 24 00:39 UTC |
	|         | --memory=2200                                                                           |                      |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |                |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |                |                     |                     |
	| image   | test-preload-700024 image pull                                                          | test-preload-700024  | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:39 UTC | 28 Mar 24 00:39 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |                |                     |                     |
	| stop    | -p test-preload-700024                                                                  | test-preload-700024  | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:39 UTC | 28 Mar 24 00:39 UTC |
	| start   | -p test-preload-700024                                                                  | test-preload-700024  | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:39 UTC | 28 Mar 24 00:40 UTC |
	|         | --memory=2200                                                                           |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |                |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |                |                     |                     |
	| image   | test-preload-700024 image list                                                          | test-preload-700024  | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:40 UTC | 28 Mar 24 00:40 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
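	
	The test-preload-700024 rows in the audit table above amount to roughly the following command sequence (reconstructed from the table only; the binary path matches the invocations recorded elsewhere in this report, and flag order is as logged):
	
	    out/minikube-linux-amd64 start -p test-preload-700024 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4
	    out/minikube-linux-amd64 -p test-preload-700024 image pull gcr.io/k8s-minikube/busybox
	    out/minikube-linux-amd64 stop -p test-preload-700024
	    out/minikube-linux-amd64 start -p test-preload-700024 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=crio
	    out/minikube-linux-amd64 -p test-preload-700024 image list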
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 00:39:14
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 00:39:14.162917 1106824 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:39:14.163173 1106824 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:39:14.163183 1106824 out.go:304] Setting ErrFile to fd 2...
	I0328 00:39:14.163187 1106824 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:39:14.163357 1106824 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 00:39:14.163936 1106824 out.go:298] Setting JSON to false
	I0328 00:39:14.164938 1106824 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":30051,"bootTime":1711556303,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0328 00:39:14.165004 1106824 start.go:139] virtualization: kvm guest
	I0328 00:39:14.167524 1106824 out.go:177] * [test-preload-700024] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0328 00:39:14.169543 1106824 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 00:39:14.171119 1106824 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 00:39:14.169629 1106824 notify.go:220] Checking for updates...
	I0328 00:39:14.172860 1106824 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 00:39:14.174370 1106824 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	I0328 00:39:14.175714 1106824 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0328 00:39:14.176980 1106824 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 00:39:14.178743 1106824 config.go:182] Loaded profile config "test-preload-700024": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0328 00:39:14.179198 1106824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:39:14.179241 1106824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:39:14.194418 1106824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40849
	I0328 00:39:14.194971 1106824 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:39:14.195519 1106824 main.go:141] libmachine: Using API Version  1
	I0328 00:39:14.195543 1106824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:39:14.195956 1106824 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:39:14.196194 1106824 main.go:141] libmachine: (test-preload-700024) Calling .DriverName
	I0328 00:39:14.198170 1106824 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0328 00:39:14.199633 1106824 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 00:39:14.199938 1106824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:39:14.199976 1106824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:39:14.215266 1106824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44011
	I0328 00:39:14.215727 1106824 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:39:14.216243 1106824 main.go:141] libmachine: Using API Version  1
	I0328 00:39:14.216271 1106824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:39:14.216611 1106824 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:39:14.216811 1106824 main.go:141] libmachine: (test-preload-700024) Calling .DriverName
	I0328 00:39:14.254331 1106824 out.go:177] * Using the kvm2 driver based on existing profile
	I0328 00:39:14.255493 1106824 start.go:297] selected driver: kvm2
	I0328 00:39:14.255510 1106824 start.go:901] validating driver "kvm2" against &{Name:test-preload-700024 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.24.4 ClusterName:test-preload-700024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L M
ountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:39:14.255609 1106824 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 00:39:14.256327 1106824 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 00:39:14.256399 1106824 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18485-1069254/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0328 00:39:14.272272 1106824 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0328 00:39:14.272611 1106824 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 00:39:14.272682 1106824 cni.go:84] Creating CNI manager for ""
	I0328 00:39:14.272697 1106824 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 00:39:14.272748 1106824 start.go:340] cluster config:
	{Name:test-preload-700024 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-700024 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTyp
e:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:39:14.272866 1106824 iso.go:125] acquiring lock: {Name:mk3da1fa7d63a581c817f327d86a827697458cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 00:39:14.274881 1106824 out.go:177] * Starting "test-preload-700024" primary control-plane node in "test-preload-700024" cluster
	I0328 00:39:14.276307 1106824 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0328 00:39:14.689698 1106824 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0328 00:39:14.689745 1106824 cache.go:56] Caching tarball of preloaded images
	I0328 00:39:14.689916 1106824 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0328 00:39:14.692029 1106824 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0328 00:39:14.693552 1106824 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0328 00:39:14.809704 1106824 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0328 00:39:26.444544 1106824 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0328 00:39:26.444648 1106824 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0328 00:39:27.430494 1106824 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0328 00:39:27.430649 1106824 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/test-preload-700024/config.json ...
	I0328 00:39:27.430880 1106824 start.go:360] acquireMachinesLock for test-preload-700024: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 00:39:27.430955 1106824 start.go:364] duration metric: took 51.321µs to acquireMachinesLock for "test-preload-700024"
	I0328 00:39:27.430970 1106824 start.go:96] Skipping create...Using existing machine configuration
	I0328 00:39:27.430976 1106824 fix.go:54] fixHost starting: 
	I0328 00:39:27.431299 1106824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:39:27.431336 1106824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:39:27.446505 1106824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43771
	I0328 00:39:27.446969 1106824 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:39:27.447438 1106824 main.go:141] libmachine: Using API Version  1
	I0328 00:39:27.447463 1106824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:39:27.447860 1106824 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:39:27.448086 1106824 main.go:141] libmachine: (test-preload-700024) Calling .DriverName
	I0328 00:39:27.448302 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetState
	I0328 00:39:27.450330 1106824 fix.go:112] recreateIfNeeded on test-preload-700024: state=Stopped err=<nil>
	I0328 00:39:27.450372 1106824 main.go:141] libmachine: (test-preload-700024) Calling .DriverName
	W0328 00:39:27.450565 1106824 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 00:39:27.454402 1106824 out.go:177] * Restarting existing kvm2 VM for "test-preload-700024" ...
	I0328 00:39:27.457610 1106824 main.go:141] libmachine: (test-preload-700024) Calling .Start
	I0328 00:39:27.457840 1106824 main.go:141] libmachine: (test-preload-700024) Ensuring networks are active...
	I0328 00:39:27.458710 1106824 main.go:141] libmachine: (test-preload-700024) Ensuring network default is active
	I0328 00:39:27.459172 1106824 main.go:141] libmachine: (test-preload-700024) Ensuring network mk-test-preload-700024 is active
	I0328 00:39:27.459556 1106824 main.go:141] libmachine: (test-preload-700024) Getting domain xml...
	I0328 00:39:27.460277 1106824 main.go:141] libmachine: (test-preload-700024) Creating domain...
	I0328 00:39:28.653163 1106824 main.go:141] libmachine: (test-preload-700024) Waiting to get IP...
	I0328 00:39:28.654096 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:28.654433 1106824 main.go:141] libmachine: (test-preload-700024) DBG | unable to find current IP address of domain test-preload-700024 in network mk-test-preload-700024
	I0328 00:39:28.654513 1106824 main.go:141] libmachine: (test-preload-700024) DBG | I0328 00:39:28.654412 1106892 retry.go:31] will retry after 305.266742ms: waiting for machine to come up
	I0328 00:39:28.960994 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:28.961401 1106824 main.go:141] libmachine: (test-preload-700024) DBG | unable to find current IP address of domain test-preload-700024 in network mk-test-preload-700024
	I0328 00:39:28.961433 1106824 main.go:141] libmachine: (test-preload-700024) DBG | I0328 00:39:28.961364 1106892 retry.go:31] will retry after 371.819951ms: waiting for machine to come up
	I0328 00:39:29.335347 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:29.335920 1106824 main.go:141] libmachine: (test-preload-700024) DBG | unable to find current IP address of domain test-preload-700024 in network mk-test-preload-700024
	I0328 00:39:29.335952 1106824 main.go:141] libmachine: (test-preload-700024) DBG | I0328 00:39:29.335857 1106892 retry.go:31] will retry after 471.677686ms: waiting for machine to come up
	I0328 00:39:29.809605 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:29.810094 1106824 main.go:141] libmachine: (test-preload-700024) DBG | unable to find current IP address of domain test-preload-700024 in network mk-test-preload-700024
	I0328 00:39:29.810129 1106824 main.go:141] libmachine: (test-preload-700024) DBG | I0328 00:39:29.810039 1106892 retry.go:31] will retry after 579.624327ms: waiting for machine to come up
	I0328 00:39:30.390748 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:30.391164 1106824 main.go:141] libmachine: (test-preload-700024) DBG | unable to find current IP address of domain test-preload-700024 in network mk-test-preload-700024
	I0328 00:39:30.391192 1106824 main.go:141] libmachine: (test-preload-700024) DBG | I0328 00:39:30.391128 1106892 retry.go:31] will retry after 640.506978ms: waiting for machine to come up
	I0328 00:39:31.032961 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:31.033337 1106824 main.go:141] libmachine: (test-preload-700024) DBG | unable to find current IP address of domain test-preload-700024 in network mk-test-preload-700024
	I0328 00:39:31.033372 1106824 main.go:141] libmachine: (test-preload-700024) DBG | I0328 00:39:31.033279 1106892 retry.go:31] will retry after 772.547666ms: waiting for machine to come up
	I0328 00:39:31.807226 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:31.807636 1106824 main.go:141] libmachine: (test-preload-700024) DBG | unable to find current IP address of domain test-preload-700024 in network mk-test-preload-700024
	I0328 00:39:31.807667 1106824 main.go:141] libmachine: (test-preload-700024) DBG | I0328 00:39:31.807560 1106892 retry.go:31] will retry after 1.002810287s: waiting for machine to come up
	I0328 00:39:32.812460 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:32.812933 1106824 main.go:141] libmachine: (test-preload-700024) DBG | unable to find current IP address of domain test-preload-700024 in network mk-test-preload-700024
	I0328 00:39:32.812967 1106824 main.go:141] libmachine: (test-preload-700024) DBG | I0328 00:39:32.812866 1106892 retry.go:31] will retry after 1.093243369s: waiting for machine to come up
	I0328 00:39:33.907291 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:33.907732 1106824 main.go:141] libmachine: (test-preload-700024) DBG | unable to find current IP address of domain test-preload-700024 in network mk-test-preload-700024
	I0328 00:39:33.907757 1106824 main.go:141] libmachine: (test-preload-700024) DBG | I0328 00:39:33.907677 1106892 retry.go:31] will retry after 1.537285299s: waiting for machine to come up
	I0328 00:39:35.447350 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:35.447733 1106824 main.go:141] libmachine: (test-preload-700024) DBG | unable to find current IP address of domain test-preload-700024 in network mk-test-preload-700024
	I0328 00:39:35.447761 1106824 main.go:141] libmachine: (test-preload-700024) DBG | I0328 00:39:35.447680 1106892 retry.go:31] will retry after 1.410049884s: waiting for machine to come up
	I0328 00:39:36.860486 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:36.860989 1106824 main.go:141] libmachine: (test-preload-700024) DBG | unable to find current IP address of domain test-preload-700024 in network mk-test-preload-700024
	I0328 00:39:36.861016 1106824 main.go:141] libmachine: (test-preload-700024) DBG | I0328 00:39:36.860946 1106892 retry.go:31] will retry after 2.814132065s: waiting for machine to come up
	I0328 00:39:39.678456 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:39.678866 1106824 main.go:141] libmachine: (test-preload-700024) DBG | unable to find current IP address of domain test-preload-700024 in network mk-test-preload-700024
	I0328 00:39:39.678892 1106824 main.go:141] libmachine: (test-preload-700024) DBG | I0328 00:39:39.678813 1106892 retry.go:31] will retry after 3.362746881s: waiting for machine to come up
	I0328 00:39:43.042757 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:43.043196 1106824 main.go:141] libmachine: (test-preload-700024) DBG | unable to find current IP address of domain test-preload-700024 in network mk-test-preload-700024
	I0328 00:39:43.043224 1106824 main.go:141] libmachine: (test-preload-700024) DBG | I0328 00:39:43.043144 1106892 retry.go:31] will retry after 3.730178894s: waiting for machine to come up
	I0328 00:39:46.778269 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:46.778823 1106824 main.go:141] libmachine: (test-preload-700024) Found IP for machine: 192.168.39.15
	I0328 00:39:46.778842 1106824 main.go:141] libmachine: (test-preload-700024) Reserving static IP address...
	I0328 00:39:46.778856 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has current primary IP address 192.168.39.15 and MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:46.779349 1106824 main.go:141] libmachine: (test-preload-700024) Reserved static IP address: 192.168.39.15
	I0328 00:39:46.779385 1106824 main.go:141] libmachine: (test-preload-700024) DBG | found host DHCP lease matching {name: "test-preload-700024", mac: "52:54:00:87:60:2c", ip: "192.168.39.15"} in network mk-test-preload-700024: {Iface:virbr1 ExpiryTime:2024-03-28 01:39:38 +0000 UTC Type:0 Mac:52:54:00:87:60:2c Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:test-preload-700024 Clientid:01:52:54:00:87:60:2c}
	I0328 00:39:46.779392 1106824 main.go:141] libmachine: (test-preload-700024) Waiting for SSH to be available...
	I0328 00:39:46.779417 1106824 main.go:141] libmachine: (test-preload-700024) DBG | skip adding static IP to network mk-test-preload-700024 - found existing host DHCP lease matching {name: "test-preload-700024", mac: "52:54:00:87:60:2c", ip: "192.168.39.15"}
	I0328 00:39:46.779428 1106824 main.go:141] libmachine: (test-preload-700024) DBG | Getting to WaitForSSH function...
	I0328 00:39:46.781799 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:46.782160 1106824 main.go:141] libmachine: (test-preload-700024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:60:2c", ip: ""} in network mk-test-preload-700024: {Iface:virbr1 ExpiryTime:2024-03-28 01:39:38 +0000 UTC Type:0 Mac:52:54:00:87:60:2c Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:test-preload-700024 Clientid:01:52:54:00:87:60:2c}
	I0328 00:39:46.782199 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined IP address 192.168.39.15 and MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:46.782361 1106824 main.go:141] libmachine: (test-preload-700024) DBG | Using SSH client type: external
	I0328 00:39:46.782393 1106824 main.go:141] libmachine: (test-preload-700024) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/test-preload-700024/id_rsa (-rw-------)
	I0328 00:39:46.782427 1106824 main.go:141] libmachine: (test-preload-700024) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.15 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/test-preload-700024/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 00:39:46.782448 1106824 main.go:141] libmachine: (test-preload-700024) DBG | About to run SSH command:
	I0328 00:39:46.782468 1106824 main.go:141] libmachine: (test-preload-700024) DBG | exit 0
	I0328 00:39:46.906495 1106824 main.go:141] libmachine: (test-preload-700024) DBG | SSH cmd err, output: <nil>: 
	I0328 00:39:46.906867 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetConfigRaw
	I0328 00:39:46.907622 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetIP
	I0328 00:39:46.910103 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:46.910516 1106824 main.go:141] libmachine: (test-preload-700024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:60:2c", ip: ""} in network mk-test-preload-700024: {Iface:virbr1 ExpiryTime:2024-03-28 01:39:38 +0000 UTC Type:0 Mac:52:54:00:87:60:2c Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:test-preload-700024 Clientid:01:52:54:00:87:60:2c}
	I0328 00:39:46.910553 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined IP address 192.168.39.15 and MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:46.910849 1106824 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/test-preload-700024/config.json ...
	I0328 00:39:46.911055 1106824 machine.go:94] provisionDockerMachine start ...
	I0328 00:39:46.911075 1106824 main.go:141] libmachine: (test-preload-700024) Calling .DriverName
	I0328 00:39:46.911313 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHHostname
	I0328 00:39:46.913700 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:46.914061 1106824 main.go:141] libmachine: (test-preload-700024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:60:2c", ip: ""} in network mk-test-preload-700024: {Iface:virbr1 ExpiryTime:2024-03-28 01:39:38 +0000 UTC Type:0 Mac:52:54:00:87:60:2c Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:test-preload-700024 Clientid:01:52:54:00:87:60:2c}
	I0328 00:39:46.914089 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined IP address 192.168.39.15 and MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:46.914255 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHPort
	I0328 00:39:46.914450 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHKeyPath
	I0328 00:39:46.914610 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHKeyPath
	I0328 00:39:46.914769 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHUsername
	I0328 00:39:46.914941 1106824 main.go:141] libmachine: Using SSH client type: native
	I0328 00:39:46.915235 1106824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0328 00:39:46.915247 1106824 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 00:39:47.018983 1106824 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 00:39:47.019015 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetMachineName
	I0328 00:39:47.019318 1106824 buildroot.go:166] provisioning hostname "test-preload-700024"
	I0328 00:39:47.019365 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetMachineName
	I0328 00:39:47.019552 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHHostname
	I0328 00:39:47.022457 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:47.022878 1106824 main.go:141] libmachine: (test-preload-700024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:60:2c", ip: ""} in network mk-test-preload-700024: {Iface:virbr1 ExpiryTime:2024-03-28 01:39:38 +0000 UTC Type:0 Mac:52:54:00:87:60:2c Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:test-preload-700024 Clientid:01:52:54:00:87:60:2c}
	I0328 00:39:47.022902 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined IP address 192.168.39.15 and MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:47.023116 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHPort
	I0328 00:39:47.023314 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHKeyPath
	I0328 00:39:47.023429 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHKeyPath
	I0328 00:39:47.023539 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHUsername
	I0328 00:39:47.023684 1106824 main.go:141] libmachine: Using SSH client type: native
	I0328 00:39:47.023894 1106824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0328 00:39:47.023911 1106824 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-700024 && echo "test-preload-700024" | sudo tee /etc/hostname
	I0328 00:39:47.140927 1106824 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-700024
	
	I0328 00:39:47.140962 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHHostname
	I0328 00:39:47.144103 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:47.144511 1106824 main.go:141] libmachine: (test-preload-700024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:60:2c", ip: ""} in network mk-test-preload-700024: {Iface:virbr1 ExpiryTime:2024-03-28 01:39:38 +0000 UTC Type:0 Mac:52:54:00:87:60:2c Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:test-preload-700024 Clientid:01:52:54:00:87:60:2c}
	I0328 00:39:47.144532 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined IP address 192.168.39.15 and MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:47.144687 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHPort
	I0328 00:39:47.144900 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHKeyPath
	I0328 00:39:47.145090 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHKeyPath
	I0328 00:39:47.145247 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHUsername
	I0328 00:39:47.145441 1106824 main.go:141] libmachine: Using SSH client type: native
	I0328 00:39:47.145610 1106824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0328 00:39:47.145628 1106824 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-700024' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-700024/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-700024' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 00:39:47.255963 1106824 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 00:39:47.256036 1106824 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 00:39:47.256083 1106824 buildroot.go:174] setting up certificates
	I0328 00:39:47.256093 1106824 provision.go:84] configureAuth start
	I0328 00:39:47.256105 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetMachineName
	I0328 00:39:47.256431 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetIP
	I0328 00:39:47.259148 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:47.259578 1106824 main.go:141] libmachine: (test-preload-700024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:60:2c", ip: ""} in network mk-test-preload-700024: {Iface:virbr1 ExpiryTime:2024-03-28 01:39:38 +0000 UTC Type:0 Mac:52:54:00:87:60:2c Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:test-preload-700024 Clientid:01:52:54:00:87:60:2c}
	I0328 00:39:47.259606 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined IP address 192.168.39.15 and MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:47.259713 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHHostname
	I0328 00:39:47.261915 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:47.262344 1106824 main.go:141] libmachine: (test-preload-700024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:60:2c", ip: ""} in network mk-test-preload-700024: {Iface:virbr1 ExpiryTime:2024-03-28 01:39:38 +0000 UTC Type:0 Mac:52:54:00:87:60:2c Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:test-preload-700024 Clientid:01:52:54:00:87:60:2c}
	I0328 00:39:47.262375 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined IP address 192.168.39.15 and MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:47.262565 1106824 provision.go:143] copyHostCerts
	I0328 00:39:47.262643 1106824 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 00:39:47.262658 1106824 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 00:39:47.262726 1106824 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 00:39:47.262815 1106824 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 00:39:47.262824 1106824 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 00:39:47.262847 1106824 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 00:39:47.262899 1106824 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 00:39:47.262907 1106824 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 00:39:47.262926 1106824 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 00:39:47.262976 1106824 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.test-preload-700024 san=[127.0.0.1 192.168.39.15 localhost minikube test-preload-700024]
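	
	The provision.go step above generates a server certificate signed by the local CA, with the listed SANs. A rough, self-contained sketch of what such a step involves (stand-in CA, names taken from the log line, and the 26280h expiry taken from CertExpiration in the cluster config above; this is not minikube's actual certificate code):
	
	    package main
	
	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "math/big"
	        "net"
	        "os"
	        "time"
	    )
	
	    func main() {
	        // Error handling elided for brevity; this is a sketch, not production code.
	        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	        caTmpl := &x509.Certificate{
	            SerialNumber:          big.NewInt(1),
	            Subject:               pkix.Name{CommonName: "minikubeCA"},
	            NotBefore:             time.Now(),
	            NotAfter:              time.Now().Add(26280 * time.Hour),
	            IsCA:                  true,
	            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
	            BasicConstraintsValid: true,
	        }
	        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	        caCert, _ := x509.ParseCertificate(caDER)
	
	        // Server certificate with the org and SANs from the provision.go line above.
	        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	        srvTmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(2),
	            Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-700024"}},
	            DNSNames:     []string{"localhost", "minikube", "test-preload-700024"},
	            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.15")},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(26280 * time.Hour),
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	        }
	        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	    }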
	I0328 00:39:47.453544 1106824 provision.go:177] copyRemoteCerts
	I0328 00:39:47.453611 1106824 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 00:39:47.453649 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHHostname
	I0328 00:39:47.456813 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:47.457182 1106824 main.go:141] libmachine: (test-preload-700024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:60:2c", ip: ""} in network mk-test-preload-700024: {Iface:virbr1 ExpiryTime:2024-03-28 01:39:38 +0000 UTC Type:0 Mac:52:54:00:87:60:2c Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:test-preload-700024 Clientid:01:52:54:00:87:60:2c}
	I0328 00:39:47.457220 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined IP address 192.168.39.15 and MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:47.457406 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHPort
	I0328 00:39:47.457648 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHKeyPath
	I0328 00:39:47.457815 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHUsername
	I0328 00:39:47.458011 1106824 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/test-preload-700024/id_rsa Username:docker}
	I0328 00:39:47.540822 1106824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 00:39:47.567302 1106824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0328 00:39:47.593484 1106824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 00:39:47.619251 1106824 provision.go:87] duration metric: took 363.143903ms to configureAuth
	I0328 00:39:47.619286 1106824 buildroot.go:189] setting minikube options for container-runtime
	I0328 00:39:47.619460 1106824 config.go:182] Loaded profile config "test-preload-700024": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0328 00:39:47.619545 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHHostname
	I0328 00:39:47.622366 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:47.622800 1106824 main.go:141] libmachine: (test-preload-700024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:60:2c", ip: ""} in network mk-test-preload-700024: {Iface:virbr1 ExpiryTime:2024-03-28 01:39:38 +0000 UTC Type:0 Mac:52:54:00:87:60:2c Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:test-preload-700024 Clientid:01:52:54:00:87:60:2c}
	I0328 00:39:47.622831 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined IP address 192.168.39.15 and MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:47.623005 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHPort
	I0328 00:39:47.623250 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHKeyPath
	I0328 00:39:47.623431 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHKeyPath
	I0328 00:39:47.623614 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHUsername
	I0328 00:39:47.623839 1106824 main.go:141] libmachine: Using SSH client type: native
	I0328 00:39:47.624026 1106824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0328 00:39:47.624047 1106824 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 00:39:47.891625 1106824 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 00:39:47.891662 1106824 machine.go:97] duration metric: took 980.591049ms to provisionDockerMachine
	I0328 00:39:47.891679 1106824 start.go:293] postStartSetup for "test-preload-700024" (driver="kvm2")
	I0328 00:39:47.891693 1106824 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 00:39:47.891716 1106824 main.go:141] libmachine: (test-preload-700024) Calling .DriverName
	I0328 00:39:47.892147 1106824 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 00:39:47.892181 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHHostname
	I0328 00:39:47.895196 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:47.895579 1106824 main.go:141] libmachine: (test-preload-700024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:60:2c", ip: ""} in network mk-test-preload-700024: {Iface:virbr1 ExpiryTime:2024-03-28 01:39:38 +0000 UTC Type:0 Mac:52:54:00:87:60:2c Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:test-preload-700024 Clientid:01:52:54:00:87:60:2c}
	I0328 00:39:47.895604 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined IP address 192.168.39.15 and MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:47.895769 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHPort
	I0328 00:39:47.895987 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHKeyPath
	I0328 00:39:47.896165 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHUsername
	I0328 00:39:47.896308 1106824 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/test-preload-700024/id_rsa Username:docker}
	I0328 00:39:47.977824 1106824 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 00:39:47.982463 1106824 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 00:39:47.982504 1106824 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 00:39:47.982586 1106824 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 00:39:47.982680 1106824 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 00:39:47.982770 1106824 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 00:39:47.992543 1106824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 00:39:48.018511 1106824 start.go:296] duration metric: took 126.813088ms for postStartSetup
	I0328 00:39:48.018560 1106824 fix.go:56] duration metric: took 20.587583301s for fixHost
	I0328 00:39:48.018584 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHHostname
	I0328 00:39:48.021209 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:48.021536 1106824 main.go:141] libmachine: (test-preload-700024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:60:2c", ip: ""} in network mk-test-preload-700024: {Iface:virbr1 ExpiryTime:2024-03-28 01:39:38 +0000 UTC Type:0 Mac:52:54:00:87:60:2c Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:test-preload-700024 Clientid:01:52:54:00:87:60:2c}
	I0328 00:39:48.021584 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined IP address 192.168.39.15 and MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:48.021812 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHPort
	I0328 00:39:48.022019 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHKeyPath
	I0328 00:39:48.022215 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHKeyPath
	I0328 00:39:48.022374 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHUsername
	I0328 00:39:48.022549 1106824 main.go:141] libmachine: Using SSH client type: native
	I0328 00:39:48.022712 1106824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I0328 00:39:48.022723 1106824 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 00:39:48.123366 1106824 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711586388.094056651
	
	I0328 00:39:48.123398 1106824 fix.go:216] guest clock: 1711586388.094056651
	I0328 00:39:48.123409 1106824 fix.go:229] Guest: 2024-03-28 00:39:48.094056651 +0000 UTC Remote: 2024-03-28 00:39:48.018564459 +0000 UTC m=+33.906967388 (delta=75.492192ms)
	I0328 00:39:48.123461 1106824 fix.go:200] guest clock delta is within tolerance: 75.492192ms
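	
	The fix.go lines above compare the guest clock against the host-side timestamp and only resynchronize when the drift exceeds a tolerance. A small sketch of that comparison using the two timestamps from the log; the one-second threshold is an assumed value for illustration, not necessarily the one minikube uses:
	
	    package main
	
	    import (
	        "fmt"
	        "time"
	    )
	
	    func main() {
	        // Timestamps copied from the fix.go lines above.
	        guest := time.Unix(1711586388, 94056651)                              // guest clock reading
	        host := time.Date(2024, time.March, 28, 0, 39, 48, 18564459, time.UTC) // host-side reading
	
	        delta := guest.Sub(host)                // 75.492192ms, matching the log
	        const tolerance = time.Second           // assumed threshold for illustration only
	
	        fmt.Printf("delta=%v, within tolerance=%v: %v\n", delta, tolerance, delta.Abs() < tolerance)
	    }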
	I0328 00:39:48.123469 1106824 start.go:83] releasing machines lock for "test-preload-700024", held for 20.692503428s
	I0328 00:39:48.123497 1106824 main.go:141] libmachine: (test-preload-700024) Calling .DriverName
	I0328 00:39:48.123806 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetIP
	I0328 00:39:48.126610 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:48.126960 1106824 main.go:141] libmachine: (test-preload-700024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:60:2c", ip: ""} in network mk-test-preload-700024: {Iface:virbr1 ExpiryTime:2024-03-28 01:39:38 +0000 UTC Type:0 Mac:52:54:00:87:60:2c Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:test-preload-700024 Clientid:01:52:54:00:87:60:2c}
	I0328 00:39:48.126994 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined IP address 192.168.39.15 and MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:48.127199 1106824 main.go:141] libmachine: (test-preload-700024) Calling .DriverName
	I0328 00:39:48.127864 1106824 main.go:141] libmachine: (test-preload-700024) Calling .DriverName
	I0328 00:39:48.128099 1106824 main.go:141] libmachine: (test-preload-700024) Calling .DriverName
	I0328 00:39:48.128232 1106824 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 00:39:48.128278 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHHostname
	I0328 00:39:48.128335 1106824 ssh_runner.go:195] Run: cat /version.json
	I0328 00:39:48.128360 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHHostname
	I0328 00:39:48.131199 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:48.131451 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:48.131559 1106824 main.go:141] libmachine: (test-preload-700024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:60:2c", ip: ""} in network mk-test-preload-700024: {Iface:virbr1 ExpiryTime:2024-03-28 01:39:38 +0000 UTC Type:0 Mac:52:54:00:87:60:2c Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:test-preload-700024 Clientid:01:52:54:00:87:60:2c}
	I0328 00:39:48.131604 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined IP address 192.168.39.15 and MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:48.131759 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHPort
	I0328 00:39:48.131876 1106824 main.go:141] libmachine: (test-preload-700024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:60:2c", ip: ""} in network mk-test-preload-700024: {Iface:virbr1 ExpiryTime:2024-03-28 01:39:38 +0000 UTC Type:0 Mac:52:54:00:87:60:2c Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:test-preload-700024 Clientid:01:52:54:00:87:60:2c}
	I0328 00:39:48.131905 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined IP address 192.168.39.15 and MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:48.131972 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHKeyPath
	I0328 00:39:48.132078 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHPort
	I0328 00:39:48.132169 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHUsername
	I0328 00:39:48.132261 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHKeyPath
	I0328 00:39:48.132335 1106824 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/test-preload-700024/id_rsa Username:docker}
	I0328 00:39:48.132392 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHUsername
	I0328 00:39:48.132525 1106824 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/test-preload-700024/id_rsa Username:docker}
	I0328 00:39:48.207991 1106824 ssh_runner.go:195] Run: systemctl --version
	I0328 00:39:48.243358 1106824 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 00:39:48.388384 1106824 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 00:39:48.395284 1106824 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 00:39:48.395370 1106824 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 00:39:48.411918 1106824 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 00:39:48.411948 1106824 start.go:494] detecting cgroup driver to use...
	I0328 00:39:48.412017 1106824 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 00:39:48.428936 1106824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 00:39:48.444230 1106824 docker.go:217] disabling cri-docker service (if available) ...
	I0328 00:39:48.444314 1106824 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 00:39:48.460016 1106824 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 00:39:48.475467 1106824 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 00:39:48.597215 1106824 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 00:39:48.739591 1106824 docker.go:233] disabling docker service ...
	I0328 00:39:48.739692 1106824 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 00:39:48.754072 1106824 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 00:39:48.767384 1106824 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 00:39:48.903250 1106824 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 00:39:49.018811 1106824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 00:39:49.033753 1106824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 00:39:49.053376 1106824 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0328 00:39:49.053453 1106824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:39:49.063997 1106824 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 00:39:49.064070 1106824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:39:49.074883 1106824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:39:49.085728 1106824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:39:49.096809 1106824 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 00:39:49.108175 1106824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:39:49.119281 1106824 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:39:49.138293 1106824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:39:49.149019 1106824 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 00:39:49.158713 1106824 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 00:39:49.158769 1106824 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 00:39:49.172570 1106824 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
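The lines above are a probe-then-fallback: reading net.bridge.bridge-nf-call-iptables fails because the proc entry does not exist yet, so br_netfilter is loaded with modprobe, and IPv4 forwarding is enabled either way. A rough local sketch of that ordering (minikube runs these commands through its SSH runner; this version shells out directly and needs root):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func run(name string, args ...string) error {
        cmd := exec.Command(name, args...)
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        return cmd.Run()
    }

    func main() {
        // Probe: does the bridge netfilter sysctl exist yet?
        if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
            // Probe failed (the log above shows exit status 255): load the module.
            if err := run("modprobe", "br_netfilter"); err != nil {
                fmt.Fprintln(os.Stderr, "modprobe br_netfilter failed:", err)
            }
        }
        // Enable IPv4 forwarding either way, as the runner does next.
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
            fmt.Fprintln(os.Stderr, "enabling ip_forward failed:", err)
        }
    }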
	I0328 00:39:49.182426 1106824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:39:49.296273 1106824 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 00:39:49.447187 1106824 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 00:39:49.447294 1106824 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 00:39:49.452849 1106824 start.go:562] Will wait 60s for crictl version
	I0328 00:39:49.452927 1106824 ssh_runner.go:195] Run: which crictl
	I0328 00:39:49.457070 1106824 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 00:39:49.498472 1106824 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
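The 60-second waits above poll until the CRI-O socket appears and crictl can report a version. A small sketch of that kind of wait (the 500 ms polling interval and helper name are assumptions, not taken from minikube):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForPath polls for path until it exists or the deadline passes.
    func waitForPath(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
            }
            time.Sleep(500 * time.Millisecond) // assumed polling interval
        }
    }

    func main() {
        if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("crio.sock is present; safe to query crictl version")
    }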
	I0328 00:39:49.498590 1106824 ssh_runner.go:195] Run: crio --version
	I0328 00:39:49.528867 1106824 ssh_runner.go:195] Run: crio --version
	I0328 00:39:49.559846 1106824 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0328 00:39:49.561278 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetIP
	I0328 00:39:49.564081 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:49.564401 1106824 main.go:141] libmachine: (test-preload-700024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:60:2c", ip: ""} in network mk-test-preload-700024: {Iface:virbr1 ExpiryTime:2024-03-28 01:39:38 +0000 UTC Type:0 Mac:52:54:00:87:60:2c Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:test-preload-700024 Clientid:01:52:54:00:87:60:2c}
	I0328 00:39:49.564428 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined IP address 192.168.39.15 and MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:39:49.564699 1106824 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0328 00:39:49.569170 1106824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 00:39:49.582135 1106824 kubeadm.go:877] updating cluster {Name:test-preload-700024 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-700024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 00:39:49.582286 1106824 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0328 00:39:49.582349 1106824 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 00:39:49.620389 1106824 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0328 00:39:49.620462 1106824 ssh_runner.go:195] Run: which lz4
	I0328 00:39:49.624775 1106824 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0328 00:39:49.629094 1106824 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 00:39:49.629132 1106824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0328 00:39:51.367217 1106824 crio.go:462] duration metric: took 1.742489548s to copy over tarball
	I0328 00:39:51.367308 1106824 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 00:39:53.848819 1106824 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.481482494s)
	I0328 00:39:53.848846 1106824 crio.go:469] duration metric: took 2.481589087s to extract the tarball
	I0328 00:39:53.848854 1106824 ssh_runner.go:146] rm: /preloaded.tar.lz4
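The preload path above is: stat the target so an existing tarball is not copied twice, scp the ~459 MB preload archive when it is missing, unpack it into /var with lz4, then delete the tarball. A condensed local sketch of that flow (minikube performs every step over SSH; the scp itself is elided here):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const tarball = "/preloaded.tar.lz4"

        // 1. Existence check: only copy if the tarball is not already on the target.
        if _, err := os.Stat(tarball); os.IsNotExist(err) {
            // In the real flow this is an scp of the cached preloaded-images tarball.
            fmt.Println("tarball missing, would copy it over now")
        }

        // 2. Extract into /var, preserving security xattrs, using lz4 decompression.
        extract := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        extract.Stdout, extract.Stderr = os.Stdout, os.Stderr
        if err := extract.Run(); err != nil {
            fmt.Fprintln(os.Stderr, "extract failed:", err)
            return
        }

        // 3. Remove the tarball once the image layers are unpacked.
        _ = os.Remove(tarball)
    }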
	I0328 00:39:53.890282 1106824 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 00:39:53.938796 1106824 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0328 00:39:53.938823 1106824 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0328 00:39:53.938896 1106824 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 00:39:53.938915 1106824 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0328 00:39:53.938944 1106824 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0328 00:39:53.938977 1106824 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0328 00:39:53.939016 1106824 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0328 00:39:53.938983 1106824 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0328 00:39:53.938915 1106824 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0328 00:39:53.938999 1106824 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0328 00:39:53.940492 1106824 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0328 00:39:53.940523 1106824 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0328 00:39:53.940526 1106824 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0328 00:39:53.940503 1106824 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0328 00:39:53.940498 1106824 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0328 00:39:53.940561 1106824 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 00:39:53.940566 1106824 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0328 00:39:53.940569 1106824 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0328 00:39:54.137246 1106824 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0328 00:39:54.148964 1106824 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0328 00:39:54.154701 1106824 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0328 00:39:54.156013 1106824 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0328 00:39:54.156025 1106824 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0328 00:39:54.157057 1106824 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0328 00:39:54.165604 1106824 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0328 00:39:54.216618 1106824 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0328 00:39:54.216671 1106824 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0328 00:39:54.216716 1106824 ssh_runner.go:195] Run: which crictl
	I0328 00:39:54.326509 1106824 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0328 00:39:54.326553 1106824 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0328 00:39:54.326580 1106824 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0328 00:39:54.326606 1106824 ssh_runner.go:195] Run: which crictl
	I0328 00:39:54.326614 1106824 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0328 00:39:54.326628 1106824 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0328 00:39:54.326658 1106824 ssh_runner.go:195] Run: which crictl
	I0328 00:39:54.326671 1106824 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0328 00:39:54.326694 1106824 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0328 00:39:54.326727 1106824 ssh_runner.go:195] Run: which crictl
	I0328 00:39:54.326659 1106824 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0328 00:39:54.326787 1106824 ssh_runner.go:195] Run: which crictl
	I0328 00:39:54.330205 1106824 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0328 00:39:54.330244 1106824 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0328 00:39:54.330278 1106824 ssh_runner.go:195] Run: which crictl
	I0328 00:39:54.330306 1106824 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0328 00:39:54.330343 1106824 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0328 00:39:54.330351 1106824 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0328 00:39:54.330384 1106824 ssh_runner.go:195] Run: which crictl
	I0328 00:39:54.336211 1106824 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0328 00:39:54.339583 1106824 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0328 00:39:54.339688 1106824 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0328 00:39:54.339739 1106824 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0328 00:39:54.415602 1106824 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0328 00:39:54.415684 1106824 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0328 00:39:54.415728 1106824 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0328 00:39:54.415818 1106824 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0328 00:39:54.459889 1106824 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0328 00:39:54.459952 1106824 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0328 00:39:54.460019 1106824 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0328 00:39:54.460035 1106824 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0328 00:39:54.461377 1106824 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0328 00:39:54.461441 1106824 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0328 00:39:54.461442 1106824 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0328 00:39:54.461548 1106824 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0328 00:39:54.501508 1106824 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0328 00:39:54.501605 1106824 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0328 00:39:54.501622 1106824 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0328 00:39:54.501629 1106824 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0328 00:39:54.501652 1106824 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0328 00:39:54.507179 1106824 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0328 00:39:54.507255 1106824 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0328 00:39:54.507293 1106824 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0328 00:39:54.507303 1106824 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0328 00:39:54.507316 1106824 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0328 00:39:54.507339 1106824 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0328 00:39:54.510738 1106824 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0328 00:39:55.221151 1106824 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 00:39:57.162876 1106824 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: (2.655558372s)
	I0328 00:39:57.162916 1106824 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0328 00:39:57.162987 1106824 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.941794378s)
	I0328 00:39:57.162990 1106824 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4: (2.661301706s)
	I0328 00:39:57.163051 1106824 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0328 00:39:57.163078 1106824 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0328 00:39:57.163146 1106824 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0328 00:39:57.513220 1106824 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0328 00:39:57.513276 1106824 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0328 00:39:57.513340 1106824 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0328 00:39:58.261035 1106824 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0328 00:39:58.261102 1106824 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0328 00:39:58.261165 1106824 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0328 00:40:00.315766 1106824 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.054566686s)
	I0328 00:40:00.315809 1106824 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0328 00:40:00.315843 1106824 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0328 00:40:00.315899 1106824 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0328 00:40:01.062848 1106824 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0328 00:40:01.062907 1106824 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0328 00:40:01.062968 1106824 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0328 00:40:01.516171 1106824 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0328 00:40:01.516218 1106824 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0328 00:40:01.516284 1106824 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0328 00:40:01.660404 1106824 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0328 00:40:01.660471 1106824 cache_images.go:123] Successfully loaded all cached images
	I0328 00:40:01.660478 1106824 cache_images.go:92] duration metric: took 7.721641495s to LoadCachedImages
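Each image above goes through the same routine: inspect it in the container runtime, and if the expected hash is absent, remove the stale tag with crictl, reuse or copy the cached archive under /var/lib/minikube/images, and podman-load it. A compressed sketch of that per-image loop (the helper and the two-image map are illustrative, not minikube's cache_images API):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // loadFromCache ensures one image is present in CRI-O by loading its cached archive.
    func loadFromCache(image, archive string) error {
        // 1. Already present? podman inspect succeeds only if the image exists.
        if err := exec.Command("sudo", "podman", "image", "inspect",
            "--format", "{{.Id}}", image).Run(); err == nil {
            return nil
        }
        // 2. Drop any stale tag so the reload is clean.
        _ = exec.Command("sudo", "crictl", "rmi", image).Run()
        // 3. Load the archive that was copied (or already present) on the node.
        if out, err := exec.Command("sudo", "podman", "load", "-i", archive).CombinedOutput(); err != nil {
            return fmt.Errorf("podman load %s: %v\n%s", archive, err, out)
        }
        return nil
    }

    func main() {
        images := map[string]string{
            "registry.k8s.io/kube-proxy:v1.24.4": "/var/lib/minikube/images/kube-proxy_v1.24.4",
            "registry.k8s.io/pause:3.7":          "/var/lib/minikube/images/pause_3.7",
        }
        for image, archive := range images {
            if err := loadFromCache(image, archive); err != nil {
                fmt.Println(err)
            }
        }
    }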
	I0328 00:40:01.660496 1106824 kubeadm.go:928] updating node { 192.168.39.15 8443 v1.24.4 crio true true} ...
	I0328 00:40:01.660636 1106824 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-700024 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-700024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
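The kubelet drop-in above is rendered from the node's settings: the override ExecStart pins the versioned binary and passes the CRI endpoint, hostname override, and node IP. A small sketch that assembles the same ExecStart line from those inputs (the template shape is inferred from the unit shown above, not copied from minikube's source):

    package main

    import "fmt"

    func kubeletExecStart(version, nodeName, nodeIP string) string {
        return fmt.Sprintf(
            "/var/lib/minikube/binaries/%s/kubelet"+
                " --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf"+
                " --config=/var/lib/kubelet/config.yaml"+
                " --container-runtime-endpoint=unix:///var/run/crio/crio.sock"+
                " --hostname-override=%s"+
                " --kubeconfig=/etc/kubernetes/kubelet.conf"+
                " --node-ip=%s",
            version, nodeName, nodeIP)
    }

    func main() {
        fmt.Println(kubeletExecStart("v1.24.4", "test-preload-700024", "192.168.39.15"))
    }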
	I0328 00:40:01.660725 1106824 ssh_runner.go:195] Run: crio config
	I0328 00:40:01.713838 1106824 cni.go:84] Creating CNI manager for ""
	I0328 00:40:01.713866 1106824 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 00:40:01.713877 1106824 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 00:40:01.713900 1106824 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.15 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-700024 NodeName:test-preload-700024 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 00:40:01.714062 1106824 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-700024"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 00:40:01.714158 1106824 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0328 00:40:01.724302 1106824 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 00:40:01.724375 1106824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 00:40:01.733979 1106824 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0328 00:40:01.751536 1106824 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 00:40:01.768654 1106824 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0328 00:40:01.786239 1106824 ssh_runner.go:195] Run: grep 192.168.39.15	control-plane.minikube.internal$ /etc/hosts
	I0328 00:40:01.790174 1106824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 00:40:01.801886 1106824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:40:01.918156 1106824 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 00:40:01.937336 1106824 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/test-preload-700024 for IP: 192.168.39.15
	I0328 00:40:01.937365 1106824 certs.go:194] generating shared ca certs ...
	I0328 00:40:01.937389 1106824 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:40:01.937564 1106824 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 00:40:01.937623 1106824 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 00:40:01.937640 1106824 certs.go:256] generating profile certs ...
	I0328 00:40:01.937754 1106824 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/test-preload-700024/client.key
	I0328 00:40:01.937815 1106824 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/test-preload-700024/apiserver.key.879fdb65
	I0328 00:40:01.937852 1106824 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/test-preload-700024/proxy-client.key
	I0328 00:40:01.937956 1106824 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 00:40:01.937985 1106824 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 00:40:01.937993 1106824 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 00:40:01.938013 1106824 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 00:40:01.938034 1106824 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 00:40:01.938054 1106824 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 00:40:01.938090 1106824 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 00:40:01.938793 1106824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 00:40:01.976743 1106824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 00:40:02.022792 1106824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 00:40:02.055251 1106824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 00:40:02.083371 1106824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/test-preload-700024/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0328 00:40:02.121986 1106824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/test-preload-700024/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 00:40:02.146482 1106824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/test-preload-700024/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 00:40:02.170731 1106824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/test-preload-700024/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0328 00:40:02.194335 1106824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 00:40:02.218121 1106824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 00:40:02.244454 1106824 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 00:40:02.269892 1106824 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 00:40:02.288108 1106824 ssh_runner.go:195] Run: openssl version
	I0328 00:40:02.294059 1106824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 00:40:02.305690 1106824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:40:02.310575 1106824 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:40:02.310647 1106824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:40:02.316423 1106824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 00:40:02.327851 1106824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 00:40:02.339365 1106824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 00:40:02.343994 1106824 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 00:40:02.344065 1106824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 00:40:02.350292 1106824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 00:40:02.362677 1106824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 00:40:02.375138 1106824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 00:40:02.379920 1106824 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 00:40:02.379990 1106824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 00:40:02.386211 1106824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
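The `b5213941.0`, `51391683.0`, and `3ec20f2e.0` names above follow OpenSSL's subject-hash convention: `openssl x509 -hash` prints the hash of the certificate subject, and OpenSSL resolves trust anchors through `<hash>.N` symlinks under /etc/ssl/certs. A sketch reproducing one such link (needs root, and simply shells out to openssl as the runner does):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"

        // Ask openssl for the subject hash, exactly as the commands above do.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        hash := strings.TrimSpace(string(out))

        // OpenSSL looks up trust anchors via <subject-hash>.<n> symlinks.
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // mimic `ln -fs`: replace an existing link if present
        if err := os.Symlink(cert, link); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }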
	I0328 00:40:02.398128 1106824 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 00:40:02.403048 1106824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 00:40:02.409272 1106824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 00:40:02.415340 1106824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 00:40:02.421356 1106824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 00:40:02.428401 1106824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 00:40:02.434942 1106824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
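Each `-checkend 86400` call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a failure would force regeneration. The same check expressed directly against the certificate's NotAfter, as a sketch (the file path is just one of the certs checked above):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate in pemPath expires within d.
    func expiresWithin(pemPath string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        // Equivalent of: openssl x509 -noout -in <cert> -checkend 86400
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        if soon {
            fmt.Println("certificate expires within 24h; would regenerate")
        } else {
            fmt.Println("certificate is valid for at least another 24h")
        }
    }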
	I0328 00:40:02.441166 1106824 kubeadm.go:391] StartCluster: {Name:test-preload-700024 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-700024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:40:02.441256 1106824 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 00:40:02.441304 1106824 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 00:40:02.482870 1106824 cri.go:89] found id: ""
	I0328 00:40:02.482945 1106824 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 00:40:02.494192 1106824 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 00:40:02.494221 1106824 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 00:40:02.494226 1106824 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 00:40:02.494297 1106824 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 00:40:02.504595 1106824 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 00:40:02.505193 1106824 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-700024" does not appear in /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 00:40:02.505355 1106824 kubeconfig.go:62] /home/jenkins/minikube-integration/18485-1069254/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-700024" cluster setting kubeconfig missing "test-preload-700024" context setting]
	I0328 00:40:02.505707 1106824 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:40:02.506601 1106824 kapi.go:59] client config for test-preload-700024: &rest.Config{Host:"https://192.168.39.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/test-preload-700024/client.crt", KeyFile:"/home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/test-preload-700024/client.key", CAFile:"/home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]u
int8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c58000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0328 00:40:02.507618 1106824 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 00:40:02.518259 1106824 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.15
	I0328 00:40:02.518293 1106824 kubeadm.go:1154] stopping kube-system containers ...
	I0328 00:40:02.518306 1106824 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 00:40:02.518349 1106824 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 00:40:02.560391 1106824 cri.go:89] found id: ""
	I0328 00:40:02.560495 1106824 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 00:40:02.578746 1106824 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 00:40:02.589264 1106824 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 00:40:02.589290 1106824 kubeadm.go:156] found existing configuration files:
	
	I0328 00:40:02.589348 1106824 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 00:40:02.599346 1106824 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 00:40:02.599427 1106824 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 00:40:02.609774 1106824 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 00:40:02.619773 1106824 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 00:40:02.619845 1106824 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 00:40:02.629988 1106824 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 00:40:02.639700 1106824 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 00:40:02.639760 1106824 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 00:40:02.650126 1106824 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 00:40:02.660208 1106824 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 00:40:02.660282 1106824 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 00:40:02.670733 1106824 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 00:40:02.681165 1106824 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 00:40:02.769705 1106824 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 00:40:03.465619 1106824 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 00:40:03.730385 1106824 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 00:40:03.805329 1106824 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0328 00:40:03.891913 1106824 api_server.go:52] waiting for apiserver process to appear ...
	I0328 00:40:03.892020 1106824 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:40:04.392758 1106824 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:40:04.892995 1106824 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:40:04.952170 1106824 api_server.go:72] duration metric: took 1.060253858s to wait for apiserver process to appear ...
	I0328 00:40:04.952207 1106824 api_server.go:88] waiting for apiserver healthz status ...
	I0328 00:40:04.952232 1106824 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I0328 00:40:04.952864 1106824 api_server.go:269] stopped: https://192.168.39.15:8443/healthz: Get "https://192.168.39.15:8443/healthz": dial tcp 192.168.39.15:8443: connect: connection refused
	I0328 00:40:05.452609 1106824 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I0328 00:40:08.670685 1106824 api_server.go:279] https://192.168.39.15:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 00:40:08.670725 1106824 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 00:40:08.670745 1106824 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I0328 00:40:08.696749 1106824 api_server.go:279] https://192.168.39.15:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 00:40:08.696782 1106824 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 00:40:08.953225 1106824 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I0328 00:40:08.958583 1106824 api_server.go:279] https://192.168.39.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0328 00:40:08.958615 1106824 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0328 00:40:09.452731 1106824 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I0328 00:40:09.472815 1106824 api_server.go:279] https://192.168.39.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0328 00:40:09.472853 1106824 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0328 00:40:09.952378 1106824 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I0328 00:40:09.957608 1106824 api_server.go:279] https://192.168.39.15:8443/healthz returned 200:
	ok
	I0328 00:40:09.963790 1106824 api_server.go:141] control plane version: v1.24.4
	I0328 00:40:09.963818 1106824 api_server.go:131] duration metric: took 5.011603321s to wait for apiserver health ...
	I0328 00:40:09.963827 1106824 cni.go:84] Creating CNI manager for ""
	I0328 00:40:09.963834 1106824 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 00:40:09.965999 1106824 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 00:40:09.967639 1106824 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 00:40:09.978495 1106824 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 00:40:10.002656 1106824 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 00:40:10.017729 1106824 system_pods.go:59] 7 kube-system pods found
	I0328 00:40:10.017769 1106824 system_pods.go:61] "coredns-6d4b75cb6d-bj24b" [bf58ddc7-916a-4347-8d94-783a3bb0a98f] Running
	I0328 00:40:10.017777 1106824 system_pods.go:61] "etcd-test-preload-700024" [5d365027-b67f-421f-89a0-f7edbea72689] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 00:40:10.017783 1106824 system_pods.go:61] "kube-apiserver-test-preload-700024" [b1ad15a4-62bf-4c3d-a0db-03f7402f329f] Running
	I0328 00:40:10.017795 1106824 system_pods.go:61] "kube-controller-manager-test-preload-700024" [60823a15-83b2-4dd2-a653-f7a7a8e11d5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 00:40:10.017801 1106824 system_pods.go:61] "kube-proxy-624lz" [a0c9fe23-917f-454c-a6c5-e87f373e6836] Running
	I0328 00:40:10.017808 1106824 system_pods.go:61] "kube-scheduler-test-preload-700024" [ef0f53b0-6d4a-4e61-bf75-28542644bde1] Running
	I0328 00:40:10.017817 1106824 system_pods.go:61] "storage-provisioner" [d82dc3ad-0639-4847-8ade-054194034bfd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 00:40:10.017827 1106824 system_pods.go:74] duration metric: took 15.144326ms to wait for pod list to return data ...
	I0328 00:40:10.017849 1106824 node_conditions.go:102] verifying NodePressure condition ...
	I0328 00:40:10.021190 1106824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 00:40:10.021228 1106824 node_conditions.go:123] node cpu capacity is 2
	I0328 00:40:10.021239 1106824 node_conditions.go:105] duration metric: took 3.384667ms to run NodePressure ...
	I0328 00:40:10.021260 1106824 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 00:40:10.342957 1106824 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 00:40:10.351299 1106824 kubeadm.go:733] kubelet initialised
	I0328 00:40:10.351323 1106824 kubeadm.go:734] duration metric: took 8.332111ms waiting for restarted kubelet to initialise ...
	I0328 00:40:10.351332 1106824 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 00:40:10.358524 1106824 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-bj24b" in "kube-system" namespace to be "Ready" ...
	I0328 00:40:10.370822 1106824 pod_ready.go:97] node "test-preload-700024" hosting pod "coredns-6d4b75cb6d-bj24b" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-700024" has status "Ready":"False"
	I0328 00:40:10.370856 1106824 pod_ready.go:81] duration metric: took 12.298063ms for pod "coredns-6d4b75cb6d-bj24b" in "kube-system" namespace to be "Ready" ...
	E0328 00:40:10.370869 1106824 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-700024" hosting pod "coredns-6d4b75cb6d-bj24b" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-700024" has status "Ready":"False"
	I0328 00:40:10.370882 1106824 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-700024" in "kube-system" namespace to be "Ready" ...
	I0328 00:40:10.381566 1106824 pod_ready.go:97] node "test-preload-700024" hosting pod "etcd-test-preload-700024" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-700024" has status "Ready":"False"
	I0328 00:40:10.381596 1106824 pod_ready.go:81] duration metric: took 10.701448ms for pod "etcd-test-preload-700024" in "kube-system" namespace to be "Ready" ...
	E0328 00:40:10.381609 1106824 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-700024" hosting pod "etcd-test-preload-700024" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-700024" has status "Ready":"False"
	I0328 00:40:10.381619 1106824 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-700024" in "kube-system" namespace to be "Ready" ...
	I0328 00:40:10.387753 1106824 pod_ready.go:97] node "test-preload-700024" hosting pod "kube-apiserver-test-preload-700024" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-700024" has status "Ready":"False"
	I0328 00:40:10.387777 1106824 pod_ready.go:81] duration metric: took 6.147487ms for pod "kube-apiserver-test-preload-700024" in "kube-system" namespace to be "Ready" ...
	E0328 00:40:10.387787 1106824 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-700024" hosting pod "kube-apiserver-test-preload-700024" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-700024" has status "Ready":"False"
	I0328 00:40:10.387793 1106824 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-700024" in "kube-system" namespace to be "Ready" ...
	I0328 00:40:10.406910 1106824 pod_ready.go:97] node "test-preload-700024" hosting pod "kube-controller-manager-test-preload-700024" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-700024" has status "Ready":"False"
	I0328 00:40:10.406940 1106824 pod_ready.go:81] duration metric: took 19.136501ms for pod "kube-controller-manager-test-preload-700024" in "kube-system" namespace to be "Ready" ...
	E0328 00:40:10.406950 1106824 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-700024" hosting pod "kube-controller-manager-test-preload-700024" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-700024" has status "Ready":"False"
	I0328 00:40:10.406957 1106824 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-624lz" in "kube-system" namespace to be "Ready" ...
	I0328 00:40:10.816077 1106824 pod_ready.go:97] node "test-preload-700024" hosting pod "kube-proxy-624lz" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-700024" has status "Ready":"False"
	I0328 00:40:10.816110 1106824 pod_ready.go:81] duration metric: took 409.144331ms for pod "kube-proxy-624lz" in "kube-system" namespace to be "Ready" ...
	E0328 00:40:10.816120 1106824 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-700024" hosting pod "kube-proxy-624lz" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-700024" has status "Ready":"False"
	I0328 00:40:10.816127 1106824 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-700024" in "kube-system" namespace to be "Ready" ...
	I0328 00:40:11.206342 1106824 pod_ready.go:97] node "test-preload-700024" hosting pod "kube-scheduler-test-preload-700024" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-700024" has status "Ready":"False"
	I0328 00:40:11.206371 1106824 pod_ready.go:81] duration metric: took 390.237774ms for pod "kube-scheduler-test-preload-700024" in "kube-system" namespace to be "Ready" ...
	E0328 00:40:11.206381 1106824 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-700024" hosting pod "kube-scheduler-test-preload-700024" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-700024" has status "Ready":"False"
	I0328 00:40:11.206392 1106824 pod_ready.go:38] duration metric: took 855.049455ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 00:40:11.206422 1106824 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 00:40:11.219663 1106824 ops.go:34] apiserver oom_adj: -16
	I0328 00:40:11.219687 1106824 kubeadm.go:591] duration metric: took 8.725437956s to restartPrimaryControlPlane
	I0328 00:40:11.219703 1106824 kubeadm.go:393] duration metric: took 8.778545956s to StartCluster
	I0328 00:40:11.219722 1106824 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:40:11.219815 1106824 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 00:40:11.220459 1106824 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:40:11.220740 1106824 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 00:40:11.222579 1106824 out.go:177] * Verifying Kubernetes components...
	I0328 00:40:11.220823 1106824 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 00:40:11.220965 1106824 config.go:182] Loaded profile config "test-preload-700024": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0328 00:40:11.222688 1106824 addons.go:69] Setting storage-provisioner=true in profile "test-preload-700024"
	I0328 00:40:11.223840 1106824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:40:11.223866 1106824 addons.go:234] Setting addon storage-provisioner=true in "test-preload-700024"
	W0328 00:40:11.223880 1106824 addons.go:243] addon storage-provisioner should already be in state true
	I0328 00:40:11.222694 1106824 addons.go:69] Setting default-storageclass=true in profile "test-preload-700024"
	I0328 00:40:11.223920 1106824 host.go:66] Checking if "test-preload-700024" exists ...
	I0328 00:40:11.223931 1106824 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-700024"
	I0328 00:40:11.224233 1106824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:40:11.224274 1106824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:40:11.224236 1106824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:40:11.224359 1106824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:40:11.239157 1106824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44609
	I0328 00:40:11.239559 1106824 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:40:11.240056 1106824 main.go:141] libmachine: Using API Version  1
	I0328 00:40:11.240083 1106824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:40:11.240446 1106824 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:40:11.240653 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetState
	I0328 00:40:11.242839 1106824 kapi.go:59] client config for test-preload-700024: &rest.Config{Host:"https://192.168.39.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/test-preload-700024/client.crt", KeyFile:"/home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/test-preload-700024/client.key", CAFile:"/home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]u
int8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c58000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0328 00:40:11.243076 1106824 addons.go:234] Setting addon default-storageclass=true in "test-preload-700024"
	W0328 00:40:11.243090 1106824 addons.go:243] addon default-storageclass should already be in state true
	I0328 00:40:11.243112 1106824 host.go:66] Checking if "test-preload-700024" exists ...
	I0328 00:40:11.243344 1106824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:40:11.243378 1106824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:40:11.244404 1106824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44615
	I0328 00:40:11.244844 1106824 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:40:11.245340 1106824 main.go:141] libmachine: Using API Version  1
	I0328 00:40:11.245373 1106824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:40:11.245722 1106824 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:40:11.246340 1106824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:40:11.246384 1106824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:40:11.258787 1106824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39541
	I0328 00:40:11.259319 1106824 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:40:11.259889 1106824 main.go:141] libmachine: Using API Version  1
	I0328 00:40:11.259916 1106824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:40:11.260236 1106824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36851
	I0328 00:40:11.260309 1106824 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:40:11.260583 1106824 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:40:11.260899 1106824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:40:11.260955 1106824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:40:11.261019 1106824 main.go:141] libmachine: Using API Version  1
	I0328 00:40:11.261042 1106824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:40:11.261351 1106824 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:40:11.261594 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetState
	I0328 00:40:11.263266 1106824 main.go:141] libmachine: (test-preload-700024) Calling .DriverName
	I0328 00:40:11.265367 1106824 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 00:40:11.266766 1106824 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 00:40:11.266788 1106824 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 00:40:11.266808 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHHostname
	I0328 00:40:11.269531 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:40:11.269911 1106824 main.go:141] libmachine: (test-preload-700024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:60:2c", ip: ""} in network mk-test-preload-700024: {Iface:virbr1 ExpiryTime:2024-03-28 01:39:38 +0000 UTC Type:0 Mac:52:54:00:87:60:2c Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:test-preload-700024 Clientid:01:52:54:00:87:60:2c}
	I0328 00:40:11.269939 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined IP address 192.168.39.15 and MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:40:11.270103 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHPort
	I0328 00:40:11.270294 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHKeyPath
	I0328 00:40:11.270434 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHUsername
	I0328 00:40:11.270590 1106824 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/test-preload-700024/id_rsa Username:docker}
	I0328 00:40:11.279539 1106824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45531
	I0328 00:40:11.279899 1106824 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:40:11.280367 1106824 main.go:141] libmachine: Using API Version  1
	I0328 00:40:11.280391 1106824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:40:11.280756 1106824 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:40:11.280978 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetState
	I0328 00:40:11.282436 1106824 main.go:141] libmachine: (test-preload-700024) Calling .DriverName
	I0328 00:40:11.282702 1106824 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 00:40:11.282717 1106824 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 00:40:11.282730 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHHostname
	I0328 00:40:11.285498 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:40:11.285938 1106824 main.go:141] libmachine: (test-preload-700024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:60:2c", ip: ""} in network mk-test-preload-700024: {Iface:virbr1 ExpiryTime:2024-03-28 01:39:38 +0000 UTC Type:0 Mac:52:54:00:87:60:2c Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:test-preload-700024 Clientid:01:52:54:00:87:60:2c}
	I0328 00:40:11.285969 1106824 main.go:141] libmachine: (test-preload-700024) DBG | domain test-preload-700024 has defined IP address 192.168.39.15 and MAC address 52:54:00:87:60:2c in network mk-test-preload-700024
	I0328 00:40:11.286261 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHPort
	I0328 00:40:11.286470 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHKeyPath
	I0328 00:40:11.286675 1106824 main.go:141] libmachine: (test-preload-700024) Calling .GetSSHUsername
	I0328 00:40:11.286864 1106824 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/test-preload-700024/id_rsa Username:docker}
	I0328 00:40:11.408156 1106824 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 00:40:11.433103 1106824 node_ready.go:35] waiting up to 6m0s for node "test-preload-700024" to be "Ready" ...
	I0328 00:40:11.481620 1106824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 00:40:11.579794 1106824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 00:40:12.486893 1106824 main.go:141] libmachine: Making call to close driver server
	I0328 00:40:12.486923 1106824 main.go:141] libmachine: (test-preload-700024) Calling .Close
	I0328 00:40:12.487037 1106824 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.005365554s)
	I0328 00:40:12.487091 1106824 main.go:141] libmachine: Making call to close driver server
	I0328 00:40:12.487108 1106824 main.go:141] libmachine: (test-preload-700024) Calling .Close
	I0328 00:40:12.487239 1106824 main.go:141] libmachine: Successfully made call to close driver server
	I0328 00:40:12.487271 1106824 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 00:40:12.487294 1106824 main.go:141] libmachine: Making call to close driver server
	I0328 00:40:12.487315 1106824 main.go:141] libmachine: (test-preload-700024) Calling .Close
	I0328 00:40:12.487379 1106824 main.go:141] libmachine: Successfully made call to close driver server
	I0328 00:40:12.487401 1106824 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 00:40:12.487402 1106824 main.go:141] libmachine: (test-preload-700024) DBG | Closing plugin on server side
	I0328 00:40:12.487417 1106824 main.go:141] libmachine: Making call to close driver server
	I0328 00:40:12.487426 1106824 main.go:141] libmachine: (test-preload-700024) Calling .Close
	I0328 00:40:12.487565 1106824 main.go:141] libmachine: Successfully made call to close driver server
	I0328 00:40:12.487607 1106824 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 00:40:12.487648 1106824 main.go:141] libmachine: Successfully made call to close driver server
	I0328 00:40:12.487669 1106824 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 00:40:12.487609 1106824 main.go:141] libmachine: (test-preload-700024) DBG | Closing plugin on server side
	I0328 00:40:12.494907 1106824 main.go:141] libmachine: Making call to close driver server
	I0328 00:40:12.494923 1106824 main.go:141] libmachine: (test-preload-700024) Calling .Close
	I0328 00:40:12.495175 1106824 main.go:141] libmachine: Successfully made call to close driver server
	I0328 00:40:12.495191 1106824 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 00:40:12.495201 1106824 main.go:141] libmachine: (test-preload-700024) DBG | Closing plugin on server side
	I0328 00:40:12.497300 1106824 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0328 00:40:12.498580 1106824 addons.go:505] duration metric: took 1.277772724s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0328 00:40:13.438345 1106824 node_ready.go:53] node "test-preload-700024" has status "Ready":"False"
	I0328 00:40:15.438893 1106824 node_ready.go:53] node "test-preload-700024" has status "Ready":"False"
	I0328 00:40:17.937868 1106824 node_ready.go:53] node "test-preload-700024" has status "Ready":"False"
	I0328 00:40:19.437017 1106824 node_ready.go:49] node "test-preload-700024" has status "Ready":"True"
	I0328 00:40:19.437051 1106824 node_ready.go:38] duration metric: took 8.003912457s for node "test-preload-700024" to be "Ready" ...
	I0328 00:40:19.437064 1106824 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 00:40:19.442586 1106824 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-bj24b" in "kube-system" namespace to be "Ready" ...
	I0328 00:40:19.447676 1106824 pod_ready.go:92] pod "coredns-6d4b75cb6d-bj24b" in "kube-system" namespace has status "Ready":"True"
	I0328 00:40:19.447698 1106824 pod_ready.go:81] duration metric: took 5.081841ms for pod "coredns-6d4b75cb6d-bj24b" in "kube-system" namespace to be "Ready" ...
	I0328 00:40:19.447709 1106824 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-700024" in "kube-system" namespace to be "Ready" ...
	I0328 00:40:19.452324 1106824 pod_ready.go:92] pod "etcd-test-preload-700024" in "kube-system" namespace has status "Ready":"True"
	I0328 00:40:19.452345 1106824 pod_ready.go:81] duration metric: took 4.627372ms for pod "etcd-test-preload-700024" in "kube-system" namespace to be "Ready" ...
	I0328 00:40:19.452354 1106824 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-700024" in "kube-system" namespace to be "Ready" ...
	I0328 00:40:19.456345 1106824 pod_ready.go:92] pod "kube-apiserver-test-preload-700024" in "kube-system" namespace has status "Ready":"True"
	I0328 00:40:19.456367 1106824 pod_ready.go:81] duration metric: took 4.003771ms for pod "kube-apiserver-test-preload-700024" in "kube-system" namespace to be "Ready" ...
	I0328 00:40:19.456377 1106824 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-700024" in "kube-system" namespace to be "Ready" ...
	I0328 00:40:19.460440 1106824 pod_ready.go:92] pod "kube-controller-manager-test-preload-700024" in "kube-system" namespace has status "Ready":"True"
	I0328 00:40:19.460460 1106824 pod_ready.go:81] duration metric: took 4.076534ms for pod "kube-controller-manager-test-preload-700024" in "kube-system" namespace to be "Ready" ...
	I0328 00:40:19.460469 1106824 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-624lz" in "kube-system" namespace to be "Ready" ...
	I0328 00:40:19.837763 1106824 pod_ready.go:92] pod "kube-proxy-624lz" in "kube-system" namespace has status "Ready":"True"
	I0328 00:40:19.837793 1106824 pod_ready.go:81] duration metric: took 377.317759ms for pod "kube-proxy-624lz" in "kube-system" namespace to be "Ready" ...
	I0328 00:40:19.837804 1106824 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-700024" in "kube-system" namespace to be "Ready" ...
	I0328 00:40:21.845835 1106824 pod_ready.go:102] pod "kube-scheduler-test-preload-700024" in "kube-system" namespace has status "Ready":"False"
	I0328 00:40:22.344026 1106824 pod_ready.go:92] pod "kube-scheduler-test-preload-700024" in "kube-system" namespace has status "Ready":"True"
	I0328 00:40:22.344058 1106824 pod_ready.go:81] duration metric: took 2.506246677s for pod "kube-scheduler-test-preload-700024" in "kube-system" namespace to be "Ready" ...
	I0328 00:40:22.344072 1106824 pod_ready.go:38] duration metric: took 2.906991572s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 00:40:22.344090 1106824 api_server.go:52] waiting for apiserver process to appear ...
	I0328 00:40:22.344158 1106824 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:40:22.360335 1106824 api_server.go:72] duration metric: took 11.139551332s to wait for apiserver process to appear ...
	I0328 00:40:22.360379 1106824 api_server.go:88] waiting for apiserver healthz status ...
	I0328 00:40:22.360401 1106824 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I0328 00:40:22.365494 1106824 api_server.go:279] https://192.168.39.15:8443/healthz returned 200:
	ok
	I0328 00:40:22.366378 1106824 api_server.go:141] control plane version: v1.24.4
	I0328 00:40:22.366406 1106824 api_server.go:131] duration metric: took 6.01632ms to wait for apiserver health ...
	I0328 00:40:22.366415 1106824 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 00:40:22.441748 1106824 system_pods.go:59] 7 kube-system pods found
	I0328 00:40:22.441784 1106824 system_pods.go:61] "coredns-6d4b75cb6d-bj24b" [bf58ddc7-916a-4347-8d94-783a3bb0a98f] Running
	I0328 00:40:22.441789 1106824 system_pods.go:61] "etcd-test-preload-700024" [5d365027-b67f-421f-89a0-f7edbea72689] Running
	I0328 00:40:22.441795 1106824 system_pods.go:61] "kube-apiserver-test-preload-700024" [b1ad15a4-62bf-4c3d-a0db-03f7402f329f] Running
	I0328 00:40:22.441802 1106824 system_pods.go:61] "kube-controller-manager-test-preload-700024" [60823a15-83b2-4dd2-a653-f7a7a8e11d5e] Running
	I0328 00:40:22.441808 1106824 system_pods.go:61] "kube-proxy-624lz" [a0c9fe23-917f-454c-a6c5-e87f373e6836] Running
	I0328 00:40:22.441815 1106824 system_pods.go:61] "kube-scheduler-test-preload-700024" [ef0f53b0-6d4a-4e61-bf75-28542644bde1] Running
	I0328 00:40:22.441825 1106824 system_pods.go:61] "storage-provisioner" [d82dc3ad-0639-4847-8ade-054194034bfd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 00:40:22.441834 1106824 system_pods.go:74] duration metric: took 75.413051ms to wait for pod list to return data ...
	I0328 00:40:22.441847 1106824 default_sa.go:34] waiting for default service account to be created ...
	I0328 00:40:22.637539 1106824 default_sa.go:45] found service account: "default"
	I0328 00:40:22.637573 1106824 default_sa.go:55] duration metric: took 195.71845ms for default service account to be created ...
	I0328 00:40:22.637583 1106824 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 00:40:22.840782 1106824 system_pods.go:86] 7 kube-system pods found
	I0328 00:40:22.840822 1106824 system_pods.go:89] "coredns-6d4b75cb6d-bj24b" [bf58ddc7-916a-4347-8d94-783a3bb0a98f] Running
	I0328 00:40:22.840828 1106824 system_pods.go:89] "etcd-test-preload-700024" [5d365027-b67f-421f-89a0-f7edbea72689] Running
	I0328 00:40:22.840833 1106824 system_pods.go:89] "kube-apiserver-test-preload-700024" [b1ad15a4-62bf-4c3d-a0db-03f7402f329f] Running
	I0328 00:40:22.840839 1106824 system_pods.go:89] "kube-controller-manager-test-preload-700024" [60823a15-83b2-4dd2-a653-f7a7a8e11d5e] Running
	I0328 00:40:22.840845 1106824 system_pods.go:89] "kube-proxy-624lz" [a0c9fe23-917f-454c-a6c5-e87f373e6836] Running
	I0328 00:40:22.840852 1106824 system_pods.go:89] "kube-scheduler-test-preload-700024" [ef0f53b0-6d4a-4e61-bf75-28542644bde1] Running
	I0328 00:40:22.840862 1106824 system_pods.go:89] "storage-provisioner" [d82dc3ad-0639-4847-8ade-054194034bfd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 00:40:22.840872 1106824 system_pods.go:126] duration metric: took 203.283099ms to wait for k8s-apps to be running ...
	I0328 00:40:22.840884 1106824 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 00:40:22.840946 1106824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:40:22.855996 1106824 system_svc.go:56] duration metric: took 15.099751ms WaitForService to wait for kubelet
	I0328 00:40:22.856037 1106824 kubeadm.go:576] duration metric: took 11.63526277s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 00:40:22.856065 1106824 node_conditions.go:102] verifying NodePressure condition ...
	I0328 00:40:23.038972 1106824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 00:40:23.039014 1106824 node_conditions.go:123] node cpu capacity is 2
	I0328 00:40:23.039028 1106824 node_conditions.go:105] duration metric: took 182.956338ms to run NodePressure ...
	I0328 00:40:23.039043 1106824 start.go:240] waiting for startup goroutines ...
	I0328 00:40:23.039054 1106824 start.go:245] waiting for cluster config update ...
	I0328 00:40:23.039069 1106824 start.go:254] writing updated cluster config ...
	I0328 00:40:23.039443 1106824 ssh_runner.go:195] Run: rm -f paused
	I0328 00:40:23.090763 1106824 start.go:600] kubectl: 1.29.3, cluster: 1.24.4 (minor skew: 5)
	I0328 00:40:23.092879 1106824 out.go:177] 
	W0328 00:40:23.094082 1106824 out.go:239] ! /usr/local/bin/kubectl is version 1.29.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0328 00:40:23.095295 1106824 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0328 00:40:23.096653 1106824 out.go:177] * Done! kubectl is now configured to use "test-preload-700024" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 28 00:40:24 test-preload-700024 crio[670]: time="2024-03-28 00:40:24.125585744Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711586424125562312,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b0e21717-408e-45b2-b37d-7076e9cfab81 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:40:24 test-preload-700024 crio[670]: time="2024-03-28 00:40:24.126619063Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ee97031-627b-4790-b0ba-576e24f68ef9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:40:24 test-preload-700024 crio[670]: time="2024-03-28 00:40:24.126698750Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ee97031-627b-4790-b0ba-576e24f68ef9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:40:24 test-preload-700024 crio[670]: time="2024-03-28 00:40:24.126923986Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c4d52b22cc54ab943eee9821e9fc3b2f9ab1ff78e99ed060866b481fb13a581,PodSandboxId:789ccac04fceec87c92ca6f7a89658d78d15fd3e0a3642834cc90c6b5be57c01,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711586423984262788,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d82dc3ad-0639-4847-8ade-054194034bfd,},Annotations:map[string]string{io.kubernetes.container.hash: bd8edced,io.kubernetes.container.restartCount: 3,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:834f334a0e35380c2d839deb418e287356feb9977856c0300d914d4f17bc2f03,PodSandboxId:f0091911b209b97c831295d2a2c51fb1adf2f7c780b6144114cbd385c8a75090,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1711586416956264512,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-bj24b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf58ddc7-916a-4347-8d94-783a3bb0a98f,},Annotations:map[string]string{io.kubernetes.container.hash: bcd399c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2335d0efbd6b38012aaa7853526c41bf3685f7fcff3a5f1855049bd093118c63,PodSandboxId:ee6afff0b880880527361794585efead277f94cbf362568ea0246b287af5a7f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1711586410241091145,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-624lz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0
c9fe23-917f-454c-a6c5-e87f373e6836,},Annotations:map[string]string{io.kubernetes.container.hash: 2eb651e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e04dc2ed1524d9534b2cb3a4bd80f40131fec74ceecb498aeacbf3d52a3ab958,PodSandboxId:789ccac04fceec87c92ca6f7a89658d78d15fd3e0a3642834cc90c6b5be57c01,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711586410032255931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d82dc3ad-0639-48
47-8ade-054194034bfd,},Annotations:map[string]string{io.kubernetes.container.hash: bd8edced,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69e682b48973c0e6542ccd260265fd37c75e6094a7312df6867f92995654e9e2,PodSandboxId:a13191932e5cfa5601f7308230ca1d88c9e331741a2a6f1e58c28c0702052095,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1711586404693352345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-700024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6572f52357b1597ab5a5143
359c4f7db,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b38c8e366fb7dc33c8f78130cc8b39f1857199e6b766aeb185b8874413227e5,PodSandboxId:95191b05c86385a88c1cc38bcef7f77f1bcbd9dd6349384be0b74024bfc1be6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1711586404660211400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-700024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb461de89a4c7cf829726bc18e96b098,},Annotations:map[string]string
{io.kubernetes.container.hash: 35e3e01d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b985cdad73c7b9f6661101317db02b918f2dd59a405f7b0482696258d17a813,PodSandboxId:9283da01f8452b0b0ab4cd42bfa2d9cd1b292a094f9342278952fce6cfe16c58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1711586404649425007,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-700024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b243ae23ac1e55392c95eaa1b8c010c6,},Annotations:m
ap[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94438ba9a45ff5cb3847b92d0a96f8c02c00924bf96a07e865f871fc4ceb9bf0,PodSandboxId:022a0edf8204458bc744f223695729e0a720086e059b0b68a09c450c30d425ee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1711586404594467246,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-700024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb99061e92b6e79d0253d224337fac8a,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 45f5e2b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ee97031-627b-4790-b0ba-576e24f68ef9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:40:24 test-preload-700024 crio[670]: time="2024-03-28 00:40:24.180239981Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aa7871d3-f36c-4f3d-876f-015ffebb5b9f name=/runtime.v1.RuntimeService/Version
	Mar 28 00:40:24 test-preload-700024 crio[670]: time="2024-03-28 00:40:24.180583289Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aa7871d3-f36c-4f3d-876f-015ffebb5b9f name=/runtime.v1.RuntimeService/Version
	Mar 28 00:40:24 test-preload-700024 crio[670]: time="2024-03-28 00:40:24.182188659Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=18b6cc36-ef90-455d-851a-c9fd846ced6d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:40:24 test-preload-700024 crio[670]: time="2024-03-28 00:40:24.182635758Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711586424182612216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=18b6cc36-ef90-455d-851a-c9fd846ced6d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:40:24 test-preload-700024 crio[670]: time="2024-03-28 00:40:24.183323988Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a250b055-bc62-4206-8dab-b84a6d74d491 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:40:24 test-preload-700024 crio[670]: time="2024-03-28 00:40:24.183376712Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a250b055-bc62-4206-8dab-b84a6d74d491 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:40:24 test-preload-700024 crio[670]: time="2024-03-28 00:40:24.183582926Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c4d52b22cc54ab943eee9821e9fc3b2f9ab1ff78e99ed060866b481fb13a581,PodSandboxId:789ccac04fceec87c92ca6f7a89658d78d15fd3e0a3642834cc90c6b5be57c01,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711586423984262788,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d82dc3ad-0639-4847-8ade-054194034bfd,},Annotations:map[string]string{io.kubernetes.container.hash: bd8edced,io.kubernetes.container.restartCount: 3,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:834f334a0e35380c2d839deb418e287356feb9977856c0300d914d4f17bc2f03,PodSandboxId:f0091911b209b97c831295d2a2c51fb1adf2f7c780b6144114cbd385c8a75090,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1711586416956264512,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-bj24b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf58ddc7-916a-4347-8d94-783a3bb0a98f,},Annotations:map[string]string{io.kubernetes.container.hash: bcd399c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2335d0efbd6b38012aaa7853526c41bf3685f7fcff3a5f1855049bd093118c63,PodSandboxId:ee6afff0b880880527361794585efead277f94cbf362568ea0246b287af5a7f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1711586410241091145,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-624lz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0
c9fe23-917f-454c-a6c5-e87f373e6836,},Annotations:map[string]string{io.kubernetes.container.hash: 2eb651e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e04dc2ed1524d9534b2cb3a4bd80f40131fec74ceecb498aeacbf3d52a3ab958,PodSandboxId:789ccac04fceec87c92ca6f7a89658d78d15fd3e0a3642834cc90c6b5be57c01,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711586410032255931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d82dc3ad-0639-48
47-8ade-054194034bfd,},Annotations:map[string]string{io.kubernetes.container.hash: bd8edced,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69e682b48973c0e6542ccd260265fd37c75e6094a7312df6867f92995654e9e2,PodSandboxId:a13191932e5cfa5601f7308230ca1d88c9e331741a2a6f1e58c28c0702052095,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1711586404693352345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-700024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6572f52357b1597ab5a5143
359c4f7db,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b38c8e366fb7dc33c8f78130cc8b39f1857199e6b766aeb185b8874413227e5,PodSandboxId:95191b05c86385a88c1cc38bcef7f77f1bcbd9dd6349384be0b74024bfc1be6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1711586404660211400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-700024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb461de89a4c7cf829726bc18e96b098,},Annotations:map[string]string
{io.kubernetes.container.hash: 35e3e01d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b985cdad73c7b9f6661101317db02b918f2dd59a405f7b0482696258d17a813,PodSandboxId:9283da01f8452b0b0ab4cd42bfa2d9cd1b292a094f9342278952fce6cfe16c58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1711586404649425007,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-700024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b243ae23ac1e55392c95eaa1b8c010c6,},Annotations:m
ap[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94438ba9a45ff5cb3847b92d0a96f8c02c00924bf96a07e865f871fc4ceb9bf0,PodSandboxId:022a0edf8204458bc744f223695729e0a720086e059b0b68a09c450c30d425ee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1711586404594467246,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-700024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb99061e92b6e79d0253d224337fac8a,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 45f5e2b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a250b055-bc62-4206-8dab-b84a6d74d491 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:40:24 test-preload-700024 crio[670]: time="2024-03-28 00:40:24.224234610Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fd6cfd96-38bc-4486-aad8-338a87ed8de2 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:40:24 test-preload-700024 crio[670]: time="2024-03-28 00:40:24.224353642Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fd6cfd96-38bc-4486-aad8-338a87ed8de2 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:40:24 test-preload-700024 crio[670]: time="2024-03-28 00:40:24.225593048Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=18d6c412-1978-407b-b2ed-a538d5cce0e9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:40:24 test-preload-700024 crio[670]: time="2024-03-28 00:40:24.226165327Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711586424226142665,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=18d6c412-1978-407b-b2ed-a538d5cce0e9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:40:24 test-preload-700024 crio[670]: time="2024-03-28 00:40:24.226805980Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5dd13c49-a211-4b17-bcc4-d6d348b83ee8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:40:24 test-preload-700024 crio[670]: time="2024-03-28 00:40:24.226935354Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5dd13c49-a211-4b17-bcc4-d6d348b83ee8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:40:24 test-preload-700024 crio[670]: time="2024-03-28 00:40:24.227135827Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c4d52b22cc54ab943eee9821e9fc3b2f9ab1ff78e99ed060866b481fb13a581,PodSandboxId:789ccac04fceec87c92ca6f7a89658d78d15fd3e0a3642834cc90c6b5be57c01,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711586423984262788,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d82dc3ad-0639-4847-8ade-054194034bfd,},Annotations:map[string]string{io.kubernetes.container.hash: bd8edced,io.kubernetes.container.restartCount: 3,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:834f334a0e35380c2d839deb418e287356feb9977856c0300d914d4f17bc2f03,PodSandboxId:f0091911b209b97c831295d2a2c51fb1adf2f7c780b6144114cbd385c8a75090,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1711586416956264512,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-bj24b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf58ddc7-916a-4347-8d94-783a3bb0a98f,},Annotations:map[string]string{io.kubernetes.container.hash: bcd399c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2335d0efbd6b38012aaa7853526c41bf3685f7fcff3a5f1855049bd093118c63,PodSandboxId:ee6afff0b880880527361794585efead277f94cbf362568ea0246b287af5a7f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1711586410241091145,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-624lz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0
c9fe23-917f-454c-a6c5-e87f373e6836,},Annotations:map[string]string{io.kubernetes.container.hash: 2eb651e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e04dc2ed1524d9534b2cb3a4bd80f40131fec74ceecb498aeacbf3d52a3ab958,PodSandboxId:789ccac04fceec87c92ca6f7a89658d78d15fd3e0a3642834cc90c6b5be57c01,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711586410032255931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d82dc3ad-0639-48
47-8ade-054194034bfd,},Annotations:map[string]string{io.kubernetes.container.hash: bd8edced,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69e682b48973c0e6542ccd260265fd37c75e6094a7312df6867f92995654e9e2,PodSandboxId:a13191932e5cfa5601f7308230ca1d88c9e331741a2a6f1e58c28c0702052095,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1711586404693352345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-700024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6572f52357b1597ab5a5143
359c4f7db,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b38c8e366fb7dc33c8f78130cc8b39f1857199e6b766aeb185b8874413227e5,PodSandboxId:95191b05c86385a88c1cc38bcef7f77f1bcbd9dd6349384be0b74024bfc1be6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1711586404660211400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-700024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb461de89a4c7cf829726bc18e96b098,},Annotations:map[string]string
{io.kubernetes.container.hash: 35e3e01d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b985cdad73c7b9f6661101317db02b918f2dd59a405f7b0482696258d17a813,PodSandboxId:9283da01f8452b0b0ab4cd42bfa2d9cd1b292a094f9342278952fce6cfe16c58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1711586404649425007,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-700024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b243ae23ac1e55392c95eaa1b8c010c6,},Annotations:m
ap[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94438ba9a45ff5cb3847b92d0a96f8c02c00924bf96a07e865f871fc4ceb9bf0,PodSandboxId:022a0edf8204458bc744f223695729e0a720086e059b0b68a09c450c30d425ee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1711586404594467246,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-700024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb99061e92b6e79d0253d224337fac8a,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 45f5e2b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5dd13c49-a211-4b17-bcc4-d6d348b83ee8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:40:24 test-preload-700024 crio[670]: time="2024-03-28 00:40:24.271963828Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5b06e72c-8533-442f-9dc3-8690336fd912 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:40:24 test-preload-700024 crio[670]: time="2024-03-28 00:40:24.272056059Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5b06e72c-8533-442f-9dc3-8690336fd912 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:40:24 test-preload-700024 crio[670]: time="2024-03-28 00:40:24.274419403Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ba888770-8049-45ee-bb3c-c61f4a8118b2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:40:24 test-preload-700024 crio[670]: time="2024-03-28 00:40:24.274840272Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711586424274816909,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ba888770-8049-45ee-bb3c-c61f4a8118b2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:40:24 test-preload-700024 crio[670]: time="2024-03-28 00:40:24.275551898Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54cd25a9-270b-4146-a79f-0d5d297cfa85 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:40:24 test-preload-700024 crio[670]: time="2024-03-28 00:40:24.275630351Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54cd25a9-270b-4146-a79f-0d5d297cfa85 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:40:24 test-preload-700024 crio[670]: time="2024-03-28 00:40:24.275811023Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c4d52b22cc54ab943eee9821e9fc3b2f9ab1ff78e99ed060866b481fb13a581,PodSandboxId:789ccac04fceec87c92ca6f7a89658d78d15fd3e0a3642834cc90c6b5be57c01,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711586423984262788,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d82dc3ad-0639-4847-8ade-054194034bfd,},Annotations:map[string]string{io.kubernetes.container.hash: bd8edced,io.kubernetes.container.restartCount: 3,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:834f334a0e35380c2d839deb418e287356feb9977856c0300d914d4f17bc2f03,PodSandboxId:f0091911b209b97c831295d2a2c51fb1adf2f7c780b6144114cbd385c8a75090,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1711586416956264512,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-bj24b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf58ddc7-916a-4347-8d94-783a3bb0a98f,},Annotations:map[string]string{io.kubernetes.container.hash: bcd399c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2335d0efbd6b38012aaa7853526c41bf3685f7fcff3a5f1855049bd093118c63,PodSandboxId:ee6afff0b880880527361794585efead277f94cbf362568ea0246b287af5a7f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1711586410241091145,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-624lz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0
c9fe23-917f-454c-a6c5-e87f373e6836,},Annotations:map[string]string{io.kubernetes.container.hash: 2eb651e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e04dc2ed1524d9534b2cb3a4bd80f40131fec74ceecb498aeacbf3d52a3ab958,PodSandboxId:789ccac04fceec87c92ca6f7a89658d78d15fd3e0a3642834cc90c6b5be57c01,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711586410032255931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d82dc3ad-0639-48
47-8ade-054194034bfd,},Annotations:map[string]string{io.kubernetes.container.hash: bd8edced,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69e682b48973c0e6542ccd260265fd37c75e6094a7312df6867f92995654e9e2,PodSandboxId:a13191932e5cfa5601f7308230ca1d88c9e331741a2a6f1e58c28c0702052095,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1711586404693352345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-700024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6572f52357b1597ab5a5143
359c4f7db,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b38c8e366fb7dc33c8f78130cc8b39f1857199e6b766aeb185b8874413227e5,PodSandboxId:95191b05c86385a88c1cc38bcef7f77f1bcbd9dd6349384be0b74024bfc1be6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1711586404660211400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-700024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb461de89a4c7cf829726bc18e96b098,},Annotations:map[string]string
{io.kubernetes.container.hash: 35e3e01d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b985cdad73c7b9f6661101317db02b918f2dd59a405f7b0482696258d17a813,PodSandboxId:9283da01f8452b0b0ab4cd42bfa2d9cd1b292a094f9342278952fce6cfe16c58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1711586404649425007,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-700024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b243ae23ac1e55392c95eaa1b8c010c6,},Annotations:m
ap[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94438ba9a45ff5cb3847b92d0a96f8c02c00924bf96a07e865f871fc4ceb9bf0,PodSandboxId:022a0edf8204458bc744f223695729e0a720086e059b0b68a09c450c30d425ee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1711586404594467246,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-700024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb99061e92b6e79d0253d224337fac8a,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 45f5e2b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54cd25a9-270b-4146-a79f-0d5d297cfa85 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	1c4d52b22cc54       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   Less than a second ago   Running             storage-provisioner       3                   789ccac04fcee       storage-provisioner
	834f334a0e353       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago            Running             coredns                   1                   f0091911b209b       coredns-6d4b75cb6d-bj24b
	2335d0efbd6b3       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago           Running             kube-proxy                1                   ee6afff0b8808       kube-proxy-624lz
	e04dc2ed1524d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago           Exited              storage-provisioner       2                   789ccac04fcee       storage-provisioner
	69e682b48973c       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   19 seconds ago           Running             kube-scheduler            1                   a13191932e5cf       kube-scheduler-test-preload-700024
	5b38c8e366fb7       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   19 seconds ago           Running             etcd                      1                   95191b05c8638       etcd-test-preload-700024
	6b985cdad73c7       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   19 seconds ago           Running             kube-controller-manager   1                   9283da01f8452       kube-controller-manager-test-preload-700024
	94438ba9a45ff       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   19 seconds ago           Running             kube-apiserver            1                   022a0edf82044       kube-apiserver-test-preload-700024
	
	
	==> coredns [834f334a0e35380c2d839deb418e287356feb9977856c0300d914d4f17bc2f03] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:44048 - 432 "HINFO IN 41489054958681220.7757152315524877447. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.00851139s
	
	
	==> describe nodes <==
	Name:               test-preload-700024
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-700024
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=test-preload-700024
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_28T00_38_48_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 00:38:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-700024
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 00:40:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 00:40:19 +0000   Thu, 28 Mar 2024 00:38:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 00:40:19 +0000   Thu, 28 Mar 2024 00:38:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 00:40:19 +0000   Thu, 28 Mar 2024 00:38:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 00:40:19 +0000   Thu, 28 Mar 2024 00:40:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.15
	  Hostname:    test-preload-700024
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8de53b72261e46ee9bb992dbb1913b39
	  System UUID:                8de53b72-261e-46ee-9bb9-92dbb1913b39
	  Boot ID:                    0a00a257-ac10-485e-bef8-4db6ebdac423
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-bj24b                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     84s
	  kube-system                 etcd-test-preload-700024                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         98s
	  kube-system                 kube-apiserver-test-preload-700024             250m (12%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-controller-manager-test-preload-700024    200m (10%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-proxy-624lz                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-scheduler-test-preload-700024             100m (5%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13s                kube-proxy       
	  Normal  Starting                 82s                kube-proxy       
	  Normal  Starting                 96s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  96s                kubelet          Node test-preload-700024 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s                kubelet          Node test-preload-700024 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s                kubelet          Node test-preload-700024 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  96s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                86s                kubelet          Node test-preload-700024 status is now: NodeReady
	  Normal  RegisteredNode           84s                node-controller  Node test-preload-700024 event: Registered Node test-preload-700024 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20s (x8 over 21s)  kubelet          Node test-preload-700024 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 21s)  kubelet          Node test-preload-700024 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 21s)  kubelet          Node test-preload-700024 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s                 node-controller  Node test-preload-700024 event: Registered Node test-preload-700024 in Controller
	
	
	==> dmesg <==
	[Mar28 00:39] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053019] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041433] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.554381] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.742489] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.392040] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.612137] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.062218] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063716] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.169019] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.132711] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.274763] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[Mar28 00:40] systemd-fstab-generator[930]: Ignoring "noauto" option for root device
	[  +0.057697] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.742927] systemd-fstab-generator[1060]: Ignoring "noauto" option for root device
	[  +5.873891] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.779319] systemd-fstab-generator[1738]: Ignoring "noauto" option for root device
	[  +5.495371] kauditd_printk_skb: 59 callbacks suppressed
	[  +7.253328] kauditd_printk_skb: 7 callbacks suppressed
	
	
	==> etcd [5b38c8e366fb7dc33c8f78130cc8b39f1857199e6b766aeb185b8874413227e5] <==
	{"level":"info","ts":"2024-03-28T00:40:05.121Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"aadd773bb1fe5a6f","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-03-28T00:40:05.128Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-28T00:40:05.128Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aadd773bb1fe5a6f","initial-advertise-peer-urls":["https://192.168.39.15:2380"],"listen-peer-urls":["https://192.168.39.15:2380"],"advertise-client-urls":["https://192.168.39.15:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.15:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-28T00:40:05.130Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-28T00:40:05.131Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-03-28T00:40:05.131Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.15:2380"}
	{"level":"info","ts":"2024-03-28T00:40:05.132Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.15:2380"}
	{"level":"info","ts":"2024-03-28T00:40:05.132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aadd773bb1fe5a6f switched to configuration voters=(12312128054573816431)"}
	{"level":"info","ts":"2024-03-28T00:40:05.132Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"546e0a293cd37a14","local-member-id":"aadd773bb1fe5a6f","added-peer-id":"aadd773bb1fe5a6f","added-peer-peer-urls":["https://192.168.39.15:2380"]}
	{"level":"info","ts":"2024-03-28T00:40:05.132Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"546e0a293cd37a14","local-member-id":"aadd773bb1fe5a6f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T00:40:05.134Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T00:40:06.165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aadd773bb1fe5a6f is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-28T00:40:06.165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aadd773bb1fe5a6f became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-28T00:40:06.165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aadd773bb1fe5a6f received MsgPreVoteResp from aadd773bb1fe5a6f at term 2"}
	{"level":"info","ts":"2024-03-28T00:40:06.165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aadd773bb1fe5a6f became candidate at term 3"}
	{"level":"info","ts":"2024-03-28T00:40:06.165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aadd773bb1fe5a6f received MsgVoteResp from aadd773bb1fe5a6f at term 3"}
	{"level":"info","ts":"2024-03-28T00:40:06.165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aadd773bb1fe5a6f became leader at term 3"}
	{"level":"info","ts":"2024-03-28T00:40:06.165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aadd773bb1fe5a6f elected leader aadd773bb1fe5a6f at term 3"}
	{"level":"info","ts":"2024-03-28T00:40:06.167Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"aadd773bb1fe5a6f","local-member-attributes":"{Name:test-preload-700024 ClientURLs:[https://192.168.39.15:2379]}","request-path":"/0/members/aadd773bb1fe5a6f/attributes","cluster-id":"546e0a293cd37a14","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-28T00:40:06.168Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T00:40:06.168Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T00:40:06.169Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.15:2379"}
	{"level":"info","ts":"2024-03-28T00:40:06.169Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-28T00:40:06.171Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-28T00:40:06.171Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 00:40:24 up 0 min,  0 users,  load average: 0.32, 0.10, 0.03
	Linux test-preload-700024 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [94438ba9a45ff5cb3847b92d0a96f8c02c00924bf96a07e865f871fc4ceb9bf0] <==
	I0328 00:40:08.631030       1 establishing_controller.go:76] Starting EstablishingController
	I0328 00:40:08.631109       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0328 00:40:08.631140       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0328 00:40:08.631172       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0328 00:40:08.631227       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 00:40:08.650115       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0328 00:40:08.692189       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0328 00:40:08.703061       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0328 00:40:08.729008       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0328 00:40:08.756076       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0328 00:40:08.763037       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0328 00:40:08.767613       1 cache.go:39] Caches are synced for autoregister controller
	I0328 00:40:08.770090       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0328 00:40:08.776554       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0328 00:40:08.780150       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0328 00:40:09.249787       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0328 00:40:09.571250       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0328 00:40:10.221470       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0328 00:40:10.238472       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0328 00:40:10.286838       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0328 00:40:10.310717       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0328 00:40:10.325068       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0328 00:40:10.539216       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0328 00:40:21.158734       1 controller.go:611] quota admission added evaluator for: endpoints
	I0328 00:40:21.165510       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [6b985cdad73c7b9f6661101317db02b918f2dd59a405f7b0482696258d17a813] <==
	I0328 00:40:21.068840       1 shared_informer.go:262] Caches are synced for PV protection
	I0328 00:40:21.071542       1 shared_informer.go:262] Caches are synced for crt configmap
	I0328 00:40:21.073763       1 shared_informer.go:262] Caches are synced for ephemeral
	I0328 00:40:21.073844       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0328 00:40:21.076483       1 shared_informer.go:262] Caches are synced for daemon sets
	I0328 00:40:21.079035       1 shared_informer.go:262] Caches are synced for job
	I0328 00:40:21.081400       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0328 00:40:21.085332       1 shared_informer.go:262] Caches are synced for HPA
	I0328 00:40:21.085384       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0328 00:40:21.087679       1 shared_informer.go:262] Caches are synced for deployment
	I0328 00:40:21.094646       1 shared_informer.go:262] Caches are synced for stateful set
	I0328 00:40:21.101263       1 shared_informer.go:262] Caches are synced for attach detach
	I0328 00:40:21.145381       1 shared_informer.go:262] Caches are synced for endpoint
	I0328 00:40:21.148196       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0328 00:40:21.152640       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0328 00:40:21.219222       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0328 00:40:21.273824       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0328 00:40:21.275111       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0328 00:40:21.275194       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0328 00:40:21.277554       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0328 00:40:21.293184       1 shared_informer.go:262] Caches are synced for resource quota
	I0328 00:40:21.313957       1 shared_informer.go:262] Caches are synced for resource quota
	I0328 00:40:21.734069       1 shared_informer.go:262] Caches are synced for garbage collector
	I0328 00:40:21.770374       1 shared_informer.go:262] Caches are synced for garbage collector
	I0328 00:40:21.770469       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [2335d0efbd6b38012aaa7853526c41bf3685f7fcff3a5f1855049bd093118c63] <==
	I0328 00:40:10.491809       1 node.go:163] Successfully retrieved node IP: 192.168.39.15
	I0328 00:40:10.492194       1 server_others.go:138] "Detected node IP" address="192.168.39.15"
	I0328 00:40:10.492286       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0328 00:40:10.525479       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0328 00:40:10.525499       1 server_others.go:206] "Using iptables Proxier"
	I0328 00:40:10.525518       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0328 00:40:10.527128       1 server.go:661] "Version info" version="v1.24.4"
	I0328 00:40:10.527439       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 00:40:10.530209       1 config.go:317] "Starting service config controller"
	I0328 00:40:10.530513       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0328 00:40:10.530570       1 config.go:226] "Starting endpoint slice config controller"
	I0328 00:40:10.530599       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0328 00:40:10.532055       1 config.go:444] "Starting node config controller"
	I0328 00:40:10.532087       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0328 00:40:10.631657       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0328 00:40:10.631828       1 shared_informer.go:262] Caches are synced for service config
	I0328 00:40:10.632588       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [69e682b48973c0e6542ccd260265fd37c75e6094a7312df6867f92995654e9e2] <==
	I0328 00:40:05.612945       1 serving.go:348] Generated self-signed cert in-memory
	W0328 00:40:08.664970       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0328 00:40:08.665220       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0328 00:40:08.665301       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0328 00:40:08.665327       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0328 00:40:08.702766       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0328 00:40:08.702805       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 00:40:08.706409       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0328 00:40:08.706580       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0328 00:40:08.706614       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 00:40:08.706635       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 00:40:08.807155       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 28 00:40:08 test-preload-700024 kubelet[1067]: I0328 00:40:08.906480    1067 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d82dc3ad-0639-4847-8ade-054194034bfd-tmp\") pod \"storage-provisioner\" (UID: \"d82dc3ad-0639-4847-8ade-054194034bfd\") " pod="kube-system/storage-provisioner"
	Mar 28 00:40:08 test-preload-700024 kubelet[1067]: I0328 00:40:08.906498    1067 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a0c9fe23-917f-454c-a6c5-e87f373e6836-kube-proxy\") pod \"kube-proxy-624lz\" (UID: \"a0c9fe23-917f-454c-a6c5-e87f373e6836\") " pod="kube-system/kube-proxy-624lz"
	Mar 28 00:40:08 test-preload-700024 kubelet[1067]: I0328 00:40:08.906517    1067 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a0c9fe23-917f-454c-a6c5-e87f373e6836-xtables-lock\") pod \"kube-proxy-624lz\" (UID: \"a0c9fe23-917f-454c-a6c5-e87f373e6836\") " pod="kube-system/kube-proxy-624lz"
	Mar 28 00:40:08 test-preload-700024 kubelet[1067]: I0328 00:40:08.906537    1067 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8v2f\" (UniqueName: \"kubernetes.io/projected/bf58ddc7-916a-4347-8d94-783a3bb0a98f-kube-api-access-h8v2f\") pod \"coredns-6d4b75cb6d-bj24b\" (UID: \"bf58ddc7-916a-4347-8d94-783a3bb0a98f\") " pod="kube-system/coredns-6d4b75cb6d-bj24b"
	Mar 28 00:40:08 test-preload-700024 kubelet[1067]: I0328 00:40:08.906556    1067 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5g9x\" (UniqueName: \"kubernetes.io/projected/d82dc3ad-0639-4847-8ade-054194034bfd-kube-api-access-p5g9x\") pod \"storage-provisioner\" (UID: \"d82dc3ad-0639-4847-8ade-054194034bfd\") " pod="kube-system/storage-provisioner"
	Mar 28 00:40:08 test-preload-700024 kubelet[1067]: I0328 00:40:08.906575    1067 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbx4x\" (UniqueName: \"kubernetes.io/projected/a0c9fe23-917f-454c-a6c5-e87f373e6836-kube-api-access-lbx4x\") pod \"kube-proxy-624lz\" (UID: \"a0c9fe23-917f-454c-a6c5-e87f373e6836\") " pod="kube-system/kube-proxy-624lz"
	Mar 28 00:40:08 test-preload-700024 kubelet[1067]: I0328 00:40:08.906584    1067 reconciler.go:159] "Reconciler: start to sync state"
	Mar 28 00:40:08 test-preload-700024 kubelet[1067]: E0328 00:40:08.909218    1067 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Mar 28 00:40:09 test-preload-700024 kubelet[1067]: E0328 00:40:09.011623    1067 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Mar 28 00:40:09 test-preload-700024 kubelet[1067]: E0328 00:40:09.011957    1067 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/bf58ddc7-916a-4347-8d94-783a3bb0a98f-config-volume podName:bf58ddc7-916a-4347-8d94-783a3bb0a98f nodeName:}" failed. No retries permitted until 2024-03-28 00:40:09.511754689 +0000 UTC m=+5.789646429 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/bf58ddc7-916a-4347-8d94-783a3bb0a98f-config-volume") pod "coredns-6d4b75cb6d-bj24b" (UID: "bf58ddc7-916a-4347-8d94-783a3bb0a98f") : object "kube-system"/"coredns" not registered
	Mar 28 00:40:09 test-preload-700024 kubelet[1067]: E0328 00:40:09.513856    1067 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Mar 28 00:40:09 test-preload-700024 kubelet[1067]: E0328 00:40:09.514026    1067 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/bf58ddc7-916a-4347-8d94-783a3bb0a98f-config-volume podName:bf58ddc7-916a-4347-8d94-783a3bb0a98f nodeName:}" failed. No retries permitted until 2024-03-28 00:40:10.513999358 +0000 UTC m=+6.791891119 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/bf58ddc7-916a-4347-8d94-783a3bb0a98f-config-volume") pod "coredns-6d4b75cb6d-bj24b" (UID: "bf58ddc7-916a-4347-8d94-783a3bb0a98f") : object "kube-system"/"coredns" not registered
	Mar 28 00:40:10 test-preload-700024 kubelet[1067]: I0328 00:40:10.019206    1067 scope.go:110] "RemoveContainer" containerID="97f83e55ebe3d96d23730c9116ef98198a93f7c82fedd0444565d27b322d4372"
	Mar 28 00:40:10 test-preload-700024 kubelet[1067]: E0328 00:40:10.520384    1067 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Mar 28 00:40:10 test-preload-700024 kubelet[1067]: E0328 00:40:10.520489    1067 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/bf58ddc7-916a-4347-8d94-783a3bb0a98f-config-volume podName:bf58ddc7-916a-4347-8d94-783a3bb0a98f nodeName:}" failed. No retries permitted until 2024-03-28 00:40:12.520474764 +0000 UTC m=+8.798366503 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/bf58ddc7-916a-4347-8d94-783a3bb0a98f-config-volume") pod "coredns-6d4b75cb6d-bj24b" (UID: "bf58ddc7-916a-4347-8d94-783a3bb0a98f") : object "kube-system"/"coredns" not registered
	Mar 28 00:40:10 test-preload-700024 kubelet[1067]: E0328 00:40:10.971865    1067 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-bj24b" podUID=bf58ddc7-916a-4347-8d94-783a3bb0a98f
	Mar 28 00:40:11 test-preload-700024 kubelet[1067]: I0328 00:40:11.025525    1067 scope.go:110] "RemoveContainer" containerID="97f83e55ebe3d96d23730c9116ef98198a93f7c82fedd0444565d27b322d4372"
	Mar 28 00:40:11 test-preload-700024 kubelet[1067]: I0328 00:40:11.025725    1067 scope.go:110] "RemoveContainer" containerID="e04dc2ed1524d9534b2cb3a4bd80f40131fec74ceecb498aeacbf3d52a3ab958"
	Mar 28 00:40:11 test-preload-700024 kubelet[1067]: E0328 00:40:11.025938    1067 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d82dc3ad-0639-4847-8ade-054194034bfd)\"" pod="kube-system/storage-provisioner" podUID=d82dc3ad-0639-4847-8ade-054194034bfd
	Mar 28 00:40:12 test-preload-700024 kubelet[1067]: I0328 00:40:12.037342    1067 scope.go:110] "RemoveContainer" containerID="e04dc2ed1524d9534b2cb3a4bd80f40131fec74ceecb498aeacbf3d52a3ab958"
	Mar 28 00:40:12 test-preload-700024 kubelet[1067]: E0328 00:40:12.037532    1067 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d82dc3ad-0639-4847-8ade-054194034bfd)\"" pod="kube-system/storage-provisioner" podUID=d82dc3ad-0639-4847-8ade-054194034bfd
	Mar 28 00:40:12 test-preload-700024 kubelet[1067]: E0328 00:40:12.532666    1067 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Mar 28 00:40:12 test-preload-700024 kubelet[1067]: E0328 00:40:12.532739    1067 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/bf58ddc7-916a-4347-8d94-783a3bb0a98f-config-volume podName:bf58ddc7-916a-4347-8d94-783a3bb0a98f nodeName:}" failed. No retries permitted until 2024-03-28 00:40:16.532724526 +0000 UTC m=+12.810616266 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/bf58ddc7-916a-4347-8d94-783a3bb0a98f-config-volume") pod "coredns-6d4b75cb6d-bj24b" (UID: "bf58ddc7-916a-4347-8d94-783a3bb0a98f") : object "kube-system"/"coredns" not registered
	Mar 28 00:40:12 test-preload-700024 kubelet[1067]: E0328 00:40:12.972005    1067 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-bj24b" podUID=bf58ddc7-916a-4347-8d94-783a3bb0a98f
	Mar 28 00:40:23 test-preload-700024 kubelet[1067]: I0328 00:40:23.972514    1067 scope.go:110] "RemoveContainer" containerID="e04dc2ed1524d9534b2cb3a4bd80f40131fec74ceecb498aeacbf3d52a3ab958"
	
	
	==> storage-provisioner [1c4d52b22cc54ab943eee9821e9fc3b2f9ab1ff78e99ed060866b481fb13a581] <==
	I0328 00:40:24.124424       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0328 00:40:24.146693       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0328 00:40:24.146767       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [e04dc2ed1524d9534b2cb3a4bd80f40131fec74ceecb498aeacbf3d52a3ab958] <==
	I0328 00:40:10.168283       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0328 00:40:10.179991       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-700024 -n test-preload-700024
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-700024 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-700024" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-700024
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-700024: (1.154428872s)
--- FAIL: TestPreload (220.73s)

TestKubernetesUpgrade (393.51s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-615158 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0328 00:46:14.356000 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
E0328 00:46:21.209133 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-615158 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m19.830809527s)

-- stdout --
	* [kubernetes-upgrade-615158] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-615158" primary control-plane node in "kubernetes-upgrade-615158" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0328 00:46:10.553093 1113654 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:46:10.553394 1113654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:46:10.553406 1113654 out.go:304] Setting ErrFile to fd 2...
	I0328 00:46:10.553411 1113654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:46:10.553602 1113654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 00:46:10.554200 1113654 out.go:298] Setting JSON to false
	I0328 00:46:10.555243 1113654 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":30468,"bootTime":1711556303,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0328 00:46:10.555317 1113654 start.go:139] virtualization: kvm guest
	I0328 00:46:10.557508 1113654 out.go:177] * [kubernetes-upgrade-615158] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0328 00:46:10.559216 1113654 notify.go:220] Checking for updates...
	I0328 00:46:10.559227 1113654 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 00:46:10.560609 1113654 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 00:46:10.561846 1113654 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 00:46:10.563285 1113654 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	I0328 00:46:10.564524 1113654 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0328 00:46:10.565670 1113654 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 00:46:10.567227 1113654 config.go:182] Loaded profile config "NoKubernetes-636163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0328 00:46:10.567315 1113654 config.go:182] Loaded profile config "cert-expiration-927384": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:46:10.567433 1113654 config.go:182] Loaded profile config "pause-040046": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:46:10.567558 1113654 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 00:46:10.608120 1113654 out.go:177] * Using the kvm2 driver based on user configuration
	I0328 00:46:10.609255 1113654 start.go:297] selected driver: kvm2
	I0328 00:46:10.609273 1113654 start.go:901] validating driver "kvm2" against <nil>
	I0328 00:46:10.609285 1113654 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 00:46:10.610188 1113654 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 00:46:10.610343 1113654 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18485-1069254/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0328 00:46:10.627049 1113654 install.go:137] /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0328 00:46:10.627142 1113654 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 00:46:10.627483 1113654 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0328 00:46:10.627579 1113654 cni.go:84] Creating CNI manager for ""
	I0328 00:46:10.627598 1113654 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 00:46:10.627607 1113654 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0328 00:46:10.627692 1113654 start.go:340] cluster config:
	{Name:kubernetes-upgrade-615158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-615158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:46:10.627829 1113654 iso.go:125] acquiring lock: {Name:mk3da1fa7d63a581c817f327d86a827697458cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 00:46:10.629671 1113654 out.go:177] * Starting "kubernetes-upgrade-615158" primary control-plane node in "kubernetes-upgrade-615158" cluster
	I0328 00:46:10.630981 1113654 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0328 00:46:10.631027 1113654 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0328 00:46:10.631035 1113654 cache.go:56] Caching tarball of preloaded images
	I0328 00:46:10.631132 1113654 preload.go:173] Found /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0328 00:46:10.631147 1113654 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0328 00:46:10.631240 1113654 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/config.json ...
	I0328 00:46:10.631257 1113654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/config.json: {Name:mk1fd31febc1e402ad5cdc54d040091bbaeee447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:46:10.631377 1113654 start.go:360] acquireMachinesLock for kubernetes-upgrade-615158: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 00:46:56.595333 1113654 start.go:364] duration metric: took 45.963930054s to acquireMachinesLock for "kubernetes-upgrade-615158"
	I0328 00:46:56.595427 1113654 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-615158 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-615158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 00:46:56.595540 1113654 start.go:125] createHost starting for "" (driver="kvm2")
	I0328 00:46:56.597448 1113654 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0328 00:46:56.597670 1113654 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 00:46:56.597723 1113654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:46:56.615150 1113654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36861
	I0328 00:46:56.615579 1113654 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:46:56.616233 1113654 main.go:141] libmachine: Using API Version  1
	I0328 00:46:56.616257 1113654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:46:56.616633 1113654 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:46:56.616923 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetMachineName
	I0328 00:46:56.617130 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .DriverName
	I0328 00:46:56.617293 1113654 start.go:159] libmachine.API.Create for "kubernetes-upgrade-615158" (driver="kvm2")
	I0328 00:46:56.617324 1113654 client.go:168] LocalClient.Create starting
	I0328 00:46:56.617383 1113654 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem
	I0328 00:46:56.617416 1113654 main.go:141] libmachine: Decoding PEM data...
	I0328 00:46:56.617431 1113654 main.go:141] libmachine: Parsing certificate...
	I0328 00:46:56.617486 1113654 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem
	I0328 00:46:56.617505 1113654 main.go:141] libmachine: Decoding PEM data...
	I0328 00:46:56.617516 1113654 main.go:141] libmachine: Parsing certificate...
	I0328 00:46:56.617530 1113654 main.go:141] libmachine: Running pre-create checks...
	I0328 00:46:56.617540 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .PreCreateCheck
	I0328 00:46:56.617938 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetConfigRaw
	I0328 00:46:56.618421 1113654 main.go:141] libmachine: Creating machine...
	I0328 00:46:56.618437 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .Create
	I0328 00:46:56.618582 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Creating KVM machine...
	I0328 00:46:56.619946 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found existing default KVM network
	I0328 00:46:56.621760 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | I0328 00:46:56.621583 1113937 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:08:55:f9} reservation:<nil>}
	I0328 00:46:56.623397 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | I0328 00:46:56.623311 1113937 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00028ec60}
	I0328 00:46:56.623435 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | created network xml: 
	I0328 00:46:56.623450 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | <network>
	I0328 00:46:56.623464 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG |   <name>mk-kubernetes-upgrade-615158</name>
	I0328 00:46:56.623486 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG |   <dns enable='no'/>
	I0328 00:46:56.623506 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG |   
	I0328 00:46:56.623532 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0328 00:46:56.623551 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG |     <dhcp>
	I0328 00:46:56.623562 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0328 00:46:56.623573 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG |     </dhcp>
	I0328 00:46:56.623581 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG |   </ip>
	I0328 00:46:56.623592 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG |   
	I0328 00:46:56.623600 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | </network>
	I0328 00:46:56.623605 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | 
	I0328 00:46:56.629801 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | trying to create private KVM network mk-kubernetes-upgrade-615158 192.168.50.0/24...
	I0328 00:46:56.709200 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | private KVM network mk-kubernetes-upgrade-615158 192.168.50.0/24 created
	I0328 00:46:56.709238 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Setting up store path in /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/kubernetes-upgrade-615158 ...
	I0328 00:46:56.709254 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | I0328 00:46:56.709195 1113937 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18485-1069254/.minikube
	I0328 00:46:56.709273 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Building disk image from file:///home/jenkins/minikube-integration/18485-1069254/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso
	I0328 00:46:56.709404 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Downloading /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18485-1069254/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0328 00:46:56.972019 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | I0328 00:46:56.971878 1113937 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/kubernetes-upgrade-615158/id_rsa...
	I0328 00:46:57.132677 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | I0328 00:46:57.132546 1113937 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/kubernetes-upgrade-615158/kubernetes-upgrade-615158.rawdisk...
	I0328 00:46:57.132710 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | Writing magic tar header
	I0328 00:46:57.132726 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | Writing SSH key tar header
	I0328 00:46:57.132753 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | I0328 00:46:57.132682 1113937 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/kubernetes-upgrade-615158 ...
	I0328 00:46:57.132771 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/kubernetes-upgrade-615158
	I0328 00:46:57.132824 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/kubernetes-upgrade-615158 (perms=drwx------)
	I0328 00:46:57.132845 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines
	I0328 00:46:57.132859 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254/.minikube/machines (perms=drwxr-xr-x)
	I0328 00:46:57.132873 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254/.minikube
	I0328 00:46:57.132891 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254
	I0328 00:46:57.132905 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0328 00:46:57.132918 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | Checking permissions on dir: /home/jenkins
	I0328 00:46:57.132928 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | Checking permissions on dir: /home
	I0328 00:46:57.132941 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254/.minikube (perms=drwxr-xr-x)
	I0328 00:46:57.132954 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | Skipping /home - not owner
	I0328 00:46:57.132974 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254 (perms=drwxrwxr-x)
	I0328 00:46:57.132988 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0328 00:46:57.133001 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0328 00:46:57.133019 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Creating domain...
	I0328 00:46:57.134200 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) define libvirt domain using xml: 
	I0328 00:46:57.134239 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) <domain type='kvm'>
	I0328 00:46:57.134250 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)   <name>kubernetes-upgrade-615158</name>
	I0328 00:46:57.134258 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)   <memory unit='MiB'>2200</memory>
	I0328 00:46:57.134272 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)   <vcpu>2</vcpu>
	I0328 00:46:57.134284 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)   <features>
	I0328 00:46:57.134296 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)     <acpi/>
	I0328 00:46:57.134308 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)     <apic/>
	I0328 00:46:57.134321 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)     <pae/>
	I0328 00:46:57.134336 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)     
	I0328 00:46:57.134349 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)   </features>
	I0328 00:46:57.134361 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)   <cpu mode='host-passthrough'>
	I0328 00:46:57.134373 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)   
	I0328 00:46:57.134384 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)   </cpu>
	I0328 00:46:57.134397 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)   <os>
	I0328 00:46:57.134409 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)     <type>hvm</type>
	I0328 00:46:57.134444 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)     <boot dev='cdrom'/>
	I0328 00:46:57.134474 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)     <boot dev='hd'/>
	I0328 00:46:57.134489 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)     <bootmenu enable='no'/>
	I0328 00:46:57.134501 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)   </os>
	I0328 00:46:57.134515 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)   <devices>
	I0328 00:46:57.134528 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)     <disk type='file' device='cdrom'>
	I0328 00:46:57.134556 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)       <source file='/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/kubernetes-upgrade-615158/boot2docker.iso'/>
	I0328 00:46:57.134574 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)       <target dev='hdc' bus='scsi'/>
	I0328 00:46:57.134587 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)       <readonly/>
	I0328 00:46:57.134598 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)     </disk>
	I0328 00:46:57.134613 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)     <disk type='file' device='disk'>
	I0328 00:46:57.134629 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0328 00:46:57.134660 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)       <source file='/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/kubernetes-upgrade-615158/kubernetes-upgrade-615158.rawdisk'/>
	I0328 00:46:57.134677 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)       <target dev='hda' bus='virtio'/>
	I0328 00:46:57.134690 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)     </disk>
	I0328 00:46:57.134706 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)     <interface type='network'>
	I0328 00:46:57.134724 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)       <source network='mk-kubernetes-upgrade-615158'/>
	I0328 00:46:57.134743 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)       <model type='virtio'/>
	I0328 00:46:57.134758 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)     </interface>
	I0328 00:46:57.134772 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)     <interface type='network'>
	I0328 00:46:57.134786 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)       <source network='default'/>
	I0328 00:46:57.134802 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)       <model type='virtio'/>
	I0328 00:46:57.134814 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)     </interface>
	I0328 00:46:57.134829 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)     <serial type='pty'>
	I0328 00:46:57.134842 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)       <target port='0'/>
	I0328 00:46:57.134858 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)     </serial>
	I0328 00:46:57.134874 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)     <console type='pty'>
	I0328 00:46:57.134886 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)       <target type='serial' port='0'/>
	I0328 00:46:57.134898 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)     </console>
	I0328 00:46:57.134909 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)     <rng model='virtio'>
	I0328 00:46:57.134925 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)       <backend model='random'>/dev/random</backend>
	I0328 00:46:57.134940 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)     </rng>
	I0328 00:46:57.134953 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)     
	I0328 00:46:57.134964 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)     
	I0328 00:46:57.134977 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158)   </devices>
	I0328 00:46:57.134987 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) </domain>
	I0328 00:46:57.135001 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) 
	I0328 00:46:57.142253 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:ba:78:c8 in network default
	I0328 00:46:57.143070 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Ensuring networks are active...
	I0328 00:46:57.143092 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:46:57.144047 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Ensuring network default is active
	I0328 00:46:57.144440 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Ensuring network mk-kubernetes-upgrade-615158 is active
	I0328 00:46:57.145147 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Getting domain xml...
	I0328 00:46:57.145835 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Creating domain...
	I0328 00:46:58.430395 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Waiting to get IP...
	I0328 00:46:58.431372 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:46:58.431767 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | unable to find current IP address of domain kubernetes-upgrade-615158 in network mk-kubernetes-upgrade-615158
	I0328 00:46:58.431802 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | I0328 00:46:58.431741 1113937 retry.go:31] will retry after 225.556368ms: waiting for machine to come up
	I0328 00:46:58.659569 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:46:58.660190 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | unable to find current IP address of domain kubernetes-upgrade-615158 in network mk-kubernetes-upgrade-615158
	I0328 00:46:58.660218 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | I0328 00:46:58.660132 1113937 retry.go:31] will retry after 338.633708ms: waiting for machine to come up
	I0328 00:46:59.000682 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:46:59.001490 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | unable to find current IP address of domain kubernetes-upgrade-615158 in network mk-kubernetes-upgrade-615158
	I0328 00:46:59.001514 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | I0328 00:46:59.001466 1113937 retry.go:31] will retry after 356.253292ms: waiting for machine to come up
	I0328 00:46:59.358876 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:46:59.359399 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | unable to find current IP address of domain kubernetes-upgrade-615158 in network mk-kubernetes-upgrade-615158
	I0328 00:46:59.359422 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | I0328 00:46:59.359363 1113937 retry.go:31] will retry after 522.722757ms: waiting for machine to come up
	I0328 00:46:59.950057 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:46:59.950544 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | unable to find current IP address of domain kubernetes-upgrade-615158 in network mk-kubernetes-upgrade-615158
	I0328 00:46:59.950582 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | I0328 00:46:59.950522 1113937 retry.go:31] will retry after 594.464892ms: waiting for machine to come up
	I0328 00:47:00.546890 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:00.547476 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | unable to find current IP address of domain kubernetes-upgrade-615158 in network mk-kubernetes-upgrade-615158
	I0328 00:47:00.547505 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | I0328 00:47:00.547420 1113937 retry.go:31] will retry after 858.846427ms: waiting for machine to come up
	I0328 00:47:01.408593 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:01.409124 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | unable to find current IP address of domain kubernetes-upgrade-615158 in network mk-kubernetes-upgrade-615158
	I0328 00:47:01.409148 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | I0328 00:47:01.409060 1113937 retry.go:31] will retry after 966.271938ms: waiting for machine to come up
	I0328 00:47:02.377141 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:02.377776 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | unable to find current IP address of domain kubernetes-upgrade-615158 in network mk-kubernetes-upgrade-615158
	I0328 00:47:02.377805 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | I0328 00:47:02.377733 1113937 retry.go:31] will retry after 1.488474438s: waiting for machine to come up
	I0328 00:47:03.867381 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:03.867776 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | unable to find current IP address of domain kubernetes-upgrade-615158 in network mk-kubernetes-upgrade-615158
	I0328 00:47:03.867804 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | I0328 00:47:03.867729 1113937 retry.go:31] will retry after 1.596143159s: waiting for machine to come up
	I0328 00:47:05.466816 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:05.467434 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | unable to find current IP address of domain kubernetes-upgrade-615158 in network mk-kubernetes-upgrade-615158
	I0328 00:47:05.467468 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | I0328 00:47:05.467376 1113937 retry.go:31] will retry after 2.031840704s: waiting for machine to come up
	I0328 00:47:07.500847 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:07.501386 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | unable to find current IP address of domain kubernetes-upgrade-615158 in network mk-kubernetes-upgrade-615158
	I0328 00:47:07.501432 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | I0328 00:47:07.501306 1113937 retry.go:31] will retry after 2.677312514s: waiting for machine to come up
	I0328 00:47:10.182292 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:10.182795 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | unable to find current IP address of domain kubernetes-upgrade-615158 in network mk-kubernetes-upgrade-615158
	I0328 00:47:10.182823 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | I0328 00:47:10.182746 1113937 retry.go:31] will retry after 3.230415314s: waiting for machine to come up
	I0328 00:47:13.415500 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:13.415941 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | unable to find current IP address of domain kubernetes-upgrade-615158 in network mk-kubernetes-upgrade-615158
	I0328 00:47:13.415960 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | I0328 00:47:13.415912 1113937 retry.go:31] will retry after 3.364434072s: waiting for machine to come up
	I0328 00:47:16.781724 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:16.782149 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | unable to find current IP address of domain kubernetes-upgrade-615158 in network mk-kubernetes-upgrade-615158
	I0328 00:47:16.782173 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | I0328 00:47:16.782101 1113937 retry.go:31] will retry after 5.054358867s: waiting for machine to come up
	I0328 00:47:21.838543 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:21.839076 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Found IP for machine: 192.168.50.160
	I0328 00:47:21.839104 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Reserving static IP address...
	I0328 00:47:21.839121 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has current primary IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:21.839459 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-615158", mac: "52:54:00:9b:31:7a", ip: "192.168.50.160"} in network mk-kubernetes-upgrade-615158
	I0328 00:47:21.919397 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | Getting to WaitForSSH function...
	I0328 00:47:21.919442 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Reserved static IP address: 192.168.50.160
	I0328 00:47:21.919457 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Waiting for SSH to be available...
	I0328 00:47:21.922125 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:21.922560 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:31:7a", ip: ""} in network mk-kubernetes-upgrade-615158: {Iface:virbr2 ExpiryTime:2024-03-28 01:47:12 +0000 UTC Type:0 Mac:52:54:00:9b:31:7a Iaid: IPaddr:192.168.50.160 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9b:31:7a}
	I0328 00:47:21.922591 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:21.922762 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | Using SSH client type: external
	I0328 00:47:21.922796 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/kubernetes-upgrade-615158/id_rsa (-rw-------)
	I0328 00:47:21.922835 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.160 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/kubernetes-upgrade-615158/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 00:47:21.922856 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | About to run SSH command:
	I0328 00:47:21.922871 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | exit 0
	I0328 00:47:22.046464 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | SSH cmd err, output: <nil>: 
	I0328 00:47:22.046717 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) KVM machine creation complete!
	I0328 00:47:22.047005 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetConfigRaw
	I0328 00:47:22.047609 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .DriverName
	I0328 00:47:22.047875 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .DriverName
	I0328 00:47:22.048114 1113654 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0328 00:47:22.048142 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetState
	I0328 00:47:22.049445 1113654 main.go:141] libmachine: Detecting operating system of created instance...
	I0328 00:47:22.049461 1113654 main.go:141] libmachine: Waiting for SSH to be available...
	I0328 00:47:22.049467 1113654 main.go:141] libmachine: Getting to WaitForSSH function...
	I0328 00:47:22.049474 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHHostname
	I0328 00:47:22.051860 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:22.052329 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:31:7a", ip: ""} in network mk-kubernetes-upgrade-615158: {Iface:virbr2 ExpiryTime:2024-03-28 01:47:12 +0000 UTC Type:0 Mac:52:54:00:9b:31:7a Iaid: IPaddr:192.168.50.160 Prefix:24 Hostname:kubernetes-upgrade-615158 Clientid:01:52:54:00:9b:31:7a}
	I0328 00:47:22.052361 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:22.052517 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHPort
	I0328 00:47:22.052735 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHKeyPath
	I0328 00:47:22.052994 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHKeyPath
	I0328 00:47:22.053173 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHUsername
	I0328 00:47:22.053373 1113654 main.go:141] libmachine: Using SSH client type: native
	I0328 00:47:22.053616 1113654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.160 22 <nil> <nil>}
	I0328 00:47:22.053631 1113654 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0328 00:47:22.153747 1113654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 00:47:22.153779 1113654 main.go:141] libmachine: Detecting the provisioner...
	I0328 00:47:22.153789 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHHostname
	I0328 00:47:22.156777 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:22.157113 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:31:7a", ip: ""} in network mk-kubernetes-upgrade-615158: {Iface:virbr2 ExpiryTime:2024-03-28 01:47:12 +0000 UTC Type:0 Mac:52:54:00:9b:31:7a Iaid: IPaddr:192.168.50.160 Prefix:24 Hostname:kubernetes-upgrade-615158 Clientid:01:52:54:00:9b:31:7a}
	I0328 00:47:22.157148 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:22.157305 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHPort
	I0328 00:47:22.157519 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHKeyPath
	I0328 00:47:22.157701 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHKeyPath
	I0328 00:47:22.157896 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHUsername
	I0328 00:47:22.158088 1113654 main.go:141] libmachine: Using SSH client type: native
	I0328 00:47:22.158285 1113654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.160 22 <nil> <nil>}
	I0328 00:47:22.158297 1113654 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0328 00:47:22.259277 1113654 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0328 00:47:22.259379 1113654 main.go:141] libmachine: found compatible host: buildroot
	I0328 00:47:22.259395 1113654 main.go:141] libmachine: Provisioning with buildroot...
	I0328 00:47:22.259407 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetMachineName
	I0328 00:47:22.259714 1113654 buildroot.go:166] provisioning hostname "kubernetes-upgrade-615158"
	I0328 00:47:22.259743 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetMachineName
	I0328 00:47:22.259948 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHHostname
	I0328 00:47:22.262673 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:22.263017 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:31:7a", ip: ""} in network mk-kubernetes-upgrade-615158: {Iface:virbr2 ExpiryTime:2024-03-28 01:47:12 +0000 UTC Type:0 Mac:52:54:00:9b:31:7a Iaid: IPaddr:192.168.50.160 Prefix:24 Hostname:kubernetes-upgrade-615158 Clientid:01:52:54:00:9b:31:7a}
	I0328 00:47:22.263045 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:22.263187 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHPort
	I0328 00:47:22.263388 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHKeyPath
	I0328 00:47:22.263555 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHKeyPath
	I0328 00:47:22.263696 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHUsername
	I0328 00:47:22.263836 1113654 main.go:141] libmachine: Using SSH client type: native
	I0328 00:47:22.263997 1113654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.160 22 <nil> <nil>}
	I0328 00:47:22.264010 1113654 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-615158 && echo "kubernetes-upgrade-615158" | sudo tee /etc/hostname
	I0328 00:47:22.376302 1113654 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-615158
	
	I0328 00:47:22.376362 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHHostname
	I0328 00:47:22.379232 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:22.379726 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:31:7a", ip: ""} in network mk-kubernetes-upgrade-615158: {Iface:virbr2 ExpiryTime:2024-03-28 01:47:12 +0000 UTC Type:0 Mac:52:54:00:9b:31:7a Iaid: IPaddr:192.168.50.160 Prefix:24 Hostname:kubernetes-upgrade-615158 Clientid:01:52:54:00:9b:31:7a}
	I0328 00:47:22.379771 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:22.379928 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHPort
	I0328 00:47:22.380141 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHKeyPath
	I0328 00:47:22.380342 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHKeyPath
	I0328 00:47:22.380480 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHUsername
	I0328 00:47:22.380696 1113654 main.go:141] libmachine: Using SSH client type: native
	I0328 00:47:22.380861 1113654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.160 22 <nil> <nil>}
	I0328 00:47:22.380878 1113654 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-615158' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-615158/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-615158' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 00:47:22.488458 1113654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 00:47:22.488490 1113654 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 00:47:22.488509 1113654 buildroot.go:174] setting up certificates
	I0328 00:47:22.488519 1113654 provision.go:84] configureAuth start
	I0328 00:47:22.488528 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetMachineName
	I0328 00:47:22.488830 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetIP
	I0328 00:47:22.491644 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:22.492112 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:31:7a", ip: ""} in network mk-kubernetes-upgrade-615158: {Iface:virbr2 ExpiryTime:2024-03-28 01:47:12 +0000 UTC Type:0 Mac:52:54:00:9b:31:7a Iaid: IPaddr:192.168.50.160 Prefix:24 Hostname:kubernetes-upgrade-615158 Clientid:01:52:54:00:9b:31:7a}
	I0328 00:47:22.492140 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:22.492269 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHHostname
	I0328 00:47:22.494573 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:22.494923 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:31:7a", ip: ""} in network mk-kubernetes-upgrade-615158: {Iface:virbr2 ExpiryTime:2024-03-28 01:47:12 +0000 UTC Type:0 Mac:52:54:00:9b:31:7a Iaid: IPaddr:192.168.50.160 Prefix:24 Hostname:kubernetes-upgrade-615158 Clientid:01:52:54:00:9b:31:7a}
	I0328 00:47:22.494954 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:22.495091 1113654 provision.go:143] copyHostCerts
	I0328 00:47:22.495176 1113654 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 00:47:22.495187 1113654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 00:47:22.495247 1113654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 00:47:22.495351 1113654 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 00:47:22.495366 1113654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 00:47:22.495400 1113654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 00:47:22.495545 1113654 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 00:47:22.495563 1113654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 00:47:22.495595 1113654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 00:47:22.495659 1113654 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-615158 san=[127.0.0.1 192.168.50.160 kubernetes-upgrade-615158 localhost minikube]
	I0328 00:47:22.700600 1113654 provision.go:177] copyRemoteCerts
	I0328 00:47:22.700671 1113654 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 00:47:22.700700 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHHostname
	I0328 00:47:22.703461 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:22.703823 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:31:7a", ip: ""} in network mk-kubernetes-upgrade-615158: {Iface:virbr2 ExpiryTime:2024-03-28 01:47:12 +0000 UTC Type:0 Mac:52:54:00:9b:31:7a Iaid: IPaddr:192.168.50.160 Prefix:24 Hostname:kubernetes-upgrade-615158 Clientid:01:52:54:00:9b:31:7a}
	I0328 00:47:22.703859 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:22.704046 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHPort
	I0328 00:47:22.704291 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHKeyPath
	I0328 00:47:22.704481 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHUsername
	I0328 00:47:22.704692 1113654 sshutil.go:53] new ssh client: &{IP:192.168.50.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/kubernetes-upgrade-615158/id_rsa Username:docker}
	I0328 00:47:22.793979 1113654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 00:47:22.820261 1113654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0328 00:47:22.845931 1113654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 00:47:22.871213 1113654 provision.go:87] duration metric: took 382.680164ms to configureAuth
	I0328 00:47:22.871255 1113654 buildroot.go:189] setting minikube options for container-runtime
	I0328 00:47:22.871481 1113654 config.go:182] Loaded profile config "kubernetes-upgrade-615158": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0328 00:47:22.871614 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHHostname
	I0328 00:47:22.874558 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:22.874928 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:31:7a", ip: ""} in network mk-kubernetes-upgrade-615158: {Iface:virbr2 ExpiryTime:2024-03-28 01:47:12 +0000 UTC Type:0 Mac:52:54:00:9b:31:7a Iaid: IPaddr:192.168.50.160 Prefix:24 Hostname:kubernetes-upgrade-615158 Clientid:01:52:54:00:9b:31:7a}
	I0328 00:47:22.874960 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:22.875146 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHPort
	I0328 00:47:22.875379 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHKeyPath
	I0328 00:47:22.875553 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHKeyPath
	I0328 00:47:22.875694 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHUsername
	I0328 00:47:22.875846 1113654 main.go:141] libmachine: Using SSH client type: native
	I0328 00:47:22.876015 1113654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.160 22 <nil> <nil>}
	I0328 00:47:22.876032 1113654 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 00:47:23.138951 1113654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 00:47:23.138985 1113654 main.go:141] libmachine: Checking connection to Docker...
	I0328 00:47:23.138996 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetURL
	I0328 00:47:23.140367 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | Using libvirt version 6000000
	I0328 00:47:23.142971 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:23.143420 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:31:7a", ip: ""} in network mk-kubernetes-upgrade-615158: {Iface:virbr2 ExpiryTime:2024-03-28 01:47:12 +0000 UTC Type:0 Mac:52:54:00:9b:31:7a Iaid: IPaddr:192.168.50.160 Prefix:24 Hostname:kubernetes-upgrade-615158 Clientid:01:52:54:00:9b:31:7a}
	I0328 00:47:23.143454 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:23.143623 1113654 main.go:141] libmachine: Docker is up and running!
	I0328 00:47:23.143639 1113654 main.go:141] libmachine: Reticulating splines...
	I0328 00:47:23.143646 1113654 client.go:171] duration metric: took 26.526312162s to LocalClient.Create
	I0328 00:47:23.143673 1113654 start.go:167] duration metric: took 26.526381681s to libmachine.API.Create "kubernetes-upgrade-615158"
	I0328 00:47:23.143686 1113654 start.go:293] postStartSetup for "kubernetes-upgrade-615158" (driver="kvm2")
	I0328 00:47:23.143704 1113654 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 00:47:23.143728 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .DriverName
	I0328 00:47:23.144000 1113654 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 00:47:23.144032 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHHostname
	I0328 00:47:23.146177 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:23.146528 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:31:7a", ip: ""} in network mk-kubernetes-upgrade-615158: {Iface:virbr2 ExpiryTime:2024-03-28 01:47:12 +0000 UTC Type:0 Mac:52:54:00:9b:31:7a Iaid: IPaddr:192.168.50.160 Prefix:24 Hostname:kubernetes-upgrade-615158 Clientid:01:52:54:00:9b:31:7a}
	I0328 00:47:23.146554 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:23.146727 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHPort
	I0328 00:47:23.146931 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHKeyPath
	I0328 00:47:23.147098 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHUsername
	I0328 00:47:23.147241 1113654 sshutil.go:53] new ssh client: &{IP:192.168.50.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/kubernetes-upgrade-615158/id_rsa Username:docker}
	I0328 00:47:23.227400 1113654 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 00:47:23.232066 1113654 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 00:47:23.232098 1113654 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 00:47:23.232198 1113654 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 00:47:23.232287 1113654 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 00:47:23.232398 1113654 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 00:47:23.244907 1113654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 00:47:23.272158 1113654 start.go:296] duration metric: took 128.449476ms for postStartSetup
	I0328 00:47:23.272230 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetConfigRaw
	I0328 00:47:23.272878 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetIP
	I0328 00:47:23.275450 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:23.275847 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:31:7a", ip: ""} in network mk-kubernetes-upgrade-615158: {Iface:virbr2 ExpiryTime:2024-03-28 01:47:12 +0000 UTC Type:0 Mac:52:54:00:9b:31:7a Iaid: IPaddr:192.168.50.160 Prefix:24 Hostname:kubernetes-upgrade-615158 Clientid:01:52:54:00:9b:31:7a}
	I0328 00:47:23.275887 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:23.276071 1113654 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/config.json ...
	I0328 00:47:23.276253 1113654 start.go:128] duration metric: took 26.680699801s to createHost
	I0328 00:47:23.276306 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHHostname
	I0328 00:47:23.278682 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:23.278982 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:31:7a", ip: ""} in network mk-kubernetes-upgrade-615158: {Iface:virbr2 ExpiryTime:2024-03-28 01:47:12 +0000 UTC Type:0 Mac:52:54:00:9b:31:7a Iaid: IPaddr:192.168.50.160 Prefix:24 Hostname:kubernetes-upgrade-615158 Clientid:01:52:54:00:9b:31:7a}
	I0328 00:47:23.279018 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:23.279135 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHPort
	I0328 00:47:23.279339 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHKeyPath
	I0328 00:47:23.279466 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHKeyPath
	I0328 00:47:23.279584 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHUsername
	I0328 00:47:23.279754 1113654 main.go:141] libmachine: Using SSH client type: native
	I0328 00:47:23.279922 1113654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.160 22 <nil> <nil>}
	I0328 00:47:23.279933 1113654 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0328 00:47:23.379164 1113654 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711586843.364389724
	
	I0328 00:47:23.379189 1113654 fix.go:216] guest clock: 1711586843.364389724
	I0328 00:47:23.379196 1113654 fix.go:229] Guest: 2024-03-28 00:47:23.364389724 +0000 UTC Remote: 2024-03-28 00:47:23.276265825 +0000 UTC m=+72.773138182 (delta=88.123899ms)
	I0328 00:47:23.379217 1113654 fix.go:200] guest clock delta is within tolerance: 88.123899ms
	I0328 00:47:23.379221 1113654 start.go:83] releasing machines lock for "kubernetes-upgrade-615158", held for 26.783833203s
	I0328 00:47:23.379250 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .DriverName
	I0328 00:47:23.379551 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetIP
	I0328 00:47:23.382217 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:23.382658 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:31:7a", ip: ""} in network mk-kubernetes-upgrade-615158: {Iface:virbr2 ExpiryTime:2024-03-28 01:47:12 +0000 UTC Type:0 Mac:52:54:00:9b:31:7a Iaid: IPaddr:192.168.50.160 Prefix:24 Hostname:kubernetes-upgrade-615158 Clientid:01:52:54:00:9b:31:7a}
	I0328 00:47:23.382689 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:23.382879 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .DriverName
	I0328 00:47:23.383440 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .DriverName
	I0328 00:47:23.383654 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .DriverName
	I0328 00:47:23.383751 1113654 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 00:47:23.383796 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHHostname
	I0328 00:47:23.383923 1113654 ssh_runner.go:195] Run: cat /version.json
	I0328 00:47:23.383951 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHHostname
	I0328 00:47:23.386725 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:23.386919 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:23.387095 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:31:7a", ip: ""} in network mk-kubernetes-upgrade-615158: {Iface:virbr2 ExpiryTime:2024-03-28 01:47:12 +0000 UTC Type:0 Mac:52:54:00:9b:31:7a Iaid: IPaddr:192.168.50.160 Prefix:24 Hostname:kubernetes-upgrade-615158 Clientid:01:52:54:00:9b:31:7a}
	I0328 00:47:23.387125 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:23.387260 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHPort
	I0328 00:47:23.387391 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:31:7a", ip: ""} in network mk-kubernetes-upgrade-615158: {Iface:virbr2 ExpiryTime:2024-03-28 01:47:12 +0000 UTC Type:0 Mac:52:54:00:9b:31:7a Iaid: IPaddr:192.168.50.160 Prefix:24 Hostname:kubernetes-upgrade-615158 Clientid:01:52:54:00:9b:31:7a}
	I0328 00:47:23.387449 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHKeyPath
	I0328 00:47:23.387516 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:23.387599 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHPort
	I0328 00:47:23.387673 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHUsername
	I0328 00:47:23.387794 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHKeyPath
	I0328 00:47:23.387898 1113654 sshutil.go:53] new ssh client: &{IP:192.168.50.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/kubernetes-upgrade-615158/id_rsa Username:docker}
	I0328 00:47:23.387918 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHUsername
	I0328 00:47:23.388052 1113654 sshutil.go:53] new ssh client: &{IP:192.168.50.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/kubernetes-upgrade-615158/id_rsa Username:docker}
	I0328 00:47:23.497819 1113654 ssh_runner.go:195] Run: systemctl --version
	I0328 00:47:23.506336 1113654 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 00:47:23.670345 1113654 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 00:47:23.676392 1113654 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 00:47:23.676470 1113654 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 00:47:23.692834 1113654 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 00:47:23.692864 1113654 start.go:494] detecting cgroup driver to use...
	I0328 00:47:23.692945 1113654 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 00:47:23.711918 1113654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 00:47:23.728688 1113654 docker.go:217] disabling cri-docker service (if available) ...
	I0328 00:47:23.728757 1113654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 00:47:23.744702 1113654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 00:47:23.760366 1113654 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 00:47:23.892068 1113654 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 00:47:24.064190 1113654 docker.go:233] disabling docker service ...
	I0328 00:47:24.064282 1113654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 00:47:24.080441 1113654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 00:47:24.094391 1113654 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 00:47:24.214804 1113654 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 00:47:24.334756 1113654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 00:47:24.350525 1113654 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 00:47:24.370783 1113654 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0328 00:47:24.370865 1113654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:47:24.382040 1113654 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 00:47:24.382124 1113654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:47:24.393089 1113654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:47:24.404210 1113654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:47:24.415589 1113654 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 00:47:24.427705 1113654 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 00:47:24.438588 1113654 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 00:47:24.438663 1113654 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 00:47:24.453397 1113654 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 00:47:24.464855 1113654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:47:24.581049 1113654 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 00:47:24.743169 1113654 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 00:47:24.743259 1113654 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 00:47:24.748851 1113654 start.go:562] Will wait 60s for crictl version
	I0328 00:47:24.748915 1113654 ssh_runner.go:195] Run: which crictl
	I0328 00:47:24.753878 1113654 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 00:47:24.797025 1113654 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 00:47:24.797133 1113654 ssh_runner.go:195] Run: crio --version
	I0328 00:47:24.827539 1113654 ssh_runner.go:195] Run: crio --version
	I0328 00:47:24.866601 1113654 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0328 00:47:24.867733 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetIP
	I0328 00:47:24.870830 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:24.871361 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:31:7a", ip: ""} in network mk-kubernetes-upgrade-615158: {Iface:virbr2 ExpiryTime:2024-03-28 01:47:12 +0000 UTC Type:0 Mac:52:54:00:9b:31:7a Iaid: IPaddr:192.168.50.160 Prefix:24 Hostname:kubernetes-upgrade-615158 Clientid:01:52:54:00:9b:31:7a}
	I0328 00:47:24.871394 1113654 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:47:24.871661 1113654 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0328 00:47:24.877017 1113654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 00:47:24.893916 1113654 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-615158 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-615158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.160 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 00:47:24.894063 1113654 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0328 00:47:24.894167 1113654 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 00:47:24.943009 1113654 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0328 00:47:24.943098 1113654 ssh_runner.go:195] Run: which lz4
	I0328 00:47:24.947817 1113654 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0328 00:47:24.953740 1113654 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 00:47:24.953779 1113654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0328 00:47:26.936216 1113654 crio.go:462] duration metric: took 1.988440553s to copy over tarball
	I0328 00:47:26.936323 1113654 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 00:47:29.721435 1113654 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.785074256s)
	I0328 00:47:29.721472 1113654 crio.go:469] duration metric: took 2.785216354s to extract the tarball
	I0328 00:47:29.721484 1113654 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0328 00:47:29.767942 1113654 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 00:47:29.817939 1113654 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0328 00:47:29.818036 1113654 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0328 00:47:29.818103 1113654 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 00:47:29.818169 1113654 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 00:47:29.818218 1113654 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 00:47:29.818257 1113654 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 00:47:29.818261 1113654 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0328 00:47:29.818443 1113654 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0328 00:47:29.818452 1113654 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 00:47:29.818240 1113654 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0328 00:47:29.819854 1113654 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0328 00:47:29.819881 1113654 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 00:47:29.819889 1113654 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 00:47:29.819868 1113654 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0328 00:47:29.819853 1113654 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 00:47:29.820021 1113654 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0328 00:47:29.820029 1113654 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 00:47:29.820181 1113654 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 00:47:30.038785 1113654 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0328 00:47:30.049332 1113654 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0328 00:47:30.055012 1113654 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0328 00:47:30.055041 1113654 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0328 00:47:30.055852 1113654 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0328 00:47:30.056825 1113654 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 00:47:30.057022 1113654 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0328 00:47:30.146323 1113654 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0328 00:47:30.146383 1113654 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 00:47:30.146442 1113654 ssh_runner.go:195] Run: which crictl
	I0328 00:47:30.217415 1113654 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0328 00:47:30.217475 1113654 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0328 00:47:30.217479 1113654 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0328 00:47:30.217520 1113654 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 00:47:30.217529 1113654 ssh_runner.go:195] Run: which crictl
	I0328 00:47:30.217526 1113654 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0328 00:47:30.217559 1113654 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0328 00:47:30.217574 1113654 ssh_runner.go:195] Run: which crictl
	I0328 00:47:30.217605 1113654 ssh_runner.go:195] Run: which crictl
	I0328 00:47:30.237480 1113654 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0328 00:47:30.237533 1113654 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 00:47:30.237589 1113654 ssh_runner.go:195] Run: which crictl
	I0328 00:47:30.256505 1113654 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0328 00:47:30.256556 1113654 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 00:47:30.256613 1113654 ssh_runner.go:195] Run: which crictl
	I0328 00:47:30.259928 1113654 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0328 00:47:30.259953 1113654 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0328 00:47:30.259983 1113654 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0328 00:47:30.259991 1113654 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0328 00:47:30.260023 1113654 ssh_runner.go:195] Run: which crictl
	I0328 00:47:30.260052 1113654 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0328 00:47:30.260076 1113654 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0328 00:47:30.259934 1113654 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0328 00:47:30.267350 1113654 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 00:47:30.402946 1113654 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0328 00:47:30.409204 1113654 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0328 00:47:30.409295 1113654 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0328 00:47:30.409331 1113654 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0328 00:47:30.409295 1113654 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0328 00:47:30.409395 1113654 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0328 00:47:30.409455 1113654 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0328 00:47:30.445005 1113654 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0328 00:47:30.666930 1113654 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 00:47:30.908973 1113654 cache_images.go:92] duration metric: took 1.090906047s to LoadCachedImages
	W0328 00:47:30.909084 1113654 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0328 00:47:30.909105 1113654 kubeadm.go:928] updating node { 192.168.50.160 8443 v1.20.0 crio true true} ...
	I0328 00:47:30.909305 1113654 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-615158 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.160
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-615158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 00:47:30.909384 1113654 ssh_runner.go:195] Run: crio config
	I0328 00:47:30.965947 1113654 cni.go:84] Creating CNI manager for ""
	I0328 00:47:30.965971 1113654 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 00:47:30.965980 1113654 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 00:47:30.965999 1113654 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.160 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-615158 NodeName:kubernetes-upgrade-615158 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.160"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.160 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0328 00:47:30.966168 1113654 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.160
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-615158"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.160
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.160"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 00:47:30.966265 1113654 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0328 00:47:30.977649 1113654 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 00:47:30.977745 1113654 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 00:47:30.988205 1113654 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0328 00:47:31.012325 1113654 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 00:47:31.032158 1113654 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0328 00:47:31.051157 1113654 ssh_runner.go:195] Run: grep 192.168.50.160	control-plane.minikube.internal$ /etc/hosts
	I0328 00:47:31.055570 1113654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.160	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 00:47:31.069514 1113654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:47:31.200949 1113654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 00:47:31.219787 1113654 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158 for IP: 192.168.50.160
	I0328 00:47:31.219822 1113654 certs.go:194] generating shared ca certs ...
	I0328 00:47:31.219845 1113654 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:47:31.220098 1113654 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 00:47:31.220173 1113654 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 00:47:31.220189 1113654 certs.go:256] generating profile certs ...
	I0328 00:47:31.220282 1113654 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/client.key
	I0328 00:47:31.220301 1113654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/client.crt with IP's: []
	I0328 00:47:31.334526 1113654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/client.crt ...
	I0328 00:47:31.334567 1113654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/client.crt: {Name:mk360507744d8f6e2691da5b91c5a3a6c4b052a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:47:31.334790 1113654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/client.key ...
	I0328 00:47:31.334813 1113654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/client.key: {Name:mk073ab04701cfe3609eef57749292b1221ca6c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:47:31.334949 1113654 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/apiserver.key.3c04a02a
	I0328 00:47:31.334973 1113654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/apiserver.crt.3c04a02a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.160]
	I0328 00:47:31.449459 1113654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/apiserver.crt.3c04a02a ...
	I0328 00:47:31.449491 1113654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/apiserver.crt.3c04a02a: {Name:mk3373571f0aa84008e941c81f0bd72d7c30a64b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:47:31.449667 1113654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/apiserver.key.3c04a02a ...
	I0328 00:47:31.449681 1113654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/apiserver.key.3c04a02a: {Name:mk1f8f515b96bf80f3cd46cca8bf7b0800b795ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:47:31.449752 1113654 certs.go:381] copying /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/apiserver.crt.3c04a02a -> /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/apiserver.crt
	I0328 00:47:31.449846 1113654 certs.go:385] copying /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/apiserver.key.3c04a02a -> /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/apiserver.key
	I0328 00:47:31.449903 1113654 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/proxy-client.key
	I0328 00:47:31.449920 1113654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/proxy-client.crt with IP's: []
	I0328 00:47:31.517726 1113654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/proxy-client.crt ...
	I0328 00:47:31.517757 1113654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/proxy-client.crt: {Name:mk12710d9a3313607402232a695bfac470dc2dfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:47:31.517914 1113654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/proxy-client.key ...
	I0328 00:47:31.517927 1113654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/proxy-client.key: {Name:mk9064cb2d6b74488710bf3b35a30fad1469f503 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:47:31.518090 1113654 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 00:47:31.518128 1113654 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 00:47:31.518143 1113654 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 00:47:31.518165 1113654 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 00:47:31.518187 1113654 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 00:47:31.518208 1113654 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 00:47:31.518261 1113654 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 00:47:31.518853 1113654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 00:47:31.549843 1113654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 00:47:31.580241 1113654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 00:47:31.608658 1113654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 00:47:31.636361 1113654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0328 00:47:31.663231 1113654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 00:47:31.689761 1113654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 00:47:31.716174 1113654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0328 00:47:31.745659 1113654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 00:47:31.773916 1113654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 00:47:31.802007 1113654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 00:47:31.829680 1113654 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 00:47:31.849402 1113654 ssh_runner.go:195] Run: openssl version
	I0328 00:47:31.855682 1113654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 00:47:31.868075 1113654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 00:47:31.873509 1113654 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 00:47:31.873589 1113654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 00:47:31.880279 1113654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 00:47:31.893098 1113654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 00:47:31.905468 1113654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 00:47:31.911170 1113654 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 00:47:31.911255 1113654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 00:47:31.918102 1113654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 00:47:31.931732 1113654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 00:47:31.943988 1113654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:47:31.949137 1113654 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:47:31.949210 1113654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:47:31.955356 1113654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 00:47:31.970263 1113654 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 00:47:31.975847 1113654 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0328 00:47:31.975904 1113654 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-615158 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-615158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.160 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:47:31.975983 1113654 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 00:47:31.976047 1113654 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 00:47:32.017795 1113654 cri.go:89] found id: ""
	I0328 00:47:32.017914 1113654 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0328 00:47:32.032638 1113654 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 00:47:32.043653 1113654 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 00:47:32.066806 1113654 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 00:47:32.066830 1113654 kubeadm.go:156] found existing configuration files:
	
	I0328 00:47:32.066894 1113654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 00:47:32.081858 1113654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 00:47:32.081938 1113654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 00:47:32.108324 1113654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 00:47:32.124527 1113654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 00:47:32.124606 1113654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 00:47:32.135639 1113654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 00:47:32.152840 1113654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 00:47:32.152932 1113654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 00:47:32.163879 1113654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 00:47:32.174492 1113654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 00:47:32.174578 1113654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 00:47:32.184727 1113654 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 00:47:32.499972 1113654 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 00:49:30.695009 1113654 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0328 00:49:30.695136 1113654 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0328 00:49:30.696795 1113654 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0328 00:49:30.696861 1113654 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 00:49:30.696952 1113654 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 00:49:30.697089 1113654 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 00:49:30.697246 1113654 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 00:49:30.697345 1113654 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 00:49:30.699389 1113654 out.go:204]   - Generating certificates and keys ...
	I0328 00:49:30.699479 1113654 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 00:49:30.699572 1113654 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 00:49:30.699672 1113654 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0328 00:49:30.699768 1113654 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0328 00:49:30.699853 1113654 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0328 00:49:30.699935 1113654 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0328 00:49:30.700007 1113654 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0328 00:49:30.700177 1113654 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-615158 localhost] and IPs [192.168.50.160 127.0.0.1 ::1]
	I0328 00:49:30.700268 1113654 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0328 00:49:30.700456 1113654 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-615158 localhost] and IPs [192.168.50.160 127.0.0.1 ::1]
	I0328 00:49:30.700545 1113654 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0328 00:49:30.700626 1113654 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0328 00:49:30.700687 1113654 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0328 00:49:30.700867 1113654 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 00:49:30.700953 1113654 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 00:49:30.701036 1113654 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 00:49:30.701127 1113654 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 00:49:30.701199 1113654 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 00:49:30.701350 1113654 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 00:49:30.701454 1113654 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 00:49:30.701507 1113654 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 00:49:30.701601 1113654 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 00:49:30.703200 1113654 out.go:204]   - Booting up control plane ...
	I0328 00:49:30.703299 1113654 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 00:49:30.703384 1113654 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 00:49:30.703486 1113654 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 00:49:30.703626 1113654 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 00:49:30.703835 1113654 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 00:49:30.703910 1113654 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0328 00:49:30.704002 1113654 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 00:49:30.704281 1113654 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 00:49:30.704385 1113654 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 00:49:30.704580 1113654 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 00:49:30.704675 1113654 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 00:49:30.704909 1113654 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 00:49:30.704988 1113654 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 00:49:30.705243 1113654 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 00:49:30.705333 1113654 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 00:49:30.705562 1113654 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 00:49:30.705583 1113654 kubeadm.go:309] 
	I0328 00:49:30.705647 1113654 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0328 00:49:30.705715 1113654 kubeadm.go:309] 		timed out waiting for the condition
	I0328 00:49:30.705728 1113654 kubeadm.go:309] 
	I0328 00:49:30.705783 1113654 kubeadm.go:309] 	This error is likely caused by:
	I0328 00:49:30.705847 1113654 kubeadm.go:309] 		- The kubelet is not running
	I0328 00:49:30.705982 1113654 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0328 00:49:30.705993 1113654 kubeadm.go:309] 
	I0328 00:49:30.706165 1113654 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0328 00:49:30.706200 1113654 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0328 00:49:30.706276 1113654 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0328 00:49:30.706288 1113654 kubeadm.go:309] 
	I0328 00:49:30.706408 1113654 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0328 00:49:30.706517 1113654 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0328 00:49:30.706536 1113654 kubeadm.go:309] 
	I0328 00:49:30.706678 1113654 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0328 00:49:30.706795 1113654 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0328 00:49:30.706896 1113654 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0328 00:49:30.706976 1113654 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0328 00:49:30.706998 1113654 kubeadm.go:309] 
	W0328 00:49:30.707171 1113654 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-615158 localhost] and IPs [192.168.50.160 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-615158 localhost] and IPs [192.168.50.160 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0328 00:49:30.707218 1113654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 00:49:32.900323 1113654 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.193063569s)
	I0328 00:49:32.900424 1113654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:49:32.918505 1113654 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 00:49:32.932954 1113654 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 00:49:32.932979 1113654 kubeadm.go:156] found existing configuration files:
	
	I0328 00:49:32.933039 1113654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 00:49:32.945826 1113654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 00:49:32.945904 1113654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 00:49:32.958585 1113654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 00:49:32.972421 1113654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 00:49:32.972520 1113654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 00:49:32.986368 1113654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 00:49:32.999479 1113654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 00:49:32.999546 1113654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 00:49:33.013801 1113654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 00:49:33.025270 1113654 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 00:49:33.025354 1113654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 00:49:33.037780 1113654 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 00:49:33.252815 1113654 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 00:51:29.531776 1113654 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0328 00:51:29.531901 1113654 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0328 00:51:29.533480 1113654 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0328 00:51:29.533604 1113654 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 00:51:29.533706 1113654 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 00:51:29.533840 1113654 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 00:51:29.533966 1113654 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 00:51:29.534055 1113654 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 00:51:29.560981 1113654 out.go:204]   - Generating certificates and keys ...
	I0328 00:51:29.561119 1113654 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 00:51:29.561209 1113654 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 00:51:29.561339 1113654 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 00:51:29.561449 1113654 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 00:51:29.561562 1113654 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 00:51:29.561678 1113654 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 00:51:29.561762 1113654 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 00:51:29.561841 1113654 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 00:51:29.561931 1113654 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 00:51:29.562027 1113654 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 00:51:29.562083 1113654 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 00:51:29.562154 1113654 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 00:51:29.562226 1113654 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 00:51:29.562325 1113654 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 00:51:29.562422 1113654 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 00:51:29.562512 1113654 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 00:51:29.562670 1113654 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 00:51:29.562784 1113654 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 00:51:29.562858 1113654 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 00:51:29.562958 1113654 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 00:51:29.684633 1113654 out.go:204]   - Booting up control plane ...
	I0328 00:51:29.684774 1113654 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 00:51:29.684852 1113654 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 00:51:29.684929 1113654 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 00:51:29.685035 1113654 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 00:51:29.685178 1113654 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 00:51:29.685224 1113654 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0328 00:51:29.685282 1113654 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 00:51:29.685425 1113654 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 00:51:29.685479 1113654 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 00:51:29.685655 1113654 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 00:51:29.685728 1113654 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 00:51:29.685881 1113654 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 00:51:29.685935 1113654 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 00:51:29.686098 1113654 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 00:51:29.686181 1113654 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 00:51:29.686438 1113654 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 00:51:29.686454 1113654 kubeadm.go:309] 
	I0328 00:51:29.686512 1113654 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0328 00:51:29.686571 1113654 kubeadm.go:309] 		timed out waiting for the condition
	I0328 00:51:29.686582 1113654 kubeadm.go:309] 
	I0328 00:51:29.686630 1113654 kubeadm.go:309] 	This error is likely caused by:
	I0328 00:51:29.686685 1113654 kubeadm.go:309] 		- The kubelet is not running
	I0328 00:51:29.686844 1113654 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0328 00:51:29.686855 1113654 kubeadm.go:309] 
	I0328 00:51:29.686992 1113654 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0328 00:51:29.687047 1113654 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0328 00:51:29.687105 1113654 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0328 00:51:29.687123 1113654 kubeadm.go:309] 
	I0328 00:51:29.687273 1113654 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0328 00:51:29.687384 1113654 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0328 00:51:29.687393 1113654 kubeadm.go:309] 
	I0328 00:51:29.687487 1113654 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0328 00:51:29.687568 1113654 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0328 00:51:29.687641 1113654 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0328 00:51:29.687706 1113654 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0328 00:51:29.687762 1113654 kubeadm.go:309] 
	I0328 00:51:29.687805 1113654 kubeadm.go:393] duration metric: took 3m57.711904117s to StartCluster
	I0328 00:51:29.687861 1113654 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 00:51:29.687919 1113654 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 00:51:29.742650 1113654 cri.go:89] found id: ""
	I0328 00:51:29.742689 1113654 logs.go:276] 0 containers: []
	W0328 00:51:29.742701 1113654 logs.go:278] No container was found matching "kube-apiserver"
	I0328 00:51:29.742708 1113654 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 00:51:29.742785 1113654 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 00:51:29.781267 1113654 cri.go:89] found id: ""
	I0328 00:51:29.781302 1113654 logs.go:276] 0 containers: []
	W0328 00:51:29.781311 1113654 logs.go:278] No container was found matching "etcd"
	I0328 00:51:29.781317 1113654 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 00:51:29.781383 1113654 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 00:51:29.823976 1113654 cri.go:89] found id: ""
	I0328 00:51:29.824009 1113654 logs.go:276] 0 containers: []
	W0328 00:51:29.824022 1113654 logs.go:278] No container was found matching "coredns"
	I0328 00:51:29.824031 1113654 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 00:51:29.824095 1113654 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 00:51:29.864130 1113654 cri.go:89] found id: ""
	I0328 00:51:29.864160 1113654 logs.go:276] 0 containers: []
	W0328 00:51:29.864169 1113654 logs.go:278] No container was found matching "kube-scheduler"
	I0328 00:51:29.864175 1113654 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 00:51:29.864248 1113654 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 00:51:29.903476 1113654 cri.go:89] found id: ""
	I0328 00:51:29.903506 1113654 logs.go:276] 0 containers: []
	W0328 00:51:29.903516 1113654 logs.go:278] No container was found matching "kube-proxy"
	I0328 00:51:29.903524 1113654 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 00:51:29.903587 1113654 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 00:51:29.941377 1113654 cri.go:89] found id: ""
	I0328 00:51:29.941408 1113654 logs.go:276] 0 containers: []
	W0328 00:51:29.941417 1113654 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 00:51:29.941424 1113654 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 00:51:29.941483 1113654 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 00:51:29.982055 1113654 cri.go:89] found id: ""
	I0328 00:51:29.982092 1113654 logs.go:276] 0 containers: []
	W0328 00:51:29.982105 1113654 logs.go:278] No container was found matching "kindnet"
	I0328 00:51:29.982119 1113654 logs.go:123] Gathering logs for container status ...
	I0328 00:51:29.982137 1113654 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 00:51:30.033139 1113654 logs.go:123] Gathering logs for kubelet ...
	I0328 00:51:30.033176 1113654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 00:51:30.089854 1113654 logs.go:123] Gathering logs for dmesg ...
	I0328 00:51:30.089895 1113654 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 00:51:30.105738 1113654 logs.go:123] Gathering logs for describe nodes ...
	I0328 00:51:30.105775 1113654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 00:51:30.219531 1113654 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 00:51:30.219565 1113654 logs.go:123] Gathering logs for CRI-O ...
	I0328 00:51:30.219582 1113654 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0328 00:51:30.313792 1113654 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0328 00:51:30.313847 1113654 out.go:239] * 
	W0328 00:51:30.313908 1113654 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0328 00:51:30.313937 1113654 out.go:239] * 
	W0328 00:51:30.314699 1113654 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 00:51:30.318348 1113654 out.go:177] 
	W0328 00:51:30.319669 1113654 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0328 00:51:30.319720 1113654 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0328 00:51:30.319742 1113654 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0328 00:51:30.321333 1113654 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-615158 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
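Note: the log above ends with minikube's own suggestion to retry with --extra-config=kubelet.cgroup-driver=systemd. A hypothetical manual retry for this profile, reusing only flags already present in the log, would look like the following sketch (the test itself does not perform this step; it proceeds to stop the profile and restart it on v1.30.0-beta.0 below):

	out/minikube-linux-amd64 start -p kubernetes-upgrade-615158 --memory=2200 --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio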
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-615158
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-615158: (3.316450858s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-615158 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-615158 status --format={{.Host}}: exit status 7 (88.362259ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-615158 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-615158 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.149815238s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-615158 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-615158 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-615158 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (120.741172ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-615158] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-615158
	    minikube start -p kubernetes-upgrade-615158 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6151582 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-615158 --kubernetes-version=v1.30.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-615158 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-615158 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (27.289697274s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-03-28 00:52:40.43831137 +0000 UTC m=+4791.689790315
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-615158 -n kubernetes-upgrade-615158
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-615158 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-615158 logs -n 25: (1.504223008s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-443419                         | enable-default-cni-443419 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:52 UTC | 28 Mar 24 00:52 UTC |
	|         | sudo iptables-save                                   |                           |         |                |                     |                     |
	| ssh     | -p enable-default-cni-443419                         | enable-default-cni-443419 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:52 UTC | 28 Mar 24 00:52 UTC |
	|         | sudo iptables -t nat -L -n -v                        |                           |         |                |                     |                     |
	| ssh     | -p enable-default-cni-443419                         | enable-default-cni-443419 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:52 UTC | 28 Mar 24 00:52 UTC |
	|         | sudo systemctl status kubelet                        |                           |         |                |                     |                     |
	|         | --all --full --no-pager                              |                           |         |                |                     |                     |
	| ssh     | -p enable-default-cni-443419                         | enable-default-cni-443419 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:52 UTC | 28 Mar 24 00:52 UTC |
	|         | sudo systemctl cat kubelet                           |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| ssh     | -p enable-default-cni-443419                         | enable-default-cni-443419 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:52 UTC | 28 Mar 24 00:52 UTC |
	|         | sudo journalctl -xeu kubelet                         |                           |         |                |                     |                     |
	|         | --all --full --no-pager                              |                           |         |                |                     |                     |
	| ssh     | -p enable-default-cni-443419                         | enable-default-cni-443419 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:52 UTC | 28 Mar 24 00:52 UTC |
	|         | sudo cat                                             |                           |         |                |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |                |                     |                     |
	| ssh     | -p enable-default-cni-443419                         | enable-default-cni-443419 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:52 UTC | 28 Mar 24 00:52 UTC |
	|         | sudo cat                                             |                           |         |                |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |                |                     |                     |
	| ssh     | -p enable-default-cni-443419                         | enable-default-cni-443419 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:52 UTC |                     |
	|         | sudo systemctl status docker                         |                           |         |                |                     |                     |
	|         | --all --full --no-pager                              |                           |         |                |                     |                     |
	| ssh     | -p enable-default-cni-443419                         | enable-default-cni-443419 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:52 UTC | 28 Mar 24 00:52 UTC |
	|         | sudo systemctl cat docker                            |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| ssh     | -p enable-default-cni-443419                         | enable-default-cni-443419 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:52 UTC | 28 Mar 24 00:52 UTC |
	|         | sudo cat                                             |                           |         |                |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |                |                     |                     |
	| ssh     | -p enable-default-cni-443419                         | enable-default-cni-443419 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:52 UTC |                     |
	|         | sudo docker system info                              |                           |         |                |                     |                     |
	| ssh     | -p enable-default-cni-443419                         | enable-default-cni-443419 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:52 UTC |                     |
	|         | sudo systemctl status                                |                           |         |                |                     |                     |
	|         | cri-docker --all --full                              |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| ssh     | -p enable-default-cni-443419                         | enable-default-cni-443419 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:52 UTC | 28 Mar 24 00:52 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| ssh     | -p enable-default-cni-443419 sudo cat                | enable-default-cni-443419 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:52 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |                |                     |                     |
	| ssh     | -p enable-default-cni-443419 sudo cat                | enable-default-cni-443419 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:52 UTC | 28 Mar 24 00:52 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |                |                     |                     |
	| ssh     | -p enable-default-cni-443419                         | enable-default-cni-443419 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:52 UTC | 28 Mar 24 00:52 UTC |
	|         | sudo cri-dockerd --version                           |                           |         |                |                     |                     |
	| ssh     | -p enable-default-cni-443419                         | enable-default-cni-443419 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:52 UTC |                     |
	|         | sudo systemctl status                                |                           |         |                |                     |                     |
	|         | containerd --all --full                              |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| ssh     | -p enable-default-cni-443419                         | enable-default-cni-443419 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:52 UTC | 28 Mar 24 00:52 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| ssh     | -p enable-default-cni-443419 sudo cat                | enable-default-cni-443419 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:52 UTC | 28 Mar 24 00:52 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |                |                     |                     |
	| ssh     | -p enable-default-cni-443419                         | enable-default-cni-443419 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:52 UTC | 28 Mar 24 00:52 UTC |
	|         | sudo cat                                             |                           |         |                |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |                |                     |                     |
	| ssh     | -p enable-default-cni-443419                         | enable-default-cni-443419 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:52 UTC | 28 Mar 24 00:52 UTC |
	|         | sudo containerd config dump                          |                           |         |                |                     |                     |
	| ssh     | -p enable-default-cni-443419                         | enable-default-cni-443419 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:52 UTC | 28 Mar 24 00:52 UTC |
	|         | sudo systemctl status crio                           |                           |         |                |                     |                     |
	|         | --all --full --no-pager                              |                           |         |                |                     |                     |
	| ssh     | -p enable-default-cni-443419                         | enable-default-cni-443419 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:52 UTC | 28 Mar 24 00:52 UTC |
	|         | sudo systemctl cat crio                              |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| ssh     | -p enable-default-cni-443419                         | enable-default-cni-443419 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:52 UTC | 28 Mar 24 00:52 UTC |
	|         | sudo find /etc/crio -type f                          |                           |         |                |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |                |                     |                     |
	|         | \;                                                   |                           |         |                |                     |                     |
	| ssh     | -p enable-default-cni-443419                         | enable-default-cni-443419 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:52 UTC |                     |
	|         | sudo crio config                                     |                           |         |                |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 00:52:18
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 00:52:18.230654 1123084 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:52:18.230810 1123084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:52:18.230819 1123084 out.go:304] Setting ErrFile to fd 2...
	I0328 00:52:18.230824 1123084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:52:18.231006 1123084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 00:52:18.231633 1123084 out.go:298] Setting JSON to false
	I0328 00:52:18.232868 1123084 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":30835,"bootTime":1711556303,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0328 00:52:18.232943 1123084 start.go:139] virtualization: kvm guest
	I0328 00:52:18.235159 1123084 out.go:177] * [bridge-443419] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0328 00:52:18.236397 1123084 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 00:52:18.237626 1123084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 00:52:18.236477 1123084 notify.go:220] Checking for updates...
	I0328 00:52:18.239059 1123084 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 00:52:18.240441 1123084 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	I0328 00:52:18.241723 1123084 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0328 00:52:18.243044 1123084 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 00:52:18.244754 1123084 config.go:182] Loaded profile config "enable-default-cni-443419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:52:18.244850 1123084 config.go:182] Loaded profile config "flannel-443419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:52:18.244924 1123084 config.go:182] Loaded profile config "kubernetes-upgrade-615158": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0328 00:52:18.245007 1123084 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 00:52:18.289164 1123084 out.go:177] * Using the kvm2 driver based on user configuration
	I0328 00:52:18.290589 1123084 start.go:297] selected driver: kvm2
	I0328 00:52:18.290611 1123084 start.go:901] validating driver "kvm2" against <nil>
	I0328 00:52:18.290629 1123084 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 00:52:18.291452 1123084 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 00:52:18.291595 1123084 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18485-1069254/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0328 00:52:18.308328 1123084 install.go:137] /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0328 00:52:18.308449 1123084 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 00:52:18.308785 1123084 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 00:52:18.308850 1123084 cni.go:84] Creating CNI manager for "bridge"
	I0328 00:52:18.308862 1123084 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0328 00:52:18.308916 1123084 start.go:340] cluster config:
	{Name:bridge-443419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:bridge-443419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:52:18.309042 1123084 iso.go:125] acquiring lock: {Name:mk3da1fa7d63a581c817f327d86a827697458cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 00:52:18.311133 1123084 out.go:177] * Starting "bridge-443419" primary control-plane node in "bridge-443419" cluster
	I0328 00:52:15.882940 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:15.883530 1121345 main.go:141] libmachine: (flannel-443419) DBG | unable to find current IP address of domain flannel-443419 in network mk-flannel-443419
	I0328 00:52:15.883577 1121345 main.go:141] libmachine: (flannel-443419) DBG | I0328 00:52:15.883491 1121368 retry.go:31] will retry after 4.931122007s: waiting for machine to come up
	I0328 00:52:22.295824 1122507 start.go:364] duration metric: took 8.934627556s to acquireMachinesLock for "kubernetes-upgrade-615158"
	I0328 00:52:22.295898 1122507 start.go:96] Skipping create...Using existing machine configuration
	I0328 00:52:22.295909 1122507 fix.go:54] fixHost starting: 
	I0328 00:52:22.296465 1122507 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 00:52:22.296510 1122507 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:52:22.314622 1122507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36445
	I0328 00:52:22.315079 1122507 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:52:22.315637 1122507 main.go:141] libmachine: Using API Version  1
	I0328 00:52:22.315663 1122507 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:52:22.316015 1122507 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:52:22.316261 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .DriverName
	I0328 00:52:22.316432 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetState
	I0328 00:52:22.318218 1122507 fix.go:112] recreateIfNeeded on kubernetes-upgrade-615158: state=Running err=<nil>
	W0328 00:52:22.318266 1122507 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 00:52:22.320513 1122507 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-615158" VM ...
	I0328 00:52:22.321807 1122507 machine.go:94] provisionDockerMachine start ...
	I0328 00:52:22.321829 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .DriverName
	I0328 00:52:22.322042 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHHostname
	I0328 00:52:22.324854 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:52:22.325222 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:31:7a", ip: ""} in network mk-kubernetes-upgrade-615158: {Iface:virbr2 ExpiryTime:2024-03-28 01:47:12 +0000 UTC Type:0 Mac:52:54:00:9b:31:7a Iaid: IPaddr:192.168.50.160 Prefix:24 Hostname:kubernetes-upgrade-615158 Clientid:01:52:54:00:9b:31:7a}
	I0328 00:52:22.325257 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:52:22.325415 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHPort
	I0328 00:52:22.325609 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHKeyPath
	I0328 00:52:22.325791 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHKeyPath
	I0328 00:52:22.325940 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHUsername
	I0328 00:52:22.326119 1122507 main.go:141] libmachine: Using SSH client type: native
	I0328 00:52:22.326382 1122507 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.160 22 <nil> <nil>}
	I0328 00:52:22.326398 1122507 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 00:52:22.443951 1122507 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-615158
	
	I0328 00:52:22.443989 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetMachineName
	I0328 00:52:22.444318 1122507 buildroot.go:166] provisioning hostname "kubernetes-upgrade-615158"
	I0328 00:52:22.444356 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetMachineName
	I0328 00:52:22.444592 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHHostname
	I0328 00:52:22.447653 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:52:22.448068 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:31:7a", ip: ""} in network mk-kubernetes-upgrade-615158: {Iface:virbr2 ExpiryTime:2024-03-28 01:47:12 +0000 UTC Type:0 Mac:52:54:00:9b:31:7a Iaid: IPaddr:192.168.50.160 Prefix:24 Hostname:kubernetes-upgrade-615158 Clientid:01:52:54:00:9b:31:7a}
	I0328 00:52:22.448104 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:52:22.448304 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHPort
	I0328 00:52:22.448542 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHKeyPath
	I0328 00:52:22.448780 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHKeyPath
	I0328 00:52:22.448987 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHUsername
	I0328 00:52:22.449186 1122507 main.go:141] libmachine: Using SSH client type: native
	I0328 00:52:22.449399 1122507 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.160 22 <nil> <nil>}
	I0328 00:52:22.449420 1122507 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-615158 && echo "kubernetes-upgrade-615158" | sudo tee /etc/hostname
	I0328 00:52:22.587214 1122507 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-615158
	
	I0328 00:52:22.587253 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHHostname
	I0328 00:52:22.590388 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:52:22.590829 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:31:7a", ip: ""} in network mk-kubernetes-upgrade-615158: {Iface:virbr2 ExpiryTime:2024-03-28 01:47:12 +0000 UTC Type:0 Mac:52:54:00:9b:31:7a Iaid: IPaddr:192.168.50.160 Prefix:24 Hostname:kubernetes-upgrade-615158 Clientid:01:52:54:00:9b:31:7a}
	I0328 00:52:22.590864 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:52:22.591040 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHPort
	I0328 00:52:22.591265 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHKeyPath
	I0328 00:52:22.591440 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHKeyPath
	I0328 00:52:22.591617 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHUsername
	I0328 00:52:22.591806 1122507 main.go:141] libmachine: Using SSH client type: native
	I0328 00:52:22.592024 1122507 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.160 22 <nil> <nil>}
	I0328 00:52:22.592042 1122507 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-615158' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-615158/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-615158' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 00:52:22.720409 1122507 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 00:52:22.720456 1122507 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 00:52:22.720479 1122507 buildroot.go:174] setting up certificates
	I0328 00:52:22.720494 1122507 provision.go:84] configureAuth start
	I0328 00:52:22.720511 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetMachineName
	I0328 00:52:22.720885 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetIP
	I0328 00:52:22.723863 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:52:22.724413 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:31:7a", ip: ""} in network mk-kubernetes-upgrade-615158: {Iface:virbr2 ExpiryTime:2024-03-28 01:47:12 +0000 UTC Type:0 Mac:52:54:00:9b:31:7a Iaid: IPaddr:192.168.50.160 Prefix:24 Hostname:kubernetes-upgrade-615158 Clientid:01:52:54:00:9b:31:7a}
	I0328 00:52:22.724451 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:52:22.724571 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHHostname
	I0328 00:52:22.727065 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:52:22.727406 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:31:7a", ip: ""} in network mk-kubernetes-upgrade-615158: {Iface:virbr2 ExpiryTime:2024-03-28 01:47:12 +0000 UTC Type:0 Mac:52:54:00:9b:31:7a Iaid: IPaddr:192.168.50.160 Prefix:24 Hostname:kubernetes-upgrade-615158 Clientid:01:52:54:00:9b:31:7a}
	I0328 00:52:22.727456 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:52:22.727509 1122507 provision.go:143] copyHostCerts
	I0328 00:52:22.727587 1122507 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 00:52:22.727601 1122507 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 00:52:22.727669 1122507 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 00:52:22.727787 1122507 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 00:52:22.727798 1122507 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 00:52:22.727832 1122507 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 00:52:22.727911 1122507 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 00:52:22.727920 1122507 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 00:52:22.727951 1122507 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 00:52:22.728026 1122507 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-615158 san=[127.0.0.1 192.168.50.160 kubernetes-upgrade-615158 localhost minikube]
	I0328 00:52:22.774518 1122507 provision.go:177] copyRemoteCerts
	I0328 00:52:22.774595 1122507 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 00:52:22.774630 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHHostname
	I0328 00:52:22.777401 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:52:22.777828 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:31:7a", ip: ""} in network mk-kubernetes-upgrade-615158: {Iface:virbr2 ExpiryTime:2024-03-28 01:47:12 +0000 UTC Type:0 Mac:52:54:00:9b:31:7a Iaid: IPaddr:192.168.50.160 Prefix:24 Hostname:kubernetes-upgrade-615158 Clientid:01:52:54:00:9b:31:7a}
	I0328 00:52:22.777864 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:52:22.778038 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHPort
	I0328 00:52:22.778288 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHKeyPath
	I0328 00:52:22.778461 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHUsername
	I0328 00:52:22.778629 1122507 sshutil.go:53] new ssh client: &{IP:192.168.50.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/kubernetes-upgrade-615158/id_rsa Username:docker}
	I0328 00:52:22.874941 1122507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 00:52:22.911161 1122507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0328 00:52:22.945942 1122507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0328 00:52:22.977880 1122507 provision.go:87] duration metric: took 257.367183ms to configureAuth
	I0328 00:52:22.977914 1122507 buildroot.go:189] setting minikube options for container-runtime
	I0328 00:52:22.978073 1122507 config.go:182] Loaded profile config "kubernetes-upgrade-615158": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0328 00:52:22.978152 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHHostname
	I0328 00:52:22.980916 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:52:22.981344 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:31:7a", ip: ""} in network mk-kubernetes-upgrade-615158: {Iface:virbr2 ExpiryTime:2024-03-28 01:47:12 +0000 UTC Type:0 Mac:52:54:00:9b:31:7a Iaid: IPaddr:192.168.50.160 Prefix:24 Hostname:kubernetes-upgrade-615158 Clientid:01:52:54:00:9b:31:7a}
	I0328 00:52:22.981384 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:52:22.981551 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHPort
	I0328 00:52:22.981745 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHKeyPath
	I0328 00:52:22.981928 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHKeyPath
	I0328 00:52:22.982127 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHUsername
	I0328 00:52:22.982366 1122507 main.go:141] libmachine: Using SSH client type: native
	I0328 00:52:22.982581 1122507 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.160 22 <nil> <nil>}
	I0328 00:52:22.982600 1122507 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 00:52:18.312428 1123084 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 00:52:18.312486 1123084 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0328 00:52:18.312498 1123084 cache.go:56] Caching tarball of preloaded images
	I0328 00:52:18.312603 1123084 preload.go:173] Found /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0328 00:52:18.312618 1123084 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0328 00:52:18.312765 1123084 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/bridge-443419/config.json ...
	I0328 00:52:18.312795 1123084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/bridge-443419/config.json: {Name:mk0d32478534f907170f7b8363608c7fd2e31be2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:52:18.312995 1123084 start.go:360] acquireMachinesLock for bridge-443419: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 00:52:20.819425 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:20.820138 1121345 main.go:141] libmachine: (flannel-443419) Found IP for machine: 192.168.72.7
	I0328 00:52:20.820169 1121345 main.go:141] libmachine: (flannel-443419) Reserving static IP address...
	I0328 00:52:20.820190 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has current primary IP address 192.168.72.7 and MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:20.820543 1121345 main.go:141] libmachine: (flannel-443419) DBG | unable to find host DHCP lease matching {name: "flannel-443419", mac: "52:54:00:89:e8:02", ip: "192.168.72.7"} in network mk-flannel-443419
	I0328 00:52:20.906598 1121345 main.go:141] libmachine: (flannel-443419) DBG | Getting to WaitForSSH function...
	I0328 00:52:20.906721 1121345 main.go:141] libmachine: (flannel-443419) Reserved static IP address: 192.168.72.7
	I0328 00:52:20.906756 1121345 main.go:141] libmachine: (flannel-443419) Waiting for SSH to be available...
	I0328 00:52:20.909502 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:20.910126 1121345 main.go:141] libmachine: (flannel-443419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:e8:02", ip: ""} in network mk-flannel-443419: {Iface:virbr4 ExpiryTime:2024-03-28 01:52:13 +0000 UTC Type:0 Mac:52:54:00:89:e8:02 Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:minikube Clientid:01:52:54:00:89:e8:02}
	I0328 00:52:20.910158 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined IP address 192.168.72.7 and MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:20.910376 1121345 main.go:141] libmachine: (flannel-443419) DBG | Using SSH client type: external
	I0328 00:52:20.910406 1121345 main.go:141] libmachine: (flannel-443419) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/flannel-443419/id_rsa (-rw-------)
	I0328 00:52:20.910436 1121345 main.go:141] libmachine: (flannel-443419) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/flannel-443419/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 00:52:20.910451 1121345 main.go:141] libmachine: (flannel-443419) DBG | About to run SSH command:
	I0328 00:52:20.910470 1121345 main.go:141] libmachine: (flannel-443419) DBG | exit 0
	I0328 00:52:21.034879 1121345 main.go:141] libmachine: (flannel-443419) DBG | SSH cmd err, output: <nil>: 
	I0328 00:52:21.035166 1121345 main.go:141] libmachine: (flannel-443419) KVM machine creation complete!
	I0328 00:52:21.035579 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetConfigRaw
	I0328 00:52:21.036133 1121345 main.go:141] libmachine: (flannel-443419) Calling .DriverName
	I0328 00:52:21.036334 1121345 main.go:141] libmachine: (flannel-443419) Calling .DriverName
	I0328 00:52:21.036490 1121345 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0328 00:52:21.036511 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetState
	I0328 00:52:21.037717 1121345 main.go:141] libmachine: Detecting operating system of created instance...
	I0328 00:52:21.037735 1121345 main.go:141] libmachine: Waiting for SSH to be available...
	I0328 00:52:21.037743 1121345 main.go:141] libmachine: Getting to WaitForSSH function...
	I0328 00:52:21.037750 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHHostname
	I0328 00:52:21.040136 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:21.040432 1121345 main.go:141] libmachine: (flannel-443419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:e8:02", ip: ""} in network mk-flannel-443419: {Iface:virbr4 ExpiryTime:2024-03-28 01:52:13 +0000 UTC Type:0 Mac:52:54:00:89:e8:02 Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:flannel-443419 Clientid:01:52:54:00:89:e8:02}
	I0328 00:52:21.040467 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined IP address 192.168.72.7 and MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:21.040621 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHPort
	I0328 00:52:21.040814 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHKeyPath
	I0328 00:52:21.041025 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHKeyPath
	I0328 00:52:21.041196 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHUsername
	I0328 00:52:21.041387 1121345 main.go:141] libmachine: Using SSH client type: native
	I0328 00:52:21.041581 1121345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.7 22 <nil> <nil>}
	I0328 00:52:21.041595 1121345 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0328 00:52:21.145701 1121345 main.go:141] libmachine: SSH cmd err, output: <nil>: 
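
	The probe above is how machine readiness is detected: a no-op `exit 0` is run over SSH until it succeeds. A rough sketch of such a retry loop is below; the helper name, retry interval, and timeout are illustrative assumptions, not minikube's actual implementation.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForSSH retries a no-op command ("exit 0") over SSH until it succeeds
    // or the deadline passes. Interval and timeout are illustrative only.
    func waitForSSH(host string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if err := exec.Command("ssh", host, "exit", "0").Run(); err == nil {
    			return nil // the guest accepted the connection and ran the command
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("ssh to %s not available after %v", host, timeout)
    }

    func main() {
    	if err := waitForSSH("docker@192.168.72.7", 3*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
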
	I0328 00:52:21.145732 1121345 main.go:141] libmachine: Detecting the provisioner...
	I0328 00:52:21.145746 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHHostname
	I0328 00:52:21.148666 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:21.149067 1121345 main.go:141] libmachine: (flannel-443419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:e8:02", ip: ""} in network mk-flannel-443419: {Iface:virbr4 ExpiryTime:2024-03-28 01:52:13 +0000 UTC Type:0 Mac:52:54:00:89:e8:02 Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:flannel-443419 Clientid:01:52:54:00:89:e8:02}
	I0328 00:52:21.149103 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined IP address 192.168.72.7 and MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:21.149282 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHPort
	I0328 00:52:21.149545 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHKeyPath
	I0328 00:52:21.149757 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHKeyPath
	I0328 00:52:21.149953 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHUsername
	I0328 00:52:21.150171 1121345 main.go:141] libmachine: Using SSH client type: native
	I0328 00:52:21.150395 1121345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.7 22 <nil> <nil>}
	I0328 00:52:21.150408 1121345 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0328 00:52:21.259594 1121345 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0328 00:52:21.259690 1121345 main.go:141] libmachine: found compatible host: buildroot
	I0328 00:52:21.259704 1121345 main.go:141] libmachine: Provisioning with buildroot...
	I0328 00:52:21.259720 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetMachineName
	I0328 00:52:21.260004 1121345 buildroot.go:166] provisioning hostname "flannel-443419"
	I0328 00:52:21.260033 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetMachineName
	I0328 00:52:21.260237 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHHostname
	I0328 00:52:21.262817 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:21.263160 1121345 main.go:141] libmachine: (flannel-443419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:e8:02", ip: ""} in network mk-flannel-443419: {Iface:virbr4 ExpiryTime:2024-03-28 01:52:13 +0000 UTC Type:0 Mac:52:54:00:89:e8:02 Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:flannel-443419 Clientid:01:52:54:00:89:e8:02}
	I0328 00:52:21.263193 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined IP address 192.168.72.7 and MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:21.263294 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHPort
	I0328 00:52:21.263493 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHKeyPath
	I0328 00:52:21.263656 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHKeyPath
	I0328 00:52:21.263801 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHUsername
	I0328 00:52:21.263965 1121345 main.go:141] libmachine: Using SSH client type: native
	I0328 00:52:21.264202 1121345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.7 22 <nil> <nil>}
	I0328 00:52:21.264220 1121345 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-443419 && echo "flannel-443419" | sudo tee /etc/hostname
	I0328 00:52:21.380752 1121345 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-443419
	
	I0328 00:52:21.380786 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHHostname
	I0328 00:52:21.383977 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:21.384373 1121345 main.go:141] libmachine: (flannel-443419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:e8:02", ip: ""} in network mk-flannel-443419: {Iface:virbr4 ExpiryTime:2024-03-28 01:52:13 +0000 UTC Type:0 Mac:52:54:00:89:e8:02 Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:flannel-443419 Clientid:01:52:54:00:89:e8:02}
	I0328 00:52:21.384407 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined IP address 192.168.72.7 and MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:21.384558 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHPort
	I0328 00:52:21.384754 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHKeyPath
	I0328 00:52:21.384957 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHKeyPath
	I0328 00:52:21.385201 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHUsername
	I0328 00:52:21.385444 1121345 main.go:141] libmachine: Using SSH client type: native
	I0328 00:52:21.385665 1121345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.7 22 <nil> <nil>}
	I0328 00:52:21.385693 1121345 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-443419' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-443419/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-443419' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 00:52:21.500386 1121345 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 00:52:21.500420 1121345 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 00:52:21.500448 1121345 buildroot.go:174] setting up certificates
	I0328 00:52:21.500462 1121345 provision.go:84] configureAuth start
	I0328 00:52:21.500471 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetMachineName
	I0328 00:52:21.500815 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetIP
	I0328 00:52:21.503891 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:21.504280 1121345 main.go:141] libmachine: (flannel-443419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:e8:02", ip: ""} in network mk-flannel-443419: {Iface:virbr4 ExpiryTime:2024-03-28 01:52:13 +0000 UTC Type:0 Mac:52:54:00:89:e8:02 Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:flannel-443419 Clientid:01:52:54:00:89:e8:02}
	I0328 00:52:21.504310 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined IP address 192.168.72.7 and MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:21.504468 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHHostname
	I0328 00:52:21.506889 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:21.507268 1121345 main.go:141] libmachine: (flannel-443419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:e8:02", ip: ""} in network mk-flannel-443419: {Iface:virbr4 ExpiryTime:2024-03-28 01:52:13 +0000 UTC Type:0 Mac:52:54:00:89:e8:02 Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:flannel-443419 Clientid:01:52:54:00:89:e8:02}
	I0328 00:52:21.507293 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined IP address 192.168.72.7 and MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:21.507444 1121345 provision.go:143] copyHostCerts
	I0328 00:52:21.507530 1121345 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 00:52:21.507555 1121345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 00:52:21.507636 1121345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 00:52:21.507756 1121345 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 00:52:21.507768 1121345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 00:52:21.507801 1121345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 00:52:21.507877 1121345 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 00:52:21.507887 1121345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 00:52:21.507917 1121345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 00:52:21.507985 1121345 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.flannel-443419 san=[127.0.0.1 192.168.72.7 flannel-443419 localhost minikube]
	I0328 00:52:21.598053 1121345 provision.go:177] copyRemoteCerts
	I0328 00:52:21.598135 1121345 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 00:52:21.598171 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHHostname
	I0328 00:52:21.601046 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:21.601328 1121345 main.go:141] libmachine: (flannel-443419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:e8:02", ip: ""} in network mk-flannel-443419: {Iface:virbr4 ExpiryTime:2024-03-28 01:52:13 +0000 UTC Type:0 Mac:52:54:00:89:e8:02 Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:flannel-443419 Clientid:01:52:54:00:89:e8:02}
	I0328 00:52:21.601357 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined IP address 192.168.72.7 and MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:21.601499 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHPort
	I0328 00:52:21.601719 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHKeyPath
	I0328 00:52:21.601920 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHUsername
	I0328 00:52:21.602073 1121345 sshutil.go:53] new ssh client: &{IP:192.168.72.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/flannel-443419/id_rsa Username:docker}
	I0328 00:52:21.688927 1121345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 00:52:21.718398 1121345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0328 00:52:21.745253 1121345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 00:52:21.771340 1121345 provision.go:87] duration metric: took 270.861756ms to configureAuth
	I0328 00:52:21.771380 1121345 buildroot.go:189] setting minikube options for container-runtime
	I0328 00:52:21.771573 1121345 config.go:182] Loaded profile config "flannel-443419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:52:21.771684 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHHostname
	I0328 00:52:21.774576 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:21.774936 1121345 main.go:141] libmachine: (flannel-443419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:e8:02", ip: ""} in network mk-flannel-443419: {Iface:virbr4 ExpiryTime:2024-03-28 01:52:13 +0000 UTC Type:0 Mac:52:54:00:89:e8:02 Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:flannel-443419 Clientid:01:52:54:00:89:e8:02}
	I0328 00:52:21.774994 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined IP address 192.168.72.7 and MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:21.775179 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHPort
	I0328 00:52:21.775396 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHKeyPath
	I0328 00:52:21.775583 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHKeyPath
	I0328 00:52:21.775782 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHUsername
	I0328 00:52:21.775984 1121345 main.go:141] libmachine: Using SSH client type: native
	I0328 00:52:21.776144 1121345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.7 22 <nil> <nil>}
	I0328 00:52:21.776159 1121345 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 00:52:22.044073 1121345 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 00:52:22.044109 1121345 main.go:141] libmachine: Checking connection to Docker...
	I0328 00:52:22.044117 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetURL
	I0328 00:52:22.045484 1121345 main.go:141] libmachine: (flannel-443419) DBG | Using libvirt version 6000000
	I0328 00:52:22.048421 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:22.048789 1121345 main.go:141] libmachine: (flannel-443419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:e8:02", ip: ""} in network mk-flannel-443419: {Iface:virbr4 ExpiryTime:2024-03-28 01:52:13 +0000 UTC Type:0 Mac:52:54:00:89:e8:02 Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:flannel-443419 Clientid:01:52:54:00:89:e8:02}
	I0328 00:52:22.048817 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined IP address 192.168.72.7 and MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:22.048964 1121345 main.go:141] libmachine: Docker is up and running!
	I0328 00:52:22.048979 1121345 main.go:141] libmachine: Reticulating splines...
	I0328 00:52:22.048986 1121345 client.go:171] duration metric: took 26.232538626s to LocalClient.Create
	I0328 00:52:22.049010 1121345 start.go:167] duration metric: took 26.232607328s to libmachine.API.Create "flannel-443419"
	I0328 00:52:22.049020 1121345 start.go:293] postStartSetup for "flannel-443419" (driver="kvm2")
	I0328 00:52:22.049037 1121345 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 00:52:22.049084 1121345 main.go:141] libmachine: (flannel-443419) Calling .DriverName
	I0328 00:52:22.049374 1121345 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 00:52:22.049398 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHHostname
	I0328 00:52:22.051620 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:22.051927 1121345 main.go:141] libmachine: (flannel-443419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:e8:02", ip: ""} in network mk-flannel-443419: {Iface:virbr4 ExpiryTime:2024-03-28 01:52:13 +0000 UTC Type:0 Mac:52:54:00:89:e8:02 Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:flannel-443419 Clientid:01:52:54:00:89:e8:02}
	I0328 00:52:22.051954 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined IP address 192.168.72.7 and MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:22.052103 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHPort
	I0328 00:52:22.052303 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHKeyPath
	I0328 00:52:22.052451 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHUsername
	I0328 00:52:22.052615 1121345 sshutil.go:53] new ssh client: &{IP:192.168.72.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/flannel-443419/id_rsa Username:docker}
	I0328 00:52:22.134712 1121345 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 00:52:22.140357 1121345 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 00:52:22.140389 1121345 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 00:52:22.140479 1121345 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 00:52:22.140566 1121345 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 00:52:22.140681 1121345 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 00:52:22.152014 1121345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 00:52:22.178062 1121345 start.go:296] duration metric: took 129.022587ms for postStartSetup
	I0328 00:52:22.178117 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetConfigRaw
	I0328 00:52:22.178780 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetIP
	I0328 00:52:22.181782 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:22.182205 1121345 main.go:141] libmachine: (flannel-443419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:e8:02", ip: ""} in network mk-flannel-443419: {Iface:virbr4 ExpiryTime:2024-03-28 01:52:13 +0000 UTC Type:0 Mac:52:54:00:89:e8:02 Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:flannel-443419 Clientid:01:52:54:00:89:e8:02}
	I0328 00:52:22.182256 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined IP address 192.168.72.7 and MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:22.182545 1121345 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/config.json ...
	I0328 00:52:22.182791 1121345 start.go:128] duration metric: took 26.390238713s to createHost
	I0328 00:52:22.182818 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHHostname
	I0328 00:52:22.185322 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:22.185722 1121345 main.go:141] libmachine: (flannel-443419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:e8:02", ip: ""} in network mk-flannel-443419: {Iface:virbr4 ExpiryTime:2024-03-28 01:52:13 +0000 UTC Type:0 Mac:52:54:00:89:e8:02 Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:flannel-443419 Clientid:01:52:54:00:89:e8:02}
	I0328 00:52:22.185762 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined IP address 192.168.72.7 and MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:22.185959 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHPort
	I0328 00:52:22.186139 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHKeyPath
	I0328 00:52:22.186294 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHKeyPath
	I0328 00:52:22.186410 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHUsername
	I0328 00:52:22.186575 1121345 main.go:141] libmachine: Using SSH client type: native
	I0328 00:52:22.186854 1121345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.7 22 <nil> <nil>}
	I0328 00:52:22.186878 1121345 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 00:52:22.295632 1121345 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587142.237217729
	
	I0328 00:52:22.295659 1121345 fix.go:216] guest clock: 1711587142.237217729
	I0328 00:52:22.295670 1121345 fix.go:229] Guest: 2024-03-28 00:52:22.237217729 +0000 UTC Remote: 2024-03-28 00:52:22.182802717 +0000 UTC m=+27.512240636 (delta=54.415012ms)
	I0328 00:52:22.295698 1121345 fix.go:200] guest clock delta is within tolerance: 54.415012ms
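
	The guest-clock entries above compare the VM's wall clock with the host's and proceed only when the difference stays under a tolerance. A minimal sketch of that comparison follows; the one-second threshold and helper name are assumptions for illustration, not the values used in fix.go.

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockDeltaWithinTolerance reports the absolute difference between the
    // guest and host clocks and whether it is within the given tolerance.
    func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance
    }

    func main() {
    	// Values taken from the log above: guest clock 1711587142.237217729,
    	// host reading 54.415012ms earlier.
    	guest := time.Unix(0, 1711587142237217729)
    	host := guest.Add(-54415012 * time.Nanosecond)
    	if delta, ok := clockDeltaWithinTolerance(guest, host, time.Second); ok {
    		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
    	}
    }
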
	I0328 00:52:22.295705 1121345 start.go:83] releasing machines lock for "flannel-443419", held for 26.503354544s
	I0328 00:52:22.295730 1121345 main.go:141] libmachine: (flannel-443419) Calling .DriverName
	I0328 00:52:22.296077 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetIP
	I0328 00:52:22.298990 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:22.299376 1121345 main.go:141] libmachine: (flannel-443419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:e8:02", ip: ""} in network mk-flannel-443419: {Iface:virbr4 ExpiryTime:2024-03-28 01:52:13 +0000 UTC Type:0 Mac:52:54:00:89:e8:02 Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:flannel-443419 Clientid:01:52:54:00:89:e8:02}
	I0328 00:52:22.299409 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined IP address 192.168.72.7 and MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:22.299531 1121345 main.go:141] libmachine: (flannel-443419) Calling .DriverName
	I0328 00:52:22.300083 1121345 main.go:141] libmachine: (flannel-443419) Calling .DriverName
	I0328 00:52:22.300304 1121345 main.go:141] libmachine: (flannel-443419) Calling .DriverName
	I0328 00:52:22.300406 1121345 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 00:52:22.300457 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHHostname
	I0328 00:52:22.300583 1121345 ssh_runner.go:195] Run: cat /version.json
	I0328 00:52:22.300610 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHHostname
	I0328 00:52:22.303777 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:22.304176 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:22.304899 1121345 main.go:141] libmachine: (flannel-443419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:e8:02", ip: ""} in network mk-flannel-443419: {Iface:virbr4 ExpiryTime:2024-03-28 01:52:13 +0000 UTC Type:0 Mac:52:54:00:89:e8:02 Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:flannel-443419 Clientid:01:52:54:00:89:e8:02}
	I0328 00:52:22.304930 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined IP address 192.168.72.7 and MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:22.304953 1121345 main.go:141] libmachine: (flannel-443419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:e8:02", ip: ""} in network mk-flannel-443419: {Iface:virbr4 ExpiryTime:2024-03-28 01:52:13 +0000 UTC Type:0 Mac:52:54:00:89:e8:02 Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:flannel-443419 Clientid:01:52:54:00:89:e8:02}
	I0328 00:52:22.304968 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined IP address 192.168.72.7 and MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:22.305004 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHPort
	I0328 00:52:22.305207 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHKeyPath
	I0328 00:52:22.305245 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHPort
	I0328 00:52:22.305392 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHKeyPath
	I0328 00:52:22.305447 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHUsername
	I0328 00:52:22.305568 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetSSHUsername
	I0328 00:52:22.305650 1121345 sshutil.go:53] new ssh client: &{IP:192.168.72.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/flannel-443419/id_rsa Username:docker}
	I0328 00:52:22.305858 1121345 sshutil.go:53] new ssh client: &{IP:192.168.72.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/flannel-443419/id_rsa Username:docker}
	I0328 00:52:22.384069 1121345 ssh_runner.go:195] Run: systemctl --version
	I0328 00:52:22.423787 1121345 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 00:52:22.598330 1121345 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 00:52:22.605892 1121345 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 00:52:22.605966 1121345 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 00:52:22.622567 1121345 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 00:52:22.622592 1121345 start.go:494] detecting cgroup driver to use...
	I0328 00:52:22.622671 1121345 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 00:52:22.639507 1121345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 00:52:22.656477 1121345 docker.go:217] disabling cri-docker service (if available) ...
	I0328 00:52:22.656538 1121345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 00:52:22.672916 1121345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 00:52:22.687802 1121345 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 00:52:22.808691 1121345 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 00:52:22.990192 1121345 docker.go:233] disabling docker service ...
	I0328 00:52:22.990282 1121345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 00:52:23.008582 1121345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 00:52:23.023621 1121345 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 00:52:23.187676 1121345 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 00:52:23.358336 1121345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
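
	The systemctl runs above switch the guest to cri-o by stopping, disabling, and masking the cri-docker and docker units before crio is configured. A compressed sketch of that sequence is shown below; minikube issues these over its ssh_runner, whereas this illustration simply shells out locally.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // disableDockerUnits mirrors the systemctl sequence in the log: stop,
    // disable, and mask the cri-docker and docker units so only cri-o remains.
    func disableDockerUnits() {
    	steps := [][]string{
    		{"systemctl", "stop", "-f", "cri-docker.socket"},
    		{"systemctl", "stop", "-f", "cri-docker.service"},
    		{"systemctl", "disable", "cri-docker.socket"},
    		{"systemctl", "mask", "cri-docker.service"},
    		{"systemctl", "stop", "-f", "docker.socket"},
    		{"systemctl", "stop", "-f", "docker.service"},
    		{"systemctl", "disable", "docker.socket"},
    		{"systemctl", "mask", "docker.service"},
    	}
    	for _, s := range steps {
    		// Failures are reported but not fatal: a unit that is already
    		// absent or already masked is acceptable for this purpose.
    		if err := exec.Command("sudo", s...).Run(); err != nil {
    			fmt.Printf("%v: %v\n", s, err)
    		}
    	}
    }

    func main() { disableDockerUnits() }
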
	I0328 00:52:23.378323 1121345 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 00:52:23.404275 1121345 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 00:52:23.404341 1121345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:52:23.416830 1121345 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 00:52:23.416918 1121345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:52:23.430755 1121345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:52:23.446667 1121345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:52:23.462694 1121345 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 00:52:23.477978 1121345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:52:23.491873 1121345 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:52:23.512315 1121345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
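
	The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf: the pause image, the cgroup manager, the conmon cgroup, and a default_sysctls entry that opens unprivileged ports. The sketch below applies the same edits to the config text in memory rather than via sed over SSH; the key names come from the log, but the function itself is only an illustration.

    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    // rewriteCrioConf applies the same edits the sed pipeline in the log
    // performs, but on an in-memory copy of 02-crio.conf.
    func rewriteCrioConf(conf string) string {
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	if !strings.Contains(conf, "default_sysctls") {
    		conf += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
    	}
    	return conf
    }

    func main() {
    	original := "pause_image = \"k8s.gcr.io/pause:3.6\"\ncgroup_manager = \"systemd\"\n"
    	fmt.Print(rewriteCrioConf(original))
    }
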
	I0328 00:52:23.524538 1121345 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 00:52:23.538693 1121345 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 00:52:23.538788 1121345 ssh_runner.go:195] Run: sudo modprobe br_netfilter
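
	The two entries above are the netfilter probe: reading net.bridge.bridge-nf-call-iptables fails while br_netfilter is not loaded, which is tolerated, and the module is then loaded as a fallback. A hedged sketch of that check-then-load pattern follows; the runCmd helper is a stand-in for minikube's ssh_runner, not its real API.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runCmd is an illustrative stand-in for minikube's ssh_runner: it runs a
    // command and returns its combined output and error.
    func runCmd(name string, args ...string) (string, error) {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	return string(out), err
    }

    func ensureBridgeNetfilter() error {
    	// The sysctl only exists once br_netfilter is loaded, so a failure here
    	// "might be okay", exactly as the log notes.
    	if _, err := runCmd("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err == nil {
    		return nil
    	}
    	if _, err := runCmd("sudo", "modprobe", "br_netfilter"); err != nil {
    		return fmt.Errorf("loading br_netfilter: %w", err)
    	}
    	return nil
    }

    func main() {
    	if err := ensureBridgeNetfilter(); err != nil {
    		fmt.Println(err)
    	}
    }
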
	I0328 00:52:23.555005 1121345 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 00:52:23.566605 1121345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:52:23.733703 1121345 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 00:52:23.899499 1121345 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 00:52:23.899593 1121345 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 00:52:23.904781 1121345 start.go:562] Will wait 60s for crictl version
	I0328 00:52:23.904857 1121345 ssh_runner.go:195] Run: which crictl
	I0328 00:52:23.909269 1121345 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 00:52:23.952988 1121345 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 00:52:23.953087 1121345 ssh_runner.go:195] Run: crio --version
	I0328 00:52:23.987860 1121345 ssh_runner.go:195] Run: crio --version
	I0328 00:52:24.021807 1121345 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0328 00:52:24.023141 1121345 main.go:141] libmachine: (flannel-443419) Calling .GetIP
	I0328 00:52:24.026154 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:24.026569 1121345 main.go:141] libmachine: (flannel-443419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:e8:02", ip: ""} in network mk-flannel-443419: {Iface:virbr4 ExpiryTime:2024-03-28 01:52:13 +0000 UTC Type:0 Mac:52:54:00:89:e8:02 Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:flannel-443419 Clientid:01:52:54:00:89:e8:02}
	I0328 00:52:24.026599 1121345 main.go:141] libmachine: (flannel-443419) DBG | domain flannel-443419 has defined IP address 192.168.72.7 and MAC address 52:54:00:89:e8:02 in network mk-flannel-443419
	I0328 00:52:24.026841 1121345 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0328 00:52:24.031324 1121345 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 00:52:24.044268 1121345 kubeadm.go:877] updating cluster {Name:flannel-443419 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:flannel-443419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.7 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 00:52:24.044370 1121345 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 00:52:24.044415 1121345 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 00:52:24.085807 1121345 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0328 00:52:24.085890 1121345 ssh_runner.go:195] Run: which lz4
	I0328 00:52:24.090582 1121345 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0328 00:52:24.095221 1121345 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 00:52:24.095258 1121345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
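
	This sequence is the preload path: the runtime reports no preloaded images, `/preloaded.tar.lz4` does not exist on the guest, so the cached tarball is copied over SSH. A minimal sketch of that stat-then-copy decision is below; the paths mirror the log, but the helpers themselves are assumptions rather than minikube's code.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    const remoteTarball = "/preloaded.tar.lz4"

    // copyToGuest is an illustrative stand-in for the scp step performed by
    // minikube's ssh_runner; here it simply shells out to scp.
    func copyToGuest(local, host, remote string) error {
    	return exec.Command("scp", local, host+":"+remote).Run()
    }

    // ensurePreload copies the cached preload tarball to the guest unless the
    // guest already has it (checked with a remote stat over ssh).
    func ensurePreload(localTarball, host string) error {
    	if err := exec.Command("ssh", host, "stat", remoteTarball).Run(); err == nil {
    		return nil // already present, nothing to do
    	}
    	if _, err := os.Stat(localTarball); err != nil {
    		return fmt.Errorf("local preload missing: %w", err)
    	}
    	return copyToGuest(localTarball, host, remoteTarball)
    }

    func main() {
    	err := ensurePreload(
    		"/home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4",
    		"docker@192.168.72.7",
    	)
    	fmt.Println(err)
    }
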
	I0328 00:52:24.904721 1123084 start.go:364] duration metric: took 6.591677107s to acquireMachinesLock for "bridge-443419"
	I0328 00:52:24.904807 1123084 start.go:93] Provisioning new machine with config: &{Name:bridge-443419 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:bridge-443419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 00:52:24.904958 1123084 start.go:125] createHost starting for "" (driver="kvm2")
	I0328 00:52:24.161434 1122507 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 00:52:24.161469 1122507 machine.go:97] duration metric: took 1.839646592s to provisionDockerMachine
	I0328 00:52:24.161484 1122507 start.go:293] postStartSetup for "kubernetes-upgrade-615158" (driver="kvm2")
	I0328 00:52:24.161503 1122507 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 00:52:24.161531 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .DriverName
	I0328 00:52:24.161929 1122507 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 00:52:24.161974 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHHostname
	I0328 00:52:24.165176 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:52:24.165580 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:31:7a", ip: ""} in network mk-kubernetes-upgrade-615158: {Iface:virbr2 ExpiryTime:2024-03-28 01:47:12 +0000 UTC Type:0 Mac:52:54:00:9b:31:7a Iaid: IPaddr:192.168.50.160 Prefix:24 Hostname:kubernetes-upgrade-615158 Clientid:01:52:54:00:9b:31:7a}
	I0328 00:52:24.165616 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:52:24.165808 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHPort
	I0328 00:52:24.166004 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHKeyPath
	I0328 00:52:24.166153 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHUsername
	I0328 00:52:24.166281 1122507 sshutil.go:53] new ssh client: &{IP:192.168.50.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/kubernetes-upgrade-615158/id_rsa Username:docker}
	I0328 00:52:24.352857 1122507 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 00:52:24.390676 1122507 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 00:52:24.390724 1122507 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 00:52:24.390815 1122507 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 00:52:24.390915 1122507 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 00:52:24.391039 1122507 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 00:52:24.475701 1122507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 00:52:24.600726 1122507 start.go:296] duration metric: took 439.216627ms for postStartSetup
	I0328 00:52:24.600780 1122507 fix.go:56] duration metric: took 2.304870003s for fixHost
	I0328 00:52:24.600809 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHHostname
	I0328 00:52:24.605364 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:52:24.605876 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:31:7a", ip: ""} in network mk-kubernetes-upgrade-615158: {Iface:virbr2 ExpiryTime:2024-03-28 01:47:12 +0000 UTC Type:0 Mac:52:54:00:9b:31:7a Iaid: IPaddr:192.168.50.160 Prefix:24 Hostname:kubernetes-upgrade-615158 Clientid:01:52:54:00:9b:31:7a}
	I0328 00:52:24.605919 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:52:24.606294 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHPort
	I0328 00:52:24.606539 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHKeyPath
	I0328 00:52:24.606756 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHKeyPath
	I0328 00:52:24.606944 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHUsername
	I0328 00:52:24.607170 1122507 main.go:141] libmachine: Using SSH client type: native
	I0328 00:52:24.607411 1122507 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.160 22 <nil> <nil>}
	I0328 00:52:24.607431 1122507 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 00:52:24.904528 1122507 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587144.902121516
	
	I0328 00:52:24.904558 1122507 fix.go:216] guest clock: 1711587144.902121516
	I0328 00:52:24.904570 1122507 fix.go:229] Guest: 2024-03-28 00:52:24.902121516 +0000 UTC Remote: 2024-03-28 00:52:24.600785312 +0000 UTC m=+11.447741628 (delta=301.336204ms)
	I0328 00:52:24.904606 1122507 fix.go:200] guest clock delta is within tolerance: 301.336204ms
	I0328 00:52:24.904615 1122507 start.go:83] releasing machines lock for "kubernetes-upgrade-615158", held for 2.608739945s
	I0328 00:52:24.904656 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .DriverName
	I0328 00:52:24.905009 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetIP
	I0328 00:52:24.911212 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:52:24.911716 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:31:7a", ip: ""} in network mk-kubernetes-upgrade-615158: {Iface:virbr2 ExpiryTime:2024-03-28 01:47:12 +0000 UTC Type:0 Mac:52:54:00:9b:31:7a Iaid: IPaddr:192.168.50.160 Prefix:24 Hostname:kubernetes-upgrade-615158 Clientid:01:52:54:00:9b:31:7a}
	I0328 00:52:24.911754 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:52:24.914764 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .DriverName
	I0328 00:52:24.915446 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .DriverName
	I0328 00:52:24.915697 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .DriverName
	I0328 00:52:24.915828 1122507 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 00:52:24.915879 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHHostname
	I0328 00:52:24.916307 1122507 ssh_runner.go:195] Run: cat /version.json
	I0328 00:52:24.916337 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHHostname
	I0328 00:52:24.919960 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:52:24.920523 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:31:7a", ip: ""} in network mk-kubernetes-upgrade-615158: {Iface:virbr2 ExpiryTime:2024-03-28 01:47:12 +0000 UTC Type:0 Mac:52:54:00:9b:31:7a Iaid: IPaddr:192.168.50.160 Prefix:24 Hostname:kubernetes-upgrade-615158 Clientid:01:52:54:00:9b:31:7a}
	I0328 00:52:24.920551 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:52:24.920720 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHPort
	I0328 00:52:24.920907 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:52:24.920952 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHKeyPath
	I0328 00:52:24.921121 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHUsername
	I0328 00:52:24.921389 1122507 sshutil.go:53] new ssh client: &{IP:192.168.50.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/kubernetes-upgrade-615158/id_rsa Username:docker}
	I0328 00:52:24.921741 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:31:7a", ip: ""} in network mk-kubernetes-upgrade-615158: {Iface:virbr2 ExpiryTime:2024-03-28 01:47:12 +0000 UTC Type:0 Mac:52:54:00:9b:31:7a Iaid: IPaddr:192.168.50.160 Prefix:24 Hostname:kubernetes-upgrade-615158 Clientid:01:52:54:00:9b:31:7a}
	I0328 00:52:24.921761 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:52:24.921807 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHPort
	I0328 00:52:24.921972 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHKeyPath
	I0328 00:52:24.922124 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetSSHUsername
	I0328 00:52:24.922276 1122507 sshutil.go:53] new ssh client: &{IP:192.168.50.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/kubernetes-upgrade-615158/id_rsa Username:docker}
	I0328 00:52:25.089012 1122507 ssh_runner.go:195] Run: systemctl --version
	I0328 00:52:25.108019 1122507 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 00:52:25.375950 1122507 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 00:52:25.386420 1122507 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 00:52:25.386501 1122507 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 00:52:25.403624 1122507 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0328 00:52:25.403651 1122507 start.go:494] detecting cgroup driver to use...
	I0328 00:52:25.403718 1122507 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 00:52:25.436096 1122507 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 00:52:25.467977 1122507 docker.go:217] disabling cri-docker service (if available) ...
	I0328 00:52:25.468056 1122507 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 00:52:25.493899 1122507 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 00:52:25.526484 1122507 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 00:52:25.772069 1122507 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 00:52:25.947878 1122507 docker.go:233] disabling docker service ...
	I0328 00:52:25.947967 1122507 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 00:52:25.981574 1122507 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 00:52:26.006738 1122507 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 00:52:26.216518 1122507 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 00:52:26.456234 1122507 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 00:52:26.478667 1122507 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 00:52:26.512546 1122507 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 00:52:26.512624 1122507 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:52:26.532994 1122507 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 00:52:26.533062 1122507 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:52:26.551489 1122507 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:52:26.570015 1122507 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:52:26.589762 1122507 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 00:52:26.607089 1122507 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:52:26.625881 1122507 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:52:26.645394 1122507 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:52:26.658924 1122507 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 00:52:26.674415 1122507 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 00:52:26.691792 1122507 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:52:26.944439 1122507 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 00:52:27.445499 1122507 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 00:52:27.445586 1122507 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 00:52:27.455835 1122507 start.go:562] Will wait 60s for crictl version
	I0328 00:52:27.455907 1122507 ssh_runner.go:195] Run: which crictl
	I0328 00:52:27.464705 1122507 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 00:52:27.597871 1122507 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 00:52:27.597949 1122507 ssh_runner.go:195] Run: crio --version
	I0328 00:52:27.770334 1122507 ssh_runner.go:195] Run: crio --version
	I0328 00:52:27.881754 1122507 out.go:177] * Preparing Kubernetes v1.30.0-beta.0 on CRI-O 1.29.1 ...
	I0328 00:52:27.883244 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) Calling .GetIP
	I0328 00:52:27.886531 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:52:27.887021 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:31:7a", ip: ""} in network mk-kubernetes-upgrade-615158: {Iface:virbr2 ExpiryTime:2024-03-28 01:47:12 +0000 UTC Type:0 Mac:52:54:00:9b:31:7a Iaid: IPaddr:192.168.50.160 Prefix:24 Hostname:kubernetes-upgrade-615158 Clientid:01:52:54:00:9b:31:7a}
	I0328 00:52:27.887051 1122507 main.go:141] libmachine: (kubernetes-upgrade-615158) DBG | domain kubernetes-upgrade-615158 has defined IP address 192.168.50.160 and MAC address 52:54:00:9b:31:7a in network mk-kubernetes-upgrade-615158
	I0328 00:52:27.887452 1122507 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0328 00:52:27.895261 1122507 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-615158 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.0-beta.0 ClusterName:kubernetes-upgrade-615158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.160 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 00:52:27.895388 1122507 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0328 00:52:27.895460 1122507 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 00:52:27.954781 1122507 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 00:52:27.954810 1122507 crio.go:433] Images already preloaded, skipping extraction
	I0328 00:52:27.954878 1122507 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 00:52:28.012387 1122507 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 00:52:28.012469 1122507 cache_images.go:84] Images are preloaded, skipping loading
	I0328 00:52:28.012484 1122507 kubeadm.go:928] updating node { 192.168.50.160 8443 v1.30.0-beta.0 crio true true} ...
	I0328 00:52:28.012632 1122507 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-615158 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.160
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-beta.0 ClusterName:kubernetes-upgrade-615158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 00:52:28.012729 1122507 ssh_runner.go:195] Run: crio config
	I0328 00:52:28.097176 1122507 cni.go:84] Creating CNI manager for ""
	I0328 00:52:28.097202 1122507 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 00:52:28.097211 1122507 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 00:52:28.097234 1122507 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.160 APIServerPort:8443 KubernetesVersion:v1.30.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-615158 NodeName:kubernetes-upgrade-615158 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.160"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.160 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/ce
rts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 00:52:28.097361 1122507 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.160
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-615158"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.160
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.160"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 00:52:28.097466 1122507 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-beta.0
	I0328 00:52:28.110472 1122507 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 00:52:28.110558 1122507 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 00:52:28.122976 1122507 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I0328 00:52:28.142807 1122507 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0328 00:52:28.164668 1122507 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
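The kubeadm config rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new; later in this log (around 00:52:30 for the flannel profile) minikube copies it into place and runs kubeadm init against it. A minimal bash sketch of exercising the same config by hand without standing up a cluster (hypothetical manual invocation, not something minikube itself runs):

    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run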
	I0328 00:52:28.186951 1122507 ssh_runner.go:195] Run: grep 192.168.50.160	control-plane.minikube.internal$ /etc/hosts
	I0328 00:52:28.191772 1122507 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:52:24.906902 1123084 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0328 00:52:24.907089 1123084 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 00:52:24.907118 1123084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:52:24.929774 1123084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39815
	I0328 00:52:24.930321 1123084 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:52:24.931160 1123084 main.go:141] libmachine: Using API Version  1
	I0328 00:52:24.931187 1123084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:52:24.932401 1123084 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:52:24.932815 1123084 main.go:141] libmachine: (bridge-443419) Calling .GetMachineName
	I0328 00:52:24.933746 1123084 main.go:141] libmachine: (bridge-443419) Calling .DriverName
	I0328 00:52:24.933943 1123084 start.go:159] libmachine.API.Create for "bridge-443419" (driver="kvm2")
	I0328 00:52:24.933975 1123084 client.go:168] LocalClient.Create starting
	I0328 00:52:24.934015 1123084 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem
	I0328 00:52:24.934055 1123084 main.go:141] libmachine: Decoding PEM data...
	I0328 00:52:24.934075 1123084 main.go:141] libmachine: Parsing certificate...
	I0328 00:52:24.934149 1123084 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem
	I0328 00:52:24.934174 1123084 main.go:141] libmachine: Decoding PEM data...
	I0328 00:52:24.934187 1123084 main.go:141] libmachine: Parsing certificate...
	I0328 00:52:24.934207 1123084 main.go:141] libmachine: Running pre-create checks...
	I0328 00:52:24.934215 1123084 main.go:141] libmachine: (bridge-443419) Calling .PreCreateCheck
	I0328 00:52:24.934642 1123084 main.go:141] libmachine: (bridge-443419) Calling .GetConfigRaw
	I0328 00:52:24.935234 1123084 main.go:141] libmachine: Creating machine...
	I0328 00:52:24.935254 1123084 main.go:141] libmachine: (bridge-443419) Calling .Create
	I0328 00:52:24.935396 1123084 main.go:141] libmachine: (bridge-443419) Creating KVM machine...
	I0328 00:52:24.937017 1123084 main.go:141] libmachine: (bridge-443419) DBG | found existing default KVM network
	I0328 00:52:24.939406 1123084 main.go:141] libmachine: (bridge-443419) DBG | I0328 00:52:24.939215 1123202 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000214fd0}
	I0328 00:52:24.939460 1123084 main.go:141] libmachine: (bridge-443419) DBG | created network xml: 
	I0328 00:52:24.939486 1123084 main.go:141] libmachine: (bridge-443419) DBG | <network>
	I0328 00:52:24.939504 1123084 main.go:141] libmachine: (bridge-443419) DBG |   <name>mk-bridge-443419</name>
	I0328 00:52:24.939516 1123084 main.go:141] libmachine: (bridge-443419) DBG |   <dns enable='no'/>
	I0328 00:52:24.939524 1123084 main.go:141] libmachine: (bridge-443419) DBG |   
	I0328 00:52:24.939536 1123084 main.go:141] libmachine: (bridge-443419) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0328 00:52:24.939549 1123084 main.go:141] libmachine: (bridge-443419) DBG |     <dhcp>
	I0328 00:52:24.939561 1123084 main.go:141] libmachine: (bridge-443419) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0328 00:52:24.939573 1123084 main.go:141] libmachine: (bridge-443419) DBG |     </dhcp>
	I0328 00:52:24.939584 1123084 main.go:141] libmachine: (bridge-443419) DBG |   </ip>
	I0328 00:52:24.939591 1123084 main.go:141] libmachine: (bridge-443419) DBG |   
	I0328 00:52:24.939601 1123084 main.go:141] libmachine: (bridge-443419) DBG | </network>
	I0328 00:52:24.939611 1123084 main.go:141] libmachine: (bridge-443419) DBG | 
	I0328 00:52:24.946498 1123084 main.go:141] libmachine: (bridge-443419) DBG | trying to create private KVM network mk-bridge-443419 192.168.39.0/24...
	I0328 00:52:25.064674 1123084 main.go:141] libmachine: (bridge-443419) DBG | private KVM network mk-bridge-443419 192.168.39.0/24 created
	I0328 00:52:25.064809 1123084 main.go:141] libmachine: (bridge-443419) Setting up store path in /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/bridge-443419 ...
	I0328 00:52:25.064867 1123084 main.go:141] libmachine: (bridge-443419) Building disk image from file:///home/jenkins/minikube-integration/18485-1069254/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso
	I0328 00:52:25.064923 1123084 main.go:141] libmachine: (bridge-443419) DBG | I0328 00:52:25.064896 1123202 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18485-1069254/.minikube
	I0328 00:52:25.065035 1123084 main.go:141] libmachine: (bridge-443419) Downloading /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18485-1069254/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0328 00:52:25.394556 1123084 main.go:141] libmachine: (bridge-443419) DBG | I0328 00:52:25.394440 1123202 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/bridge-443419/id_rsa...
	I0328 00:52:25.690356 1123084 main.go:141] libmachine: (bridge-443419) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/bridge-443419 (perms=drwx------)
	I0328 00:52:25.690403 1123084 main.go:141] libmachine: (bridge-443419) DBG | I0328 00:52:25.689340 1123202 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/bridge-443419/bridge-443419.rawdisk...
	I0328 00:52:25.690419 1123084 main.go:141] libmachine: (bridge-443419) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254/.minikube/machines (perms=drwxr-xr-x)
	I0328 00:52:25.690436 1123084 main.go:141] libmachine: (bridge-443419) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254/.minikube (perms=drwxr-xr-x)
	I0328 00:52:25.690447 1123084 main.go:141] libmachine: (bridge-443419) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254 (perms=drwxrwxr-x)
	I0328 00:52:25.690459 1123084 main.go:141] libmachine: (bridge-443419) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0328 00:52:25.690468 1123084 main.go:141] libmachine: (bridge-443419) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0328 00:52:25.690482 1123084 main.go:141] libmachine: (bridge-443419) Creating domain...
	I0328 00:52:25.690488 1123084 main.go:141] libmachine: (bridge-443419) DBG | Writing magic tar header
	I0328 00:52:25.690499 1123084 main.go:141] libmachine: (bridge-443419) DBG | Writing SSH key tar header
	I0328 00:52:25.690516 1123084 main.go:141] libmachine: (bridge-443419) DBG | I0328 00:52:25.689520 1123202 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/bridge-443419 ...
	I0328 00:52:25.690531 1123084 main.go:141] libmachine: (bridge-443419) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/bridge-443419
	I0328 00:52:25.690546 1123084 main.go:141] libmachine: (bridge-443419) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines
	I0328 00:52:25.690561 1123084 main.go:141] libmachine: (bridge-443419) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254/.minikube
	I0328 00:52:25.690573 1123084 main.go:141] libmachine: (bridge-443419) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254
	I0328 00:52:25.690645 1123084 main.go:141] libmachine: (bridge-443419) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0328 00:52:25.690674 1123084 main.go:141] libmachine: (bridge-443419) DBG | Checking permissions on dir: /home/jenkins
	I0328 00:52:25.690690 1123084 main.go:141] libmachine: (bridge-443419) DBG | Checking permissions on dir: /home
	I0328 00:52:25.690720 1123084 main.go:141] libmachine: (bridge-443419) DBG | Skipping /home - not owner
	I0328 00:52:25.691390 1123084 main.go:141] libmachine: (bridge-443419) define libvirt domain using xml: 
	I0328 00:52:25.691421 1123084 main.go:141] libmachine: (bridge-443419) <domain type='kvm'>
	I0328 00:52:25.691435 1123084 main.go:141] libmachine: (bridge-443419)   <name>bridge-443419</name>
	I0328 00:52:25.691444 1123084 main.go:141] libmachine: (bridge-443419)   <memory unit='MiB'>3072</memory>
	I0328 00:52:25.691454 1123084 main.go:141] libmachine: (bridge-443419)   <vcpu>2</vcpu>
	I0328 00:52:25.691461 1123084 main.go:141] libmachine: (bridge-443419)   <features>
	I0328 00:52:25.691470 1123084 main.go:141] libmachine: (bridge-443419)     <acpi/>
	I0328 00:52:25.691477 1123084 main.go:141] libmachine: (bridge-443419)     <apic/>
	I0328 00:52:25.691484 1123084 main.go:141] libmachine: (bridge-443419)     <pae/>
	I0328 00:52:25.691493 1123084 main.go:141] libmachine: (bridge-443419)     
	I0328 00:52:25.691500 1123084 main.go:141] libmachine: (bridge-443419)   </features>
	I0328 00:52:25.691507 1123084 main.go:141] libmachine: (bridge-443419)   <cpu mode='host-passthrough'>
	I0328 00:52:25.691514 1123084 main.go:141] libmachine: (bridge-443419)   
	I0328 00:52:25.691519 1123084 main.go:141] libmachine: (bridge-443419)   </cpu>
	I0328 00:52:25.691527 1123084 main.go:141] libmachine: (bridge-443419)   <os>
	I0328 00:52:25.691534 1123084 main.go:141] libmachine: (bridge-443419)     <type>hvm</type>
	I0328 00:52:25.691541 1123084 main.go:141] libmachine: (bridge-443419)     <boot dev='cdrom'/>
	I0328 00:52:25.691548 1123084 main.go:141] libmachine: (bridge-443419)     <boot dev='hd'/>
	I0328 00:52:25.691555 1123084 main.go:141] libmachine: (bridge-443419)     <bootmenu enable='no'/>
	I0328 00:52:25.691568 1123084 main.go:141] libmachine: (bridge-443419)   </os>
	I0328 00:52:25.691575 1123084 main.go:141] libmachine: (bridge-443419)   <devices>
	I0328 00:52:25.691583 1123084 main.go:141] libmachine: (bridge-443419)     <disk type='file' device='cdrom'>
	I0328 00:52:25.691596 1123084 main.go:141] libmachine: (bridge-443419)       <source file='/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/bridge-443419/boot2docker.iso'/>
	I0328 00:52:25.691606 1123084 main.go:141] libmachine: (bridge-443419)       <target dev='hdc' bus='scsi'/>
	I0328 00:52:25.691614 1123084 main.go:141] libmachine: (bridge-443419)       <readonly/>
	I0328 00:52:25.691621 1123084 main.go:141] libmachine: (bridge-443419)     </disk>
	I0328 00:52:25.691631 1123084 main.go:141] libmachine: (bridge-443419)     <disk type='file' device='disk'>
	I0328 00:52:25.691639 1123084 main.go:141] libmachine: (bridge-443419)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0328 00:52:25.691649 1123084 main.go:141] libmachine: (bridge-443419)       <source file='/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/bridge-443419/bridge-443419.rawdisk'/>
	I0328 00:52:25.691657 1123084 main.go:141] libmachine: (bridge-443419)       <target dev='hda' bus='virtio'/>
	I0328 00:52:25.691664 1123084 main.go:141] libmachine: (bridge-443419)     </disk>
	I0328 00:52:25.691671 1123084 main.go:141] libmachine: (bridge-443419)     <interface type='network'>
	I0328 00:52:25.691680 1123084 main.go:141] libmachine: (bridge-443419)       <source network='mk-bridge-443419'/>
	I0328 00:52:25.691686 1123084 main.go:141] libmachine: (bridge-443419)       <model type='virtio'/>
	I0328 00:52:25.691694 1123084 main.go:141] libmachine: (bridge-443419)     </interface>
	I0328 00:52:25.691700 1123084 main.go:141] libmachine: (bridge-443419)     <interface type='network'>
	I0328 00:52:25.691708 1123084 main.go:141] libmachine: (bridge-443419)       <source network='default'/>
	I0328 00:52:25.691720 1123084 main.go:141] libmachine: (bridge-443419)       <model type='virtio'/>
	I0328 00:52:25.691728 1123084 main.go:141] libmachine: (bridge-443419)     </interface>
	I0328 00:52:25.691744 1123084 main.go:141] libmachine: (bridge-443419)     <serial type='pty'>
	I0328 00:52:25.691752 1123084 main.go:141] libmachine: (bridge-443419)       <target port='0'/>
	I0328 00:52:25.691758 1123084 main.go:141] libmachine: (bridge-443419)     </serial>
	I0328 00:52:25.691767 1123084 main.go:141] libmachine: (bridge-443419)     <console type='pty'>
	I0328 00:52:25.691774 1123084 main.go:141] libmachine: (bridge-443419)       <target type='serial' port='0'/>
	I0328 00:52:25.691783 1123084 main.go:141] libmachine: (bridge-443419)     </console>
	I0328 00:52:25.691791 1123084 main.go:141] libmachine: (bridge-443419)     <rng model='virtio'>
	I0328 00:52:25.691800 1123084 main.go:141] libmachine: (bridge-443419)       <backend model='random'>/dev/random</backend>
	I0328 00:52:25.691806 1123084 main.go:141] libmachine: (bridge-443419)     </rng>
	I0328 00:52:25.691813 1123084 main.go:141] libmachine: (bridge-443419)     
	I0328 00:52:25.691818 1123084 main.go:141] libmachine: (bridge-443419)     
	I0328 00:52:25.691826 1123084 main.go:141] libmachine: (bridge-443419)   </devices>
	I0328 00:52:25.691833 1123084 main.go:141] libmachine: (bridge-443419) </domain>
	I0328 00:52:25.691842 1123084 main.go:141] libmachine: (bridge-443419) 
	I0328 00:52:25.697485 1123084 main.go:141] libmachine: (bridge-443419) DBG | domain bridge-443419 has defined MAC address 52:54:00:37:36:56 in network default
	I0328 00:52:25.698261 1123084 main.go:141] libmachine: (bridge-443419) Ensuring networks are active...
	I0328 00:52:25.698294 1123084 main.go:141] libmachine: (bridge-443419) DBG | domain bridge-443419 has defined MAC address 52:54:00:22:b8:d8 in network mk-bridge-443419
	I0328 00:52:25.699222 1123084 main.go:141] libmachine: (bridge-443419) Ensuring network default is active
	I0328 00:52:25.699551 1123084 main.go:141] libmachine: (bridge-443419) Ensuring network mk-bridge-443419 is active
	I0328 00:52:25.700435 1123084 main.go:141] libmachine: (bridge-443419) Getting domain xml...
	I0328 00:52:25.701459 1123084 main.go:141] libmachine: (bridge-443419) Creating domain...
	I0328 00:52:27.409077 1123084 main.go:141] libmachine: (bridge-443419) Waiting to get IP...
	I0328 00:52:27.410314 1123084 main.go:141] libmachine: (bridge-443419) DBG | domain bridge-443419 has defined MAC address 52:54:00:22:b8:d8 in network mk-bridge-443419
	I0328 00:52:27.410879 1123084 main.go:141] libmachine: (bridge-443419) DBG | unable to find current IP address of domain bridge-443419 in network mk-bridge-443419
	I0328 00:52:27.410978 1123084 main.go:141] libmachine: (bridge-443419) DBG | I0328 00:52:27.410895 1123202 retry.go:31] will retry after 263.382676ms: waiting for machine to come up
	I0328 00:52:27.676483 1123084 main.go:141] libmachine: (bridge-443419) DBG | domain bridge-443419 has defined MAC address 52:54:00:22:b8:d8 in network mk-bridge-443419
	I0328 00:52:27.677058 1123084 main.go:141] libmachine: (bridge-443419) DBG | unable to find current IP address of domain bridge-443419 in network mk-bridge-443419
	I0328 00:52:27.677096 1123084 main.go:141] libmachine: (bridge-443419) DBG | I0328 00:52:27.676954 1123202 retry.go:31] will retry after 256.139552ms: waiting for machine to come up
	I0328 00:52:27.934628 1123084 main.go:141] libmachine: (bridge-443419) DBG | domain bridge-443419 has defined MAC address 52:54:00:22:b8:d8 in network mk-bridge-443419
	I0328 00:52:27.935258 1123084 main.go:141] libmachine: (bridge-443419) DBG | unable to find current IP address of domain bridge-443419 in network mk-bridge-443419
	I0328 00:52:27.935286 1123084 main.go:141] libmachine: (bridge-443419) DBG | I0328 00:52:27.935205 1123202 retry.go:31] will retry after 438.736848ms: waiting for machine to come up
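While libmachine polls for a DHCP lease above, the same state can be inspected directly with virsh. A minimal sketch (hypothetical manual commands against the qemu:///system URI shown in the cluster config):

    virsh --connect qemu:///system net-list --all | grep mk-bridge-443419
    virsh --connect qemu:///system net-dhcp-leases mk-bridge-443419       # the lease being waited for
    virsh --connect qemu:///system domifaddr bridge-443419 --source lease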
	I0328 00:52:25.834875 1121345 crio.go:462] duration metric: took 1.744332012s to copy over tarball
	I0328 00:52:25.834983 1121345 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 00:52:28.868735 1121345 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.033712331s)
	I0328 00:52:28.868783 1121345 crio.go:469] duration metric: took 3.03386949s to extract the tarball
	I0328 00:52:28.868794 1121345 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0328 00:52:28.928151 1121345 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 00:52:28.985249 1121345 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 00:52:28.985279 1121345 cache_images.go:84] Images are preloaded, skipping loading
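A brief bash sketch (assuming shell access to the node) of inspecting the preloaded image set this check looks at, using the same crictl call the log runs:

    sudo crictl images --output json | head -n 20
    sudo crictl images | grep -E 'kube-apiserver|kube-proxy|etcd|coredns|pause'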
	I0328 00:52:28.985289 1121345 kubeadm.go:928] updating node { 192.168.72.7 8443 v1.29.3 crio true true} ...
	I0328 00:52:28.985442 1121345 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-443419 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:flannel-443419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I0328 00:52:28.985554 1121345 ssh_runner.go:195] Run: crio config
	I0328 00:52:29.051937 1121345 cni.go:84] Creating CNI manager for "flannel"
	I0328 00:52:29.051965 1121345 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 00:52:29.052006 1121345 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.7 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-443419 NodeName:flannel-443419 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 00:52:29.052199 1121345 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-443419"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 00:52:29.052271 1121345 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 00:52:29.065953 1121345 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 00:52:29.066046 1121345 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 00:52:29.085970 1121345 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0328 00:52:29.115919 1121345 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 00:52:29.141741 1121345 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2152 bytes)
	I0328 00:52:29.174110 1121345 ssh_runner.go:195] Run: grep 192.168.72.7	control-plane.minikube.internal$ /etc/hosts
	I0328 00:52:29.180099 1121345 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 00:52:29.198181 1121345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:52:29.359095 1121345 ssh_runner.go:195] Run: sudo systemctl start kubelet
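A short bash sketch (paths and the expected IP taken from the log above; assuming shell access to the flannel-443419 VM) to confirm the kubelet drop-in and hosts entry written in the preceding steps:

    systemctl cat kubelet                               # unit file plus the 10-kubeadm.conf drop-in
    grep control-plane.minikube.internal /etc/hosts     # expect 192.168.72.7
    sudo systemctl is-active kubelet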
	I0328 00:52:29.379072 1121345 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419 for IP: 192.168.72.7
	I0328 00:52:29.379103 1121345 certs.go:194] generating shared ca certs ...
	I0328 00:52:29.379127 1121345 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:52:29.379312 1121345 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 00:52:29.379371 1121345 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 00:52:29.379386 1121345 certs.go:256] generating profile certs ...
	I0328 00:52:29.379461 1121345 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/client.key
	I0328 00:52:29.379481 1121345 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/client.crt with IP's: []
	I0328 00:52:29.815395 1121345 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/client.crt ...
	I0328 00:52:29.815423 1121345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/client.crt: {Name:mkf5e62635e5b324c422cae120f61ccace8f4d70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:52:29.815576 1121345 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/client.key ...
	I0328 00:52:29.815586 1121345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/client.key: {Name:mk10b8a8c97536a968405cc6c0badff748ab7956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:52:29.815690 1121345 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/apiserver.key.2c0b999d
	I0328 00:52:29.815709 1121345 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/apiserver.crt.2c0b999d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.7]
	I0328 00:52:30.066790 1121345 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/apiserver.crt.2c0b999d ...
	I0328 00:52:30.066824 1121345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/apiserver.crt.2c0b999d: {Name:mk5b3bad69875d34978099ee03454843a9b521e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:52:30.106579 1121345 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/apiserver.key.2c0b999d ...
	I0328 00:52:30.106636 1121345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/apiserver.key.2c0b999d: {Name:mk023c419a1f6c40c050f557e2ff82132ec6e533 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:52:30.106841 1121345 certs.go:381] copying /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/apiserver.crt.2c0b999d -> /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/apiserver.crt
	I0328 00:52:30.106966 1121345 certs.go:385] copying /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/apiserver.key.2c0b999d -> /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/apiserver.key
	I0328 00:52:30.107046 1121345 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/proxy-client.key
	I0328 00:52:30.107069 1121345 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/proxy-client.crt with IP's: []
	I0328 00:52:30.227397 1121345 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/proxy-client.crt ...
	I0328 00:52:30.227434 1121345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/proxy-client.crt: {Name:mk546d556f543ad05f912f7ad8ba629f5bdf9133 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:52:30.227618 1121345 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/proxy-client.key ...
	I0328 00:52:30.227636 1121345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/proxy-client.key: {Name:mk44a747ed5de77422c0cca1e19d4afb0ead25da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:52:30.227840 1121345 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 00:52:30.227894 1121345 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 00:52:30.227913 1121345 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 00:52:30.227950 1121345 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 00:52:30.227989 1121345 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 00:52:30.228021 1121345 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 00:52:30.228076 1121345 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 00:52:30.228783 1121345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 00:52:30.265542 1121345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 00:52:30.301630 1121345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 00:52:30.335191 1121345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 00:52:30.369959 1121345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0328 00:52:30.432820 1121345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0328 00:52:30.464646 1121345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 00:52:30.497912 1121345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0328 00:52:30.532587 1121345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 00:52:30.566352 1121345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 00:52:30.601631 1121345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 00:52:30.631736 1121345 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 00:52:30.654306 1121345 ssh_runner.go:195] Run: openssl version
	I0328 00:52:30.666700 1121345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 00:52:30.682305 1121345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:52:30.689635 1121345 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:52:30.689713 1121345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:52:30.700758 1121345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 00:52:30.716340 1121345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 00:52:30.733704 1121345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 00:52:30.740123 1121345 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 00:52:30.740204 1121345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 00:52:30.751316 1121345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 00:52:30.773659 1121345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 00:52:30.797940 1121345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 00:52:30.809040 1121345 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 00:52:30.809104 1121345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 00:52:30.817802 1121345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
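The 8-character names used for the symlinks above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash values; a minimal sketch of reproducing one of them by hand:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # symlink back to minikubeCA.pem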
	I0328 00:52:30.835896 1121345 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 00:52:30.842699 1121345 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0328 00:52:30.842756 1121345 kubeadm.go:391] StartCluster: {Name:flannel-443419 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3
ClusterName:flannel-443419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.7 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:52:30.842857 1121345 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 00:52:30.842911 1121345 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 00:52:30.896809 1121345 cri.go:89] found id: ""
	I0328 00:52:30.896893 1121345 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0328 00:52:30.911599 1121345 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 00:52:30.925900 1121345 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 00:52:30.939566 1121345 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 00:52:30.939597 1121345 kubeadm.go:156] found existing configuration files:
	
	I0328 00:52:30.939658 1121345 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 00:52:30.952906 1121345 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 00:52:30.952978 1121345 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 00:52:30.966677 1121345 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 00:52:30.976601 1121345 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 00:52:30.976677 1121345 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 00:52:30.987183 1121345 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 00:52:30.997269 1121345 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 00:52:30.997335 1121345 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 00:52:31.010093 1121345 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 00:52:31.020784 1121345 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 00:52:31.020863 1121345 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 00:52:31.031785 1121345 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 00:52:31.097750 1121345 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0328 00:52:31.098155 1121345 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 00:52:31.267749 1121345 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 00:52:31.267844 1121345 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 00:52:31.267922 1121345 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 00:52:31.591699 1121345 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
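The preflight hint above refers to kubeadm's image pre-pull subcommand. A hypothetical manual invocation (minikube itself skips this because the images are already preloaded, as logged at 00:52:28):

    sudo /var/lib/minikube/binaries/v1.29.3/kubeadm config images pull \
      --kubernetes-version v1.29.3 --cri-socket unix:///var/run/crio/crio.sock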
	I0328 00:52:28.341448 1122507 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 00:52:28.360959 1122507 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158 for IP: 192.168.50.160
	I0328 00:52:28.360989 1122507 certs.go:194] generating shared ca certs ...
	I0328 00:52:28.361020 1122507 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:52:28.361193 1122507 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 00:52:28.361252 1122507 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 00:52:28.361268 1122507 certs.go:256] generating profile certs ...
	I0328 00:52:28.361377 1122507 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/client.key
	I0328 00:52:28.361438 1122507 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/apiserver.key.3c04a02a
	I0328 00:52:28.361488 1122507 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/proxy-client.key
	I0328 00:52:28.361639 1122507 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 00:52:28.361679 1122507 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 00:52:28.361693 1122507 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 00:52:28.361723 1122507 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 00:52:28.361761 1122507 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 00:52:28.361790 1122507 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 00:52:28.361850 1122507 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 00:52:28.362564 1122507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 00:52:28.397465 1122507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 00:52:28.425405 1122507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 00:52:28.457335 1122507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 00:52:28.497962 1122507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0328 00:52:28.542224 1122507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 00:52:28.595599 1122507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 00:52:28.629765 1122507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kubernetes-upgrade-615158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0328 00:52:28.661551 1122507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 00:52:28.693217 1122507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 00:52:28.729434 1122507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 00:52:28.767675 1122507 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 00:52:28.793788 1122507 ssh_runner.go:195] Run: openssl version
	I0328 00:52:28.802263 1122507 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 00:52:28.819360 1122507 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:52:28.829298 1122507 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:52:28.829389 1122507 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:52:28.838060 1122507 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 00:52:28.854435 1122507 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 00:52:28.873002 1122507 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 00:52:28.878728 1122507 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 00:52:28.878818 1122507 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 00:52:28.886367 1122507 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 00:52:28.900469 1122507 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 00:52:28.923532 1122507 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 00:52:28.931131 1122507 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 00:52:28.931209 1122507 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 00:52:28.938782 1122507 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 00:52:28.954627 1122507 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 00:52:28.961258 1122507 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 00:52:28.968556 1122507 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 00:52:28.976441 1122507 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 00:52:28.986298 1122507 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 00:52:28.996354 1122507 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 00:52:29.004646 1122507 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
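	The sequence above copies the profile certificates onto the node, registers each CA under its OpenSSL subject-hash filename in /etc/ssl/certs, and then runs "openssl x509 -checkend 86400" against every control-plane certificate to confirm none of them expires within the next 24 hours. As a rough, hand-run illustration of those two checks (paths and the 86400-second window are taken from the log lines above; the echo message is only for readability, not part of the test):

	    # Print the subject hash used to name the /etc/ssl/certs/<hash>.0 symlink
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem

	    # Exit status 0 means the cert is still valid 86400s from now; non-zero means it expires within 24h
	    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo "apiserver-kubelet-client.crt valid for at least 24h"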
	I0328 00:52:29.013249 1122507 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-615158 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.30.0-beta.0 ClusterName:kubernetes-upgrade-615158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.160 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:52:29.013334 1122507 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 00:52:29.013463 1122507 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 00:52:29.066668 1122507 cri.go:89] found id: "ff43377ffd798c54cae03d76c7793738943a083c5261998129e38d70a025ef33"
	I0328 00:52:29.066695 1122507 cri.go:89] found id: "eb430449f89841b54d4db47be099541b462f715f7021597a18deda95f3209fc0"
	I0328 00:52:29.066701 1122507 cri.go:89] found id: "c63adcfe748b4f461d0b90dfc33d81a986edbd41f0f4066814c30805a8bf036f"
	I0328 00:52:29.066704 1122507 cri.go:89] found id: "962bef07988fa9bed76c7b197daa2963d6a8476dfcd29cd274d66b58ba2a307f"
	I0328 00:52:29.066708 1122507 cri.go:89] found id: ""
	I0328 00:52:29.066757 1122507 ssh_runner.go:195] Run: sudo runc list -f json
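	Before StartCluster proceeds, minikube asks CRI-O for any kube-system containers that already exist on the node; the four IDs found above are the exited attempt-1 control-plane containers that reappear later under "container status". A minimal sketch of the same lookup run by hand on the node (the label filter is copied from the log; crictl inspect is standard crictl, shown only as an illustration):

	    # List every kube-system container CRI-O knows about, running or exited
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

	    # Inspect one of the returned IDs to see its image, state, and restart-count annotation
	    sudo crictl inspect ff43377ffd798c54cae03d76c7793738943a083c5261998129e38d70a025ef33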
	
	
	==> CRI-O <==
	Mar 28 00:52:41 kubernetes-upgrade-615158 crio[1884]: time="2024-03-28 00:52:41.439802339Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711587161439275984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121235,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=deaaf076-3475-4010-83de-bb30e34b9c57 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:52:41 kubernetes-upgrade-615158 crio[1884]: time="2024-03-28 00:52:41.440610727Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a97ee8d2-fb5d-46ab-8c06-8f0405cb4a04 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:52:41 kubernetes-upgrade-615158 crio[1884]: time="2024-03-28 00:52:41.440810550Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a97ee8d2-fb5d-46ab-8c06-8f0405cb4a04 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:52:41 kubernetes-upgrade-615158 crio[1884]: time="2024-03-28 00:52:41.442557144Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef5194763932673885ef6d9abf9c0dd2d8271d2a7fa31395ce3fa9757b0ad0d1,PodSandboxId:4445bf618732a2d1917ef4fb80da899472f03bc18218f554db5e3e746c691c17,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_RUNNING,CreatedAt:1711587153314065799,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ed7d4b7e43d2b73acc7f2e7f8929f46,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.c
ontainer.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7378bed1cd49887613c9adc1316c5066be8f3754de4dd9399b5c04a2cabd7cf,PodSandboxId:2ed42601f98dd155add17ac633f88a2eaf2cae192d933df2ba4cc81f9c73afb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_RUNNING,CreatedAt:1711587153325017631,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64a41d778a725f680a503fc14986e62d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9432dd,io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7cfaf7dda66b3859f6c06d60de89685c822c122fd6f2498e8fef0f6614730b6,PodSandboxId:1e38d98a0aa69e1f6ad114e3c0d363c5dd7ea3c7cd5d26c78b3320d6db427f53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_RUNNING,CreatedAt:1711587153354419241,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa36ea0a69a598325f7a3c234743bab,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.r
estartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e0813c8654f4eb81c38381228c3db95749aef814f79bb10c935defe5881c373,PodSandboxId:d5ee1285f9f22a92f5815601536e4208a2c563a381518f0f79c2bed2b4fef941,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711587153330100112,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b451a57fb37b27b846d568551cf4b3a,},Annotations:map[string]string{io.kubernetes.container.hash: 83b14685,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb430449f89841b54d4db47be099541b462f715f7021597a18deda95f3209fc0,PodSandboxId:c4988f82bab3a43299dd44090394816ab8c3599128912de59df9764d57d43be4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711587144638191907,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b451a57fb37b27b846d568551cf4b3a,},Annotations:map[string]string{io.kubernetes.container.hash: 83b14685,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff43377ffd798c54cae03d76c7793738943a083c5261998129e38d70a025ef33,PodSandboxId:fa0f144c0927ff04ba57751c38683b78fee2eca8f2b7663a25050838c98ebee7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_EXITED,CreatedAt:1711587144675839507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa36ea0a69a598325f7a3c234743bab,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c63adcfe748b4f461d0b90dfc33d81a986edbd41f0f4066814c30805a8bf036f,PodSandboxId:fa5f336f24a60f638f234f1dfab7d158f32596a3cd6b5fa9820a616fa27a3875,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_EXITED,CreatedAt:1711587144612871210,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ed7d4b7e43d2b73acc7f2e7f8929f46,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962bef07988fa9bed76c7b197daa2963d6a8476dfcd29cd274d66b58ba2a307f,PodSandboxId:834543e71019c587b8b065c207d625b7aa9dc9d51162a0c5b9d2ac90819c25c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_EXITED,CreatedAt:1711587144463311776,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64a41d778a725f680a503fc14986e62d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9432dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a97ee8d2-fb5d-46ab-8c06-8f0405cb4a04 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:52:41 kubernetes-upgrade-615158 crio[1884]: time="2024-03-28 00:52:41.500120733Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=31202d04-72cc-4b8b-af0b-2c7981ea0466 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:52:41 kubernetes-upgrade-615158 crio[1884]: time="2024-03-28 00:52:41.500223805Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=31202d04-72cc-4b8b-af0b-2c7981ea0466 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:52:41 kubernetes-upgrade-615158 crio[1884]: time="2024-03-28 00:52:41.501954000Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=91a5a07e-5d24-48f0-bcea-1a9736d4a11e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:52:41 kubernetes-upgrade-615158 crio[1884]: time="2024-03-28 00:52:41.502562203Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711587161502532977,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121235,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91a5a07e-5d24-48f0-bcea-1a9736d4a11e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:52:41 kubernetes-upgrade-615158 crio[1884]: time="2024-03-28 00:52:41.503336995Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5bf32289-f188-4ba2-98dc-aec3cad3a144 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:52:41 kubernetes-upgrade-615158 crio[1884]: time="2024-03-28 00:52:41.503406267Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5bf32289-f188-4ba2-98dc-aec3cad3a144 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:52:41 kubernetes-upgrade-615158 crio[1884]: time="2024-03-28 00:52:41.503911655Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef5194763932673885ef6d9abf9c0dd2d8271d2a7fa31395ce3fa9757b0ad0d1,PodSandboxId:4445bf618732a2d1917ef4fb80da899472f03bc18218f554db5e3e746c691c17,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_RUNNING,CreatedAt:1711587153314065799,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ed7d4b7e43d2b73acc7f2e7f8929f46,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.c
ontainer.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7378bed1cd49887613c9adc1316c5066be8f3754de4dd9399b5c04a2cabd7cf,PodSandboxId:2ed42601f98dd155add17ac633f88a2eaf2cae192d933df2ba4cc81f9c73afb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_RUNNING,CreatedAt:1711587153325017631,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64a41d778a725f680a503fc14986e62d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9432dd,io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7cfaf7dda66b3859f6c06d60de89685c822c122fd6f2498e8fef0f6614730b6,PodSandboxId:1e38d98a0aa69e1f6ad114e3c0d363c5dd7ea3c7cd5d26c78b3320d6db427f53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_RUNNING,CreatedAt:1711587153354419241,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa36ea0a69a598325f7a3c234743bab,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.r
estartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e0813c8654f4eb81c38381228c3db95749aef814f79bb10c935defe5881c373,PodSandboxId:d5ee1285f9f22a92f5815601536e4208a2c563a381518f0f79c2bed2b4fef941,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711587153330100112,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b451a57fb37b27b846d568551cf4b3a,},Annotations:map[string]string{io.kubernetes.container.hash: 83b14685,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb430449f89841b54d4db47be099541b462f715f7021597a18deda95f3209fc0,PodSandboxId:c4988f82bab3a43299dd44090394816ab8c3599128912de59df9764d57d43be4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711587144638191907,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b451a57fb37b27b846d568551cf4b3a,},Annotations:map[string]string{io.kubernetes.container.hash: 83b14685,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff43377ffd798c54cae03d76c7793738943a083c5261998129e38d70a025ef33,PodSandboxId:fa0f144c0927ff04ba57751c38683b78fee2eca8f2b7663a25050838c98ebee7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_EXITED,CreatedAt:1711587144675839507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa36ea0a69a598325f7a3c234743bab,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c63adcfe748b4f461d0b90dfc33d81a986edbd41f0f4066814c30805a8bf036f,PodSandboxId:fa5f336f24a60f638f234f1dfab7d158f32596a3cd6b5fa9820a616fa27a3875,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_EXITED,CreatedAt:1711587144612871210,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ed7d4b7e43d2b73acc7f2e7f8929f46,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962bef07988fa9bed76c7b197daa2963d6a8476dfcd29cd274d66b58ba2a307f,PodSandboxId:834543e71019c587b8b065c207d625b7aa9dc9d51162a0c5b9d2ac90819c25c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_EXITED,CreatedAt:1711587144463311776,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64a41d778a725f680a503fc14986e62d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9432dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5bf32289-f188-4ba2-98dc-aec3cad3a144 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:52:41 kubernetes-upgrade-615158 crio[1884]: time="2024-03-28 00:52:41.563780165Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=12908c24-7f0c-4534-b2c0-f386c4376a20 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:52:41 kubernetes-upgrade-615158 crio[1884]: time="2024-03-28 00:52:41.563869895Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=12908c24-7f0c-4534-b2c0-f386c4376a20 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:52:41 kubernetes-upgrade-615158 crio[1884]: time="2024-03-28 00:52:41.564889366Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fb9dbdbb-ef77-49f1-9bf8-f0071e09b666 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:52:41 kubernetes-upgrade-615158 crio[1884]: time="2024-03-28 00:52:41.566004720Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711587161565975051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121235,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fb9dbdbb-ef77-49f1-9bf8-f0071e09b666 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:52:41 kubernetes-upgrade-615158 crio[1884]: time="2024-03-28 00:52:41.567139413Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a67fff03-41f6-4285-9615-45a3ee992abb name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:52:41 kubernetes-upgrade-615158 crio[1884]: time="2024-03-28 00:52:41.567366020Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a67fff03-41f6-4285-9615-45a3ee992abb name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:52:41 kubernetes-upgrade-615158 crio[1884]: time="2024-03-28 00:52:41.567861951Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef5194763932673885ef6d9abf9c0dd2d8271d2a7fa31395ce3fa9757b0ad0d1,PodSandboxId:4445bf618732a2d1917ef4fb80da899472f03bc18218f554db5e3e746c691c17,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_RUNNING,CreatedAt:1711587153314065799,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ed7d4b7e43d2b73acc7f2e7f8929f46,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.c
ontainer.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7378bed1cd49887613c9adc1316c5066be8f3754de4dd9399b5c04a2cabd7cf,PodSandboxId:2ed42601f98dd155add17ac633f88a2eaf2cae192d933df2ba4cc81f9c73afb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_RUNNING,CreatedAt:1711587153325017631,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64a41d778a725f680a503fc14986e62d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9432dd,io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7cfaf7dda66b3859f6c06d60de89685c822c122fd6f2498e8fef0f6614730b6,PodSandboxId:1e38d98a0aa69e1f6ad114e3c0d363c5dd7ea3c7cd5d26c78b3320d6db427f53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_RUNNING,CreatedAt:1711587153354419241,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa36ea0a69a598325f7a3c234743bab,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.r
estartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e0813c8654f4eb81c38381228c3db95749aef814f79bb10c935defe5881c373,PodSandboxId:d5ee1285f9f22a92f5815601536e4208a2c563a381518f0f79c2bed2b4fef941,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711587153330100112,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b451a57fb37b27b846d568551cf4b3a,},Annotations:map[string]string{io.kubernetes.container.hash: 83b14685,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb430449f89841b54d4db47be099541b462f715f7021597a18deda95f3209fc0,PodSandboxId:c4988f82bab3a43299dd44090394816ab8c3599128912de59df9764d57d43be4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711587144638191907,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b451a57fb37b27b846d568551cf4b3a,},Annotations:map[string]string{io.kubernetes.container.hash: 83b14685,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff43377ffd798c54cae03d76c7793738943a083c5261998129e38d70a025ef33,PodSandboxId:fa0f144c0927ff04ba57751c38683b78fee2eca8f2b7663a25050838c98ebee7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_EXITED,CreatedAt:1711587144675839507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa36ea0a69a598325f7a3c234743bab,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c63adcfe748b4f461d0b90dfc33d81a986edbd41f0f4066814c30805a8bf036f,PodSandboxId:fa5f336f24a60f638f234f1dfab7d158f32596a3cd6b5fa9820a616fa27a3875,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_EXITED,CreatedAt:1711587144612871210,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ed7d4b7e43d2b73acc7f2e7f8929f46,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962bef07988fa9bed76c7b197daa2963d6a8476dfcd29cd274d66b58ba2a307f,PodSandboxId:834543e71019c587b8b065c207d625b7aa9dc9d51162a0c5b9d2ac90819c25c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_EXITED,CreatedAt:1711587144463311776,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64a41d778a725f680a503fc14986e62d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9432dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a67fff03-41f6-4285-9615-45a3ee992abb name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:52:41 kubernetes-upgrade-615158 crio[1884]: time="2024-03-28 00:52:41.608746699Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=20fd72cf-381f-427f-9a28-773e9233a02d name=/runtime.v1.RuntimeService/Version
	Mar 28 00:52:41 kubernetes-upgrade-615158 crio[1884]: time="2024-03-28 00:52:41.608846313Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=20fd72cf-381f-427f-9a28-773e9233a02d name=/runtime.v1.RuntimeService/Version
	Mar 28 00:52:41 kubernetes-upgrade-615158 crio[1884]: time="2024-03-28 00:52:41.610657735Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=314e15e7-3a81-4906-be7d-8ab40517a946 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:52:41 kubernetes-upgrade-615158 crio[1884]: time="2024-03-28 00:52:41.611173705Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711587161611146960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121235,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=314e15e7-3a81-4906-be7d-8ab40517a946 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:52:41 kubernetes-upgrade-615158 crio[1884]: time="2024-03-28 00:52:41.612177868Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf2f916f-d0e7-47f3-97c6-2087157b999e name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:52:41 kubernetes-upgrade-615158 crio[1884]: time="2024-03-28 00:52:41.612243895Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf2f916f-d0e7-47f3-97c6-2087157b999e name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:52:41 kubernetes-upgrade-615158 crio[1884]: time="2024-03-28 00:52:41.630806215Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef5194763932673885ef6d9abf9c0dd2d8271d2a7fa31395ce3fa9757b0ad0d1,PodSandboxId:4445bf618732a2d1917ef4fb80da899472f03bc18218f554db5e3e746c691c17,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_RUNNING,CreatedAt:1711587153314065799,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ed7d4b7e43d2b73acc7f2e7f8929f46,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.c
ontainer.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7378bed1cd49887613c9adc1316c5066be8f3754de4dd9399b5c04a2cabd7cf,PodSandboxId:2ed42601f98dd155add17ac633f88a2eaf2cae192d933df2ba4cc81f9c73afb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_RUNNING,CreatedAt:1711587153325017631,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64a41d778a725f680a503fc14986e62d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9432dd,io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7cfaf7dda66b3859f6c06d60de89685c822c122fd6f2498e8fef0f6614730b6,PodSandboxId:1e38d98a0aa69e1f6ad114e3c0d363c5dd7ea3c7cd5d26c78b3320d6db427f53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_RUNNING,CreatedAt:1711587153354419241,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa36ea0a69a598325f7a3c234743bab,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.r
estartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e0813c8654f4eb81c38381228c3db95749aef814f79bb10c935defe5881c373,PodSandboxId:d5ee1285f9f22a92f5815601536e4208a2c563a381518f0f79c2bed2b4fef941,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711587153330100112,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b451a57fb37b27b846d568551cf4b3a,},Annotations:map[string]string{io.kubernetes.container.hash: 83b14685,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb430449f89841b54d4db47be099541b462f715f7021597a18deda95f3209fc0,PodSandboxId:c4988f82bab3a43299dd44090394816ab8c3599128912de59df9764d57d43be4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711587144638191907,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b451a57fb37b27b846d568551cf4b3a,},Annotations:map[string]string{io.kubernetes.container.hash: 83b14685,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff43377ffd798c54cae03d76c7793738943a083c5261998129e38d70a025ef33,PodSandboxId:fa0f144c0927ff04ba57751c38683b78fee2eca8f2b7663a25050838c98ebee7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_EXITED,CreatedAt:1711587144675839507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa36ea0a69a598325f7a3c234743bab,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c63adcfe748b4f461d0b90dfc33d81a986edbd41f0f4066814c30805a8bf036f,PodSandboxId:fa5f336f24a60f638f234f1dfab7d158f32596a3cd6b5fa9820a616fa27a3875,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_EXITED,CreatedAt:1711587144612871210,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ed7d4b7e43d2b73acc7f2e7f8929f46,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962bef07988fa9bed76c7b197daa2963d6a8476dfcd29cd274d66b58ba2a307f,PodSandboxId:834543e71019c587b8b065c207d625b7aa9dc9d51162a0c5b9d2ac90819c25c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_EXITED,CreatedAt:1711587144463311776,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-615158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64a41d778a725f680a503fc14986e62d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9432dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cf2f916f-d0e7-47f3-97c6-2087157b999e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f7cfaf7dda66b       746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac   8 seconds ago       Running             kube-scheduler            2                   1e38d98a0aa69       kube-scheduler-kubernetes-upgrade-615158
	0e0813c8654f4       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   8 seconds ago       Running             etcd                      2                   d5ee1285f9f22       etcd-kubernetes-upgrade-615158
	f7378bed1cd49       c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa   8 seconds ago       Running             kube-apiserver            2                   2ed42601f98dd       kube-apiserver-kubernetes-upgrade-615158
	ef51947639326       f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841   8 seconds ago       Running             kube-controller-manager   2                   4445bf618732a       kube-controller-manager-kubernetes-upgrade-615158
	ff43377ffd798       746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac   17 seconds ago      Exited              kube-scheduler            1                   fa0f144c0927f       kube-scheduler-kubernetes-upgrade-615158
	eb430449f8984       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   17 seconds ago      Exited              etcd                      1                   c4988f82bab3a       etcd-kubernetes-upgrade-615158
	c63adcfe748b4       f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841   17 seconds ago      Exited              kube-controller-manager   1                   fa5f336f24a60       kube-controller-manager-kubernetes-upgrade-615158
	962bef07988fa       c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa   17 seconds ago      Exited              kube-apiserver            1                   834543e71019c       kube-apiserver-kubernetes-upgrade-615158
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-615158
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-615158
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 00:52:09 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-615158
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 00:52:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 00:52:37 +0000   Thu, 28 Mar 2024 00:52:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 00:52:37 +0000   Thu, 28 Mar 2024 00:52:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 00:52:37 +0000   Thu, 28 Mar 2024 00:52:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 00:52:37 +0000   Thu, 28 Mar 2024 00:52:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.160
	  Hostname:    kubernetes-upgrade-615158
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1bc421b5ea8e4845b3524e4031f55ec8
	  System UUID:                1bc421b5-ea8e-4845-b352-4e4031f55ec8
	  Boot ID:                    95f30849-4e7a-4ce6-9c4e-7072ab6b37d8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-beta.0
	  Kube-Proxy Version:         v1.30.0-beta.0
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-615158                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         30s
	  kube-system                 kube-apiserver-kubernetes-upgrade-615158             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-615158    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-kubernetes-upgrade-615158             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (4%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 37s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s (x8 over 37s)  kubelet  Node kubernetes-upgrade-615158 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 37s)  kubelet  Node kubernetes-upgrade-615158 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x7 over 37s)  kubelet  Node kubernetes-upgrade-615158 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  37s                kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 9s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet  Node kubernetes-upgrade-615158 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet  Node kubernetes-upgrade-615158 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)    kubelet  Node kubernetes-upgrade-615158 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet  Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +1.817367] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.577191] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.060851] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059060] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.221174] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.130387] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.299823] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[Mar28 00:52] systemd-fstab-generator[733]: Ignoring "noauto" option for root device
	[  +0.062326] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.510191] systemd-fstab-generator[864]: Ignoring "noauto" option for root device
	[  +7.297877] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	[  +0.110402] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.225240] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.431958] systemd-fstab-generator[1804]: Ignoring "noauto" option for root device
	[  +0.242199] systemd-fstab-generator[1816]: Ignoring "noauto" option for root device
	[  +0.239721] systemd-fstab-generator[1830]: Ignoring "noauto" option for root device
	[  +0.200068] systemd-fstab-generator[1842]: Ignoring "noauto" option for root device
	[  +0.490303] systemd-fstab-generator[1871]: Ignoring "noauto" option for root device
	[  +1.471312] systemd-fstab-generator[2199]: Ignoring "noauto" option for root device
	[  +4.134340] systemd-fstab-generator[2322]: Ignoring "noauto" option for root device
	[  +0.078247] kauditd_printk_skb: 186 callbacks suppressed
	[  +6.468787] systemd-fstab-generator[2588]: Ignoring "noauto" option for root device
	[  +0.149765] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [0e0813c8654f4eb81c38381228c3db95749aef814f79bb10c935defe5881c373] <==
	{"level":"info","ts":"2024-03-28T00:52:33.699813Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"920bcdbcfb5048e5","local-member-id":"2a9aa266eb7ea815","added-peer-id":"2a9aa266eb7ea815","added-peer-peer-urls":["https://192.168.50.160:2380"]}
	{"level":"info","ts":"2024-03-28T00:52:33.700179Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"920bcdbcfb5048e5","local-member-id":"2a9aa266eb7ea815","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T00:52:33.70026Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T00:52:33.699312Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-28T00:52:33.707047Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-28T00:52:33.707185Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-28T00:52:33.730521Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-28T00:52:33.730771Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.160:2380"}
	{"level":"info","ts":"2024-03-28T00:52:33.731301Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.160:2380"}
	{"level":"info","ts":"2024-03-28T00:52:33.732024Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"2a9aa266eb7ea815","initial-advertise-peer-urls":["https://192.168.50.160:2380"],"listen-peer-urls":["https://192.168.50.160:2380"],"advertise-client-urls":["https://192.168.50.160:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.160:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-28T00:52:33.734911Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-28T00:52:35.181696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2a9aa266eb7ea815 is starting a new election at term 3"}
	{"level":"info","ts":"2024-03-28T00:52:35.18178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2a9aa266eb7ea815 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-03-28T00:52:35.181801Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2a9aa266eb7ea815 received MsgPreVoteResp from 2a9aa266eb7ea815 at term 3"}
	{"level":"info","ts":"2024-03-28T00:52:35.181812Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2a9aa266eb7ea815 became candidate at term 4"}
	{"level":"info","ts":"2024-03-28T00:52:35.181818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2a9aa266eb7ea815 received MsgVoteResp from 2a9aa266eb7ea815 at term 4"}
	{"level":"info","ts":"2024-03-28T00:52:35.181826Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2a9aa266eb7ea815 became leader at term 4"}
	{"level":"info","ts":"2024-03-28T00:52:35.181834Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2a9aa266eb7ea815 elected leader 2a9aa266eb7ea815 at term 4"}
	{"level":"info","ts":"2024-03-28T00:52:35.190014Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"2a9aa266eb7ea815","local-member-attributes":"{Name:kubernetes-upgrade-615158 ClientURLs:[https://192.168.50.160:2379]}","request-path":"/0/members/2a9aa266eb7ea815/attributes","cluster-id":"920bcdbcfb5048e5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-28T00:52:35.190029Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T00:52:35.19005Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T00:52:35.193152Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-28T00:52:35.195563Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-28T00:52:35.195625Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-28T00:52:35.195836Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.160:2379"}
	
	
	==> etcd [eb430449f89841b54d4db47be099541b462f715f7021597a18deda95f3209fc0] <==
	{"level":"info","ts":"2024-03-28T00:52:25.513176Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":290}
	{"level":"info","ts":"2024-03-28T00:52:25.571133Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-03-28T00:52:25.596041Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"2a9aa266eb7ea815","timeout":"7s"}
	{"level":"info","ts":"2024-03-28T00:52:25.596326Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"2a9aa266eb7ea815"}
	{"level":"info","ts":"2024-03-28T00:52:25.596402Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"2a9aa266eb7ea815","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-03-28T00:52:25.607837Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-03-28T00:52:25.608116Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-28T00:52:25.608219Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-28T00:52:25.608261Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-28T00:52:25.608714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2a9aa266eb7ea815 switched to configuration voters=(3069944658927724565)"}
	{"level":"info","ts":"2024-03-28T00:52:25.609638Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"920bcdbcfb5048e5","local-member-id":"2a9aa266eb7ea815","added-peer-id":"2a9aa266eb7ea815","added-peer-peer-urls":["https://192.168.50.160:2380"]}
	{"level":"info","ts":"2024-03-28T00:52:25.609848Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"920bcdbcfb5048e5","local-member-id":"2a9aa266eb7ea815","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T00:52:25.611782Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T00:52:25.617548Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-28T00:52:25.61775Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.160:2380"}
	{"level":"info","ts":"2024-03-28T00:52:25.61794Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.160:2380"}
	{"level":"info","ts":"2024-03-28T00:52:25.620026Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"2a9aa266eb7ea815","initial-advertise-peer-urls":["https://192.168.50.160:2380"],"listen-peer-urls":["https://192.168.50.160:2380"],"advertise-client-urls":["https://192.168.50.160:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.160:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-28T00:52:25.620104Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-28T00:52:26.999068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2a9aa266eb7ea815 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-28T00:52:26.999146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2a9aa266eb7ea815 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-28T00:52:26.99918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2a9aa266eb7ea815 received MsgPreVoteResp from 2a9aa266eb7ea815 at term 2"}
	{"level":"info","ts":"2024-03-28T00:52:26.999195Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2a9aa266eb7ea815 became candidate at term 3"}
	{"level":"info","ts":"2024-03-28T00:52:26.999203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2a9aa266eb7ea815 received MsgVoteResp from 2a9aa266eb7ea815 at term 3"}
	{"level":"info","ts":"2024-03-28T00:52:26.999214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2a9aa266eb7ea815 became leader at term 3"}
	{"level":"info","ts":"2024-03-28T00:52:26.999225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2a9aa266eb7ea815 elected leader 2a9aa266eb7ea815 at term 3"}
	
	
	==> kernel <==
	 00:52:41 up 1 min,  0 users,  load average: 1.60, 0.43, 0.15
	Linux kubernetes-upgrade-615158 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [962bef07988fa9bed76c7b197daa2963d6a8476dfcd29cd274d66b58ba2a307f] <==
	I0328 00:52:25.039800       1 options.go:221] external host was not specified, using 192.168.50.160
	I0328 00:52:25.041224       1 server.go:148] Version: v1.30.0-beta.0
	I0328 00:52:25.041282       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 00:52:26.078934       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0328 00:52:26.082322       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0328 00:52:26.083653       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0328 00:52:26.083744       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0328 00:52:26.083901       1 instance.go:299] Using reconciler: lease
	
	
	==> kube-apiserver [f7378bed1cd49887613c9adc1316c5066be8f3754de4dd9399b5c04a2cabd7cf] <==
	I0328 00:52:36.958693       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 00:52:36.958939       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0328 00:52:36.958987       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0328 00:52:37.011736       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0328 00:52:37.011909       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0328 00:52:37.011963       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0328 00:52:37.022251       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0328 00:52:37.022545       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0328 00:52:37.022645       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0328 00:52:37.022704       1 shared_informer.go:320] Caches are synced for configmaps
	I0328 00:52:37.033261       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0328 00:52:37.051192       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0328 00:52:37.051275       1 policy_source.go:224] refreshing policies
	I0328 00:52:37.063038       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0328 00:52:37.063240       1 aggregator.go:165] initial CRD sync complete...
	I0328 00:52:37.063284       1 autoregister_controller.go:141] Starting autoregister controller
	I0328 00:52:37.063314       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0328 00:52:37.063342       1 cache.go:39] Caches are synced for autoregister controller
	I0328 00:52:37.119384       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0328 00:52:37.919173       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0328 00:52:38.627072       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0328 00:52:38.644362       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0328 00:52:38.693051       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0328 00:52:38.802880       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0328 00:52:38.813257       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [c63adcfe748b4f461d0b90dfc33d81a986edbd41f0f4066814c30805a8bf036f] <==
	I0328 00:52:26.476314       1 serving.go:380] Generated self-signed cert in-memory
	
	
	==> kube-controller-manager [ef5194763932673885ef6d9abf9c0dd2d8271d2a7fa31395ce3fa9757b0ad0d1] <==
	I0328 00:52:40.844657       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0328 00:52:40.894091       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0328 00:52:40.894270       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0328 00:52:40.894576       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0328 00:52:40.894719       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0328 00:52:40.894758       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0328 00:52:41.055254       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0328 00:52:41.055345       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0328 00:52:41.055362       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0328 00:52:41.193889       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0328 00:52:41.194255       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0328 00:52:41.194296       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0328 00:52:41.194317       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0328 00:52:41.244793       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0328 00:52:41.244931       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0328 00:52:41.244944       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0328 00:52:41.496248       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0328 00:52:41.496403       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0328 00:52:41.496419       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	E0328 00:52:41.544280       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0328 00:52:41.544306       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0328 00:52:41.544319       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0328 00:52:41.594769       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0328 00:52:41.594882       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0328 00:52:41.594909       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	
	
	==> kube-scheduler [f7cfaf7dda66b3859f6c06d60de89685c822c122fd6f2498e8fef0f6614730b6] <==
	I0328 00:52:34.648221       1 serving.go:380] Generated self-signed cert in-memory
	W0328 00:52:36.989777       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0328 00:52:36.989982       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0328 00:52:36.990134       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0328 00:52:36.990272       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0328 00:52:37.035599       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0-beta.0"
	I0328 00:52:37.035791       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 00:52:37.038323       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0328 00:52:37.038444       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0328 00:52:37.039182       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 00:52:37.038544       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 00:52:37.139505       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ff43377ffd798c54cae03d76c7793738943a083c5261998129e38d70a025ef33] <==
	
	
	==> kubelet <==
	Mar 28 00:52:37 kubernetes-upgrade-615158 kubelet[2329]: I0328 00:52:37.085987    2329 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-615158"
	Mar 28 00:52:37 kubernetes-upgrade-615158 kubelet[2329]: I0328 00:52:37.086134    2329 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-615158"
	Mar 28 00:52:37 kubernetes-upgrade-615158 kubelet[2329]: I0328 00:52:37.644182    2329 apiserver.go:52] "Watching apiserver"
	Mar 28 00:52:37 kubernetes-upgrade-615158 kubelet[2329]: E0328 00:52:37.648895    2329 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 00:52:37 kubernetes-upgrade-615158 kubelet[2329]: E0328 00:52:37.648936    2329 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 00:52:37 kubernetes-upgrade-615158 kubelet[2329]: E0328 00:52:37.648946    2329 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 00:52:37 kubernetes-upgrade-615158 kubelet[2329]: E0328 00:52:37.649431    2329 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 00:52:37 kubernetes-upgrade-615158 kubelet[2329]: E0328 00:52:37.649603    2329 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 00:52:37 kubernetes-upgrade-615158 kubelet[2329]: E0328 00:52:37.649653    2329 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 00:52:37 kubernetes-upgrade-615158 kubelet[2329]: E0328 00:52:37.649839    2329 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 00:52:37 kubernetes-upgrade-615158 kubelet[2329]: E0328 00:52:37.649897    2329 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 00:52:37 kubernetes-upgrade-615158 kubelet[2329]: E0328 00:52:37.649906    2329 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 00:52:37 kubernetes-upgrade-615158 kubelet[2329]: E0328 00:52:37.650690    2329 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 00:52:37 kubernetes-upgrade-615158 kubelet[2329]: E0328 00:52:37.650743    2329 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 00:52:37 kubernetes-upgrade-615158 kubelet[2329]: E0328 00:52:37.650753    2329 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 00:52:37 kubernetes-upgrade-615158 kubelet[2329]: I0328 00:52:37.658048    2329 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Mar 28 00:52:39 kubernetes-upgrade-615158 kubelet[2329]: E0328 00:52:39.093142    2329 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 00:52:39 kubernetes-upgrade-615158 kubelet[2329]: E0328 00:52:39.093210    2329 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 00:52:39 kubernetes-upgrade-615158 kubelet[2329]: E0328 00:52:39.093220    2329 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 00:52:40 kubernetes-upgrade-615158 kubelet[2329]: E0328 00:52:40.377875    2329 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 00:52:40 kubernetes-upgrade-615158 kubelet[2329]: E0328 00:52:40.377928    2329 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 00:52:40 kubernetes-upgrade-615158 kubelet[2329]: E0328 00:52:40.377936    2329 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 00:52:40 kubernetes-upgrade-615158 kubelet[2329]: E0328 00:52:40.923647    2329 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 00:52:40 kubernetes-upgrade-615158 kubelet[2329]: E0328 00:52:40.923692    2329 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 00:52:40 kubernetes-upgrade-615158 kubelet[2329]: E0328 00:52:40.923697    2329 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0328 00:52:40.979502 1124531 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18485-1069254/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-615158 -n kubernetes-upgrade-615158
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-615158 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-615158 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-615158 describe pod storage-provisioner: exit status 1 (67.682883ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-615158 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-615158" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-615158
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-615158: (1.129271071s)
--- FAIL: TestKubernetesUpgrade (393.51s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (100.49s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-040046 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-040046 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m35.489124416s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-040046] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-040046" primary control-plane node in "pause-040046" cluster
	* Updating the running kvm2 "pause-040046" VM ...
	* Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-040046" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 00:47:09.124605 1114185 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:47:09.124779 1114185 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:47:09.124792 1114185 out.go:304] Setting ErrFile to fd 2...
	I0328 00:47:09.124798 1114185 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:47:09.125146 1114185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 00:47:09.126044 1114185 out.go:298] Setting JSON to false
	I0328 00:47:09.127374 1114185 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":30526,"bootTime":1711556303,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0328 00:47:09.127452 1114185 start.go:139] virtualization: kvm guest
	I0328 00:47:09.130019 1114185 out.go:177] * [pause-040046] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0328 00:47:09.131451 1114185 notify.go:220] Checking for updates...
	I0328 00:47:09.131489 1114185 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 00:47:09.133079 1114185 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 00:47:09.134761 1114185 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 00:47:09.136408 1114185 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	I0328 00:47:09.137885 1114185 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0328 00:47:09.139238 1114185 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 00:47:09.141015 1114185 config.go:182] Loaded profile config "pause-040046": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:47:09.141469 1114185 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 00:47:09.141530 1114185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:47:09.158516 1114185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39617
	I0328 00:47:09.159188 1114185 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:47:09.159874 1114185 main.go:141] libmachine: Using API Version  1
	I0328 00:47:09.159941 1114185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:47:09.160426 1114185 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:47:09.160659 1114185 main.go:141] libmachine: (pause-040046) Calling .DriverName
	I0328 00:47:09.160957 1114185 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 00:47:09.161437 1114185 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 00:47:09.161486 1114185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:47:09.179313 1114185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42551
	I0328 00:47:09.179877 1114185 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:47:09.180459 1114185 main.go:141] libmachine: Using API Version  1
	I0328 00:47:09.180484 1114185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:47:09.180896 1114185 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:47:09.181117 1114185 main.go:141] libmachine: (pause-040046) Calling .DriverName
	I0328 00:47:09.220788 1114185 out.go:177] * Using the kvm2 driver based on existing profile
	I0328 00:47:09.222177 1114185 start.go:297] selected driver: kvm2
	I0328 00:47:09.222198 1114185 start.go:901] validating driver "kvm2" against &{Name:pause-040046 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:pause-040046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.233 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:47:09.222410 1114185 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 00:47:09.222933 1114185 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 00:47:09.223054 1114185 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18485-1069254/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0328 00:47:09.239905 1114185 install.go:137] /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0328 00:47:09.240690 1114185 cni.go:84] Creating CNI manager for ""
	I0328 00:47:09.240707 1114185 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 00:47:09.240765 1114185 start.go:340] cluster config:
	{Name:pause-040046 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:pause-040046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.233 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:47:09.240896 1114185 iso.go:125] acquiring lock: {Name:mk3da1fa7d63a581c817f327d86a827697458cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 00:47:09.243625 1114185 out.go:177] * Starting "pause-040046" primary control-plane node in "pause-040046" cluster
	I0328 00:47:09.244926 1114185 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 00:47:09.245000 1114185 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0328 00:47:09.245016 1114185 cache.go:56] Caching tarball of preloaded images
	I0328 00:47:09.245121 1114185 preload.go:173] Found /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0328 00:47:09.245141 1114185 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0328 00:47:09.245298 1114185 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/pause-040046/config.json ...
	I0328 00:47:09.245534 1114185 start.go:360] acquireMachinesLock for pause-040046: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 00:47:48.391076 1114185 start.go:364] duration metric: took 39.145450949s to acquireMachinesLock for "pause-040046"
	I0328 00:47:48.391193 1114185 start.go:96] Skipping create...Using existing machine configuration
	I0328 00:47:48.391203 1114185 fix.go:54] fixHost starting: 
	I0328 00:47:48.391695 1114185 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 00:47:48.391740 1114185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:47:48.409444 1114185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33961
	I0328 00:47:48.409895 1114185 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:47:48.410441 1114185 main.go:141] libmachine: Using API Version  1
	I0328 00:47:48.410475 1114185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:47:48.410870 1114185 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:47:48.411080 1114185 main.go:141] libmachine: (pause-040046) Calling .DriverName
	I0328 00:47:48.411275 1114185 main.go:141] libmachine: (pause-040046) Calling .GetState
	I0328 00:47:48.412868 1114185 fix.go:112] recreateIfNeeded on pause-040046: state=Running err=<nil>
	W0328 00:47:48.412889 1114185 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 00:47:48.415019 1114185 out.go:177] * Updating the running kvm2 "pause-040046" VM ...
	I0328 00:47:48.416230 1114185 machine.go:94] provisionDockerMachine start ...
	I0328 00:47:48.416256 1114185 main.go:141] libmachine: (pause-040046) Calling .DriverName
	I0328 00:47:48.416491 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHHostname
	I0328 00:47:48.418926 1114185 main.go:141] libmachine: (pause-040046) DBG | domain pause-040046 has defined MAC address 52:54:00:3c:48:cd in network mk-pause-040046
	I0328 00:47:48.419334 1114185 main.go:141] libmachine: (pause-040046) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:48:cd", ip: ""} in network mk-pause-040046: {Iface:virbr1 ExpiryTime:2024-03-28 01:46:23 +0000 UTC Type:0 Mac:52:54:00:3c:48:cd Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:pause-040046 Clientid:01:52:54:00:3c:48:cd}
	I0328 00:47:48.419358 1114185 main.go:141] libmachine: (pause-040046) DBG | domain pause-040046 has defined IP address 192.168.39.233 and MAC address 52:54:00:3c:48:cd in network mk-pause-040046
	I0328 00:47:48.419533 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHPort
	I0328 00:47:48.419706 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHKeyPath
	I0328 00:47:48.419915 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHKeyPath
	I0328 00:47:48.420063 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHUsername
	I0328 00:47:48.420249 1114185 main.go:141] libmachine: Using SSH client type: native
	I0328 00:47:48.420471 1114185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0328 00:47:48.420484 1114185 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 00:47:48.539655 1114185 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-040046
	
	I0328 00:47:48.539689 1114185 main.go:141] libmachine: (pause-040046) Calling .GetMachineName
	I0328 00:47:48.539988 1114185 buildroot.go:166] provisioning hostname "pause-040046"
	I0328 00:47:48.540043 1114185 main.go:141] libmachine: (pause-040046) Calling .GetMachineName
	I0328 00:47:48.540298 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHHostname
	I0328 00:47:48.542990 1114185 main.go:141] libmachine: (pause-040046) DBG | domain pause-040046 has defined MAC address 52:54:00:3c:48:cd in network mk-pause-040046
	I0328 00:47:48.543436 1114185 main.go:141] libmachine: (pause-040046) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:48:cd", ip: ""} in network mk-pause-040046: {Iface:virbr1 ExpiryTime:2024-03-28 01:46:23 +0000 UTC Type:0 Mac:52:54:00:3c:48:cd Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:pause-040046 Clientid:01:52:54:00:3c:48:cd}
	I0328 00:47:48.543470 1114185 main.go:141] libmachine: (pause-040046) DBG | domain pause-040046 has defined IP address 192.168.39.233 and MAC address 52:54:00:3c:48:cd in network mk-pause-040046
	I0328 00:47:48.543685 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHPort
	I0328 00:47:48.543903 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHKeyPath
	I0328 00:47:48.544097 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHKeyPath
	I0328 00:47:48.544259 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHUsername
	I0328 00:47:48.544441 1114185 main.go:141] libmachine: Using SSH client type: native
	I0328 00:47:48.544648 1114185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0328 00:47:48.544665 1114185 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-040046 && echo "pause-040046" | sudo tee /etc/hostname
	I0328 00:47:48.669982 1114185 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-040046
	
	I0328 00:47:48.670040 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHHostname
	I0328 00:47:48.672744 1114185 main.go:141] libmachine: (pause-040046) DBG | domain pause-040046 has defined MAC address 52:54:00:3c:48:cd in network mk-pause-040046
	I0328 00:47:48.673052 1114185 main.go:141] libmachine: (pause-040046) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:48:cd", ip: ""} in network mk-pause-040046: {Iface:virbr1 ExpiryTime:2024-03-28 01:46:23 +0000 UTC Type:0 Mac:52:54:00:3c:48:cd Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:pause-040046 Clientid:01:52:54:00:3c:48:cd}
	I0328 00:47:48.673085 1114185 main.go:141] libmachine: (pause-040046) DBG | domain pause-040046 has defined IP address 192.168.39.233 and MAC address 52:54:00:3c:48:cd in network mk-pause-040046
	I0328 00:47:48.673321 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHPort
	I0328 00:47:48.673517 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHKeyPath
	I0328 00:47:48.673733 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHKeyPath
	I0328 00:47:48.673902 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHUsername
	I0328 00:47:48.674114 1114185 main.go:141] libmachine: Using SSH client type: native
	I0328 00:47:48.674321 1114185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0328 00:47:48.674338 1114185 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-040046' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-040046/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-040046' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 00:47:48.787608 1114185 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 00:47:48.787643 1114185 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 00:47:48.787709 1114185 buildroot.go:174] setting up certificates
	I0328 00:47:48.787721 1114185 provision.go:84] configureAuth start
	I0328 00:47:48.787748 1114185 main.go:141] libmachine: (pause-040046) Calling .GetMachineName
	I0328 00:47:48.788066 1114185 main.go:141] libmachine: (pause-040046) Calling .GetIP
	I0328 00:47:48.791139 1114185 main.go:141] libmachine: (pause-040046) DBG | domain pause-040046 has defined MAC address 52:54:00:3c:48:cd in network mk-pause-040046
	I0328 00:47:48.791565 1114185 main.go:141] libmachine: (pause-040046) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:48:cd", ip: ""} in network mk-pause-040046: {Iface:virbr1 ExpiryTime:2024-03-28 01:46:23 +0000 UTC Type:0 Mac:52:54:00:3c:48:cd Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:pause-040046 Clientid:01:52:54:00:3c:48:cd}
	I0328 00:47:48.791600 1114185 main.go:141] libmachine: (pause-040046) DBG | domain pause-040046 has defined IP address 192.168.39.233 and MAC address 52:54:00:3c:48:cd in network mk-pause-040046
	I0328 00:47:48.791798 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHHostname
	I0328 00:47:48.794333 1114185 main.go:141] libmachine: (pause-040046) DBG | domain pause-040046 has defined MAC address 52:54:00:3c:48:cd in network mk-pause-040046
	I0328 00:47:48.794722 1114185 main.go:141] libmachine: (pause-040046) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:48:cd", ip: ""} in network mk-pause-040046: {Iface:virbr1 ExpiryTime:2024-03-28 01:46:23 +0000 UTC Type:0 Mac:52:54:00:3c:48:cd Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:pause-040046 Clientid:01:52:54:00:3c:48:cd}
	I0328 00:47:48.794767 1114185 main.go:141] libmachine: (pause-040046) DBG | domain pause-040046 has defined IP address 192.168.39.233 and MAC address 52:54:00:3c:48:cd in network mk-pause-040046
	I0328 00:47:48.794936 1114185 provision.go:143] copyHostCerts
	I0328 00:47:48.795026 1114185 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 00:47:48.795067 1114185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 00:47:48.795142 1114185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 00:47:48.795267 1114185 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 00:47:48.795278 1114185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 00:47:48.795311 1114185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 00:47:48.795387 1114185 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 00:47:48.795397 1114185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 00:47:48.795427 1114185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 00:47:48.795499 1114185 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.pause-040046 san=[127.0.0.1 192.168.39.233 localhost minikube pause-040046]
	I0328 00:47:49.000994 1114185 provision.go:177] copyRemoteCerts
	I0328 00:47:49.001085 1114185 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 00:47:49.001122 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHHostname
	I0328 00:47:49.003847 1114185 main.go:141] libmachine: (pause-040046) DBG | domain pause-040046 has defined MAC address 52:54:00:3c:48:cd in network mk-pause-040046
	I0328 00:47:49.004193 1114185 main.go:141] libmachine: (pause-040046) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:48:cd", ip: ""} in network mk-pause-040046: {Iface:virbr1 ExpiryTime:2024-03-28 01:46:23 +0000 UTC Type:0 Mac:52:54:00:3c:48:cd Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:pause-040046 Clientid:01:52:54:00:3c:48:cd}
	I0328 00:47:49.004235 1114185 main.go:141] libmachine: (pause-040046) DBG | domain pause-040046 has defined IP address 192.168.39.233 and MAC address 52:54:00:3c:48:cd in network mk-pause-040046
	I0328 00:47:49.004433 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHPort
	I0328 00:47:49.004643 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHKeyPath
	I0328 00:47:49.004829 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHUsername
	I0328 00:47:49.005007 1114185 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/pause-040046/id_rsa Username:docker}
	I0328 00:47:49.092507 1114185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0328 00:47:49.122426 1114185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0328 00:47:49.153623 1114185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 00:47:49.182559 1114185 provision.go:87] duration metric: took 394.819607ms to configureAuth
	I0328 00:47:49.182594 1114185 buildroot.go:189] setting minikube options for container-runtime
	I0328 00:47:49.182850 1114185 config.go:182] Loaded profile config "pause-040046": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:47:49.182958 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHHostname
	I0328 00:47:49.185711 1114185 main.go:141] libmachine: (pause-040046) DBG | domain pause-040046 has defined MAC address 52:54:00:3c:48:cd in network mk-pause-040046
	I0328 00:47:49.186155 1114185 main.go:141] libmachine: (pause-040046) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:48:cd", ip: ""} in network mk-pause-040046: {Iface:virbr1 ExpiryTime:2024-03-28 01:46:23 +0000 UTC Type:0 Mac:52:54:00:3c:48:cd Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:pause-040046 Clientid:01:52:54:00:3c:48:cd}
	I0328 00:47:49.186191 1114185 main.go:141] libmachine: (pause-040046) DBG | domain pause-040046 has defined IP address 192.168.39.233 and MAC address 52:54:00:3c:48:cd in network mk-pause-040046
	I0328 00:47:49.186385 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHPort
	I0328 00:47:49.186665 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHKeyPath
	I0328 00:47:49.186856 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHKeyPath
	I0328 00:47:49.187031 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHUsername
	I0328 00:47:49.187229 1114185 main.go:141] libmachine: Using SSH client type: native
	I0328 00:47:49.187476 1114185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0328 00:47:49.187495 1114185 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 00:47:54.813526 1114185 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 00:47:54.813566 1114185 machine.go:97] duration metric: took 6.397316927s to provisionDockerMachine
	I0328 00:47:54.813582 1114185 start.go:293] postStartSetup for "pause-040046" (driver="kvm2")
	I0328 00:47:54.813596 1114185 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 00:47:54.813627 1114185 main.go:141] libmachine: (pause-040046) Calling .DriverName
	I0328 00:47:54.814045 1114185 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 00:47:54.814080 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHHostname
	I0328 00:47:54.817494 1114185 main.go:141] libmachine: (pause-040046) DBG | domain pause-040046 has defined MAC address 52:54:00:3c:48:cd in network mk-pause-040046
	I0328 00:47:54.818026 1114185 main.go:141] libmachine: (pause-040046) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:48:cd", ip: ""} in network mk-pause-040046: {Iface:virbr1 ExpiryTime:2024-03-28 01:46:23 +0000 UTC Type:0 Mac:52:54:00:3c:48:cd Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:pause-040046 Clientid:01:52:54:00:3c:48:cd}
	I0328 00:47:54.818058 1114185 main.go:141] libmachine: (pause-040046) DBG | domain pause-040046 has defined IP address 192.168.39.233 and MAC address 52:54:00:3c:48:cd in network mk-pause-040046
	I0328 00:47:54.818472 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHPort
	I0328 00:47:54.818734 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHKeyPath
	I0328 00:47:54.818963 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHUsername
	I0328 00:47:54.819140 1114185 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/pause-040046/id_rsa Username:docker}
	I0328 00:47:54.930612 1114185 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 00:47:54.935754 1114185 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 00:47:54.935791 1114185 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 00:47:54.935881 1114185 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 00:47:54.935984 1114185 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 00:47:54.936108 1114185 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 00:47:54.947936 1114185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 00:47:54.979882 1114185 start.go:296] duration metric: took 166.279539ms for postStartSetup
	I0328 00:47:54.979941 1114185 fix.go:56] duration metric: took 6.58873766s for fixHost
	I0328 00:47:54.979969 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHHostname
	I0328 00:47:54.983384 1114185 main.go:141] libmachine: (pause-040046) DBG | domain pause-040046 has defined MAC address 52:54:00:3c:48:cd in network mk-pause-040046
	I0328 00:47:54.983811 1114185 main.go:141] libmachine: (pause-040046) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:48:cd", ip: ""} in network mk-pause-040046: {Iface:virbr1 ExpiryTime:2024-03-28 01:46:23 +0000 UTC Type:0 Mac:52:54:00:3c:48:cd Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:pause-040046 Clientid:01:52:54:00:3c:48:cd}
	I0328 00:47:54.983843 1114185 main.go:141] libmachine: (pause-040046) DBG | domain pause-040046 has defined IP address 192.168.39.233 and MAC address 52:54:00:3c:48:cd in network mk-pause-040046
	I0328 00:47:54.984068 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHPort
	I0328 00:47:54.984310 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHKeyPath
	I0328 00:47:54.984491 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHKeyPath
	I0328 00:47:54.984680 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHUsername
	I0328 00:47:54.984889 1114185 main.go:141] libmachine: Using SSH client type: native
	I0328 00:47:54.985062 1114185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0328 00:47:54.985073 1114185 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0328 00:47:55.107446 1114185 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711586875.100115716
	
	I0328 00:47:55.107477 1114185 fix.go:216] guest clock: 1711586875.100115716
	I0328 00:47:55.107487 1114185 fix.go:229] Guest: 2024-03-28 00:47:55.100115716 +0000 UTC Remote: 2024-03-28 00:47:54.979947156 +0000 UTC m=+45.914021235 (delta=120.16856ms)
	I0328 00:47:55.107514 1114185 fix.go:200] guest clock delta is within tolerance: 120.16856ms
	I0328 00:47:55.107521 1114185 start.go:83] releasing machines lock for "pause-040046", held for 6.716359761s
	I0328 00:47:55.107551 1114185 main.go:141] libmachine: (pause-040046) Calling .DriverName
	I0328 00:47:55.107863 1114185 main.go:141] libmachine: (pause-040046) Calling .GetIP
	I0328 00:47:55.110633 1114185 main.go:141] libmachine: (pause-040046) DBG | domain pause-040046 has defined MAC address 52:54:00:3c:48:cd in network mk-pause-040046
	I0328 00:47:55.111095 1114185 main.go:141] libmachine: (pause-040046) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:48:cd", ip: ""} in network mk-pause-040046: {Iface:virbr1 ExpiryTime:2024-03-28 01:46:23 +0000 UTC Type:0 Mac:52:54:00:3c:48:cd Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:pause-040046 Clientid:01:52:54:00:3c:48:cd}
	I0328 00:47:55.111123 1114185 main.go:141] libmachine: (pause-040046) DBG | domain pause-040046 has defined IP address 192.168.39.233 and MAC address 52:54:00:3c:48:cd in network mk-pause-040046
	I0328 00:47:55.111326 1114185 main.go:141] libmachine: (pause-040046) Calling .DriverName
	I0328 00:47:55.112087 1114185 main.go:141] libmachine: (pause-040046) Calling .DriverName
	I0328 00:47:55.112300 1114185 main.go:141] libmachine: (pause-040046) Calling .DriverName
	I0328 00:47:55.112413 1114185 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 00:47:55.112479 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHHostname
	I0328 00:47:55.112554 1114185 ssh_runner.go:195] Run: cat /version.json
	I0328 00:47:55.112583 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHHostname
	I0328 00:47:55.115738 1114185 main.go:141] libmachine: (pause-040046) DBG | domain pause-040046 has defined MAC address 52:54:00:3c:48:cd in network mk-pause-040046
	I0328 00:47:55.115983 1114185 main.go:141] libmachine: (pause-040046) DBG | domain pause-040046 has defined MAC address 52:54:00:3c:48:cd in network mk-pause-040046
	I0328 00:47:55.116126 1114185 main.go:141] libmachine: (pause-040046) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:48:cd", ip: ""} in network mk-pause-040046: {Iface:virbr1 ExpiryTime:2024-03-28 01:46:23 +0000 UTC Type:0 Mac:52:54:00:3c:48:cd Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:pause-040046 Clientid:01:52:54:00:3c:48:cd}
	I0328 00:47:55.116150 1114185 main.go:141] libmachine: (pause-040046) DBG | domain pause-040046 has defined IP address 192.168.39.233 and MAC address 52:54:00:3c:48:cd in network mk-pause-040046
	I0328 00:47:55.116386 1114185 main.go:141] libmachine: (pause-040046) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:48:cd", ip: ""} in network mk-pause-040046: {Iface:virbr1 ExpiryTime:2024-03-28 01:46:23 +0000 UTC Type:0 Mac:52:54:00:3c:48:cd Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:pause-040046 Clientid:01:52:54:00:3c:48:cd}
	I0328 00:47:55.116413 1114185 main.go:141] libmachine: (pause-040046) DBG | domain pause-040046 has defined IP address 192.168.39.233 and MAC address 52:54:00:3c:48:cd in network mk-pause-040046
	I0328 00:47:55.116385 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHPort
	I0328 00:47:55.116541 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHPort
	I0328 00:47:55.116639 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHKeyPath
	I0328 00:47:55.116758 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHKeyPath
	I0328 00:47:55.116841 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHUsername
	I0328 00:47:55.116920 1114185 main.go:141] libmachine: (pause-040046) Calling .GetSSHUsername
	I0328 00:47:55.116967 1114185 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/pause-040046/id_rsa Username:docker}
	I0328 00:47:55.117376 1114185 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/pause-040046/id_rsa Username:docker}
	I0328 00:47:55.234706 1114185 ssh_runner.go:195] Run: systemctl --version
	I0328 00:47:55.243801 1114185 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 00:47:55.417367 1114185 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 00:47:55.425058 1114185 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 00:47:55.425130 1114185 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 00:47:55.435835 1114185 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0328 00:47:55.435862 1114185 start.go:494] detecting cgroup driver to use...
	I0328 00:47:55.435937 1114185 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 00:47:55.453610 1114185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 00:47:55.469485 1114185 docker.go:217] disabling cri-docker service (if available) ...
	I0328 00:47:55.469569 1114185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 00:47:55.490387 1114185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 00:47:55.510487 1114185 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 00:47:55.677532 1114185 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 00:47:55.837804 1114185 docker.go:233] disabling docker service ...
	I0328 00:47:55.837900 1114185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 00:47:55.861151 1114185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 00:47:55.877071 1114185 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 00:47:56.042400 1114185 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 00:47:56.203961 1114185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 00:47:56.219471 1114185 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 00:47:56.248977 1114185 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 00:47:56.249058 1114185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:47:56.260624 1114185 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 00:47:56.260712 1114185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:47:56.273022 1114185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:47:56.283897 1114185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:47:56.294902 1114185 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 00:47:56.306347 1114185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:47:56.317682 1114185 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:47:56.333172 1114185 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:47:56.344668 1114185 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 00:47:56.356871 1114185 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 00:47:56.370620 1114185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:47:56.524928 1114185 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 00:47:57.003375 1114185 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 00:47:57.003463 1114185 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 00:47:57.008748 1114185 start.go:562] Will wait 60s for crictl version
	I0328 00:47:57.008815 1114185 ssh_runner.go:195] Run: which crictl
	I0328 00:47:57.013046 1114185 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 00:47:57.069553 1114185 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 00:47:57.069660 1114185 ssh_runner.go:195] Run: crio --version
	I0328 00:47:57.225501 1114185 ssh_runner.go:195] Run: crio --version
	I0328 00:47:57.491028 1114185 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0328 00:47:57.492326 1114185 main.go:141] libmachine: (pause-040046) Calling .GetIP
	I0328 00:47:57.495782 1114185 main.go:141] libmachine: (pause-040046) DBG | domain pause-040046 has defined MAC address 52:54:00:3c:48:cd in network mk-pause-040046
	I0328 00:47:57.496176 1114185 main.go:141] libmachine: (pause-040046) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:48:cd", ip: ""} in network mk-pause-040046: {Iface:virbr1 ExpiryTime:2024-03-28 01:46:23 +0000 UTC Type:0 Mac:52:54:00:3c:48:cd Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:pause-040046 Clientid:01:52:54:00:3c:48:cd}
	I0328 00:47:57.496208 1114185 main.go:141] libmachine: (pause-040046) DBG | domain pause-040046 has defined IP address 192.168.39.233 and MAC address 52:54:00:3c:48:cd in network mk-pause-040046
	I0328 00:47:57.496522 1114185 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0328 00:47:57.519344 1114185 kubeadm.go:877] updating cluster {Name:pause-040046 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:pause-040046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.233 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 00:47:57.519548 1114185 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 00:47:57.519655 1114185 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 00:47:57.820416 1114185 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 00:47:57.820447 1114185 crio.go:433] Images already preloaded, skipping extraction
	I0328 00:47:57.820526 1114185 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 00:47:58.031286 1114185 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 00:47:58.031315 1114185 cache_images.go:84] Images are preloaded, skipping loading
	I0328 00:47:58.031323 1114185 kubeadm.go:928] updating node { 192.168.39.233 8443 v1.29.3 crio true true} ...
	I0328 00:47:58.031432 1114185 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-040046 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.233
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:pause-040046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 00:47:58.031497 1114185 ssh_runner.go:195] Run: crio config
	I0328 00:47:58.227759 1114185 cni.go:84] Creating CNI manager for ""
	I0328 00:47:58.227790 1114185 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 00:47:58.227803 1114185 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 00:47:58.227833 1114185 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.233 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-040046 NodeName:pause-040046 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.233"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.233 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 00:47:58.228021 1114185 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.233
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-040046"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.233
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.233"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 00:47:58.228085 1114185 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 00:47:58.245399 1114185 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 00:47:58.245485 1114185 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 00:47:58.265409 1114185 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0328 00:47:58.308524 1114185 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 00:47:58.404480 1114185 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0328 00:47:58.476795 1114185 ssh_runner.go:195] Run: grep 192.168.39.233	control-plane.minikube.internal$ /etc/hosts
	I0328 00:47:58.485358 1114185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:47:58.700241 1114185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 00:47:58.723768 1114185 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/pause-040046 for IP: 192.168.39.233
	I0328 00:47:58.723799 1114185 certs.go:194] generating shared ca certs ...
	I0328 00:47:58.723822 1114185 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:47:58.723996 1114185 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 00:47:58.724051 1114185 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 00:47:58.724064 1114185 certs.go:256] generating profile certs ...
	I0328 00:47:58.724175 1114185 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/pause-040046/client.key
	I0328 00:47:58.724245 1114185 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/pause-040046/apiserver.key.065f0b6c
	I0328 00:47:58.724290 1114185 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/pause-040046/proxy-client.key
	I0328 00:47:58.724451 1114185 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 00:47:58.724486 1114185 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 00:47:58.724499 1114185 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 00:47:58.724533 1114185 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 00:47:58.724571 1114185 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 00:47:58.724597 1114185 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 00:47:58.724639 1114185 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 00:47:58.725410 1114185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 00:47:58.761040 1114185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 00:47:58.805396 1114185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 00:47:58.847843 1114185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 00:47:58.880885 1114185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/pause-040046/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0328 00:47:58.921769 1114185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/pause-040046/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0328 00:47:58.952435 1114185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/pause-040046/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 00:47:58.991198 1114185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/pause-040046/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0328 00:47:59.023713 1114185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 00:47:59.058658 1114185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 00:47:59.095833 1114185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 00:47:59.127357 1114185 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 00:47:59.147307 1114185 ssh_runner.go:195] Run: openssl version
	I0328 00:47:59.156717 1114185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 00:47:59.174042 1114185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 00:47:59.179380 1114185 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 00:47:59.179462 1114185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 00:47:59.190817 1114185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 00:47:59.203752 1114185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 00:47:59.220678 1114185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:47:59.242577 1114185 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:47:59.242639 1114185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:47:59.249630 1114185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 00:47:59.264407 1114185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 00:47:59.283582 1114185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 00:47:59.290070 1114185 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 00:47:59.290149 1114185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 00:47:59.299130 1114185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 00:47:59.314805 1114185 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 00:47:59.321338 1114185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 00:47:59.333404 1114185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 00:47:59.343197 1114185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 00:47:59.354797 1114185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 00:47:59.364910 1114185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 00:47:59.374810 1114185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0328 00:47:59.382745 1114185 kubeadm.go:391] StartCluster: {Name:pause-040046 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:pause-040046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.233 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:47:59.382916 1114185 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 00:47:59.382983 1114185 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 00:47:59.439371 1114185 cri.go:89] found id: "2264e41f77cc4f28ea256a12349256853bb9b3a2421f649b7e75348e9561392b"
	I0328 00:47:59.439403 1114185 cri.go:89] found id: "94ad7b59f3f0b5484e2c6ba034025fedf7553ccecaad58ee34b0b4a996190bb0"
	I0328 00:47:59.439409 1114185 cri.go:89] found id: "737bc1da2980fc5534519112255b2b2e4ced04b8f71571c699f8e1eedf84b5e9"
	I0328 00:47:59.439414 1114185 cri.go:89] found id: "adae09481ab4f504b6cc144443edbfd307e7f230fd84bab6c985cdc13f9c0558"
	I0328 00:47:59.439419 1114185 cri.go:89] found id: "abd187f6614af02b4d750c2d77c5a355583e1d586c760fd769441be096ef6314"
	I0328 00:47:59.439423 1114185 cri.go:89] found id: "8bd72f355a2ea559cef3868b146cd6fd26cba6952ec6955c36683ada0df0f439"
	I0328 00:47:59.439427 1114185 cri.go:89] found id: "20649081e8f44b376470128e187fc37fb77863492adf3b7be0a2eab8a63bda75"
	I0328 00:47:59.439431 1114185 cri.go:89] found id: "34e4e75cea1d705c5dd964a67ec39a3b7b289f3af5828272a192f3414fdae7d9"
	I0328 00:47:59.439435 1114185 cri.go:89] found id: "f1b80b5f74b883c025391f59cd96e49bfff64827dad73acf3ece5d1fe287785d"
	I0328 00:47:59.439445 1114185 cri.go:89] found id: "77e139bd230aca37656fb0d8cba8540c7743407c2a3bac69db0cc4b451fe225f"
	I0328 00:47:59.439449 1114185 cri.go:89] found id: "87ffaf60a5785562c8cf29e0f09f9f498980669c6d87f59566be0f007672adbe"
	I0328 00:47:59.439454 1114185 cri.go:89] found id: "8e9767e33ea811d3cfe2b94ac0a4e6fd225f7fc34ed1dba1bbeabee2b65a9eb7"
	I0328 00:47:59.439458 1114185 cri.go:89] found id: ""
	I0328 00:47:59.439513 1114185 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-040046 -n pause-040046
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-040046 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-040046 logs -n 25: (1.763930234s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p cilium-443419 sudo cat                            | cilium-443419             | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |                |                     |                     |
	| ssh     | -p cilium-443419 sudo cat                            | cilium-443419             | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |                |                     |                     |
	| ssh     | -p cilium-443419 sudo                                | cilium-443419             | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |                |                     |                     |
	| ssh     | -p cilium-443419 sudo                                | cilium-443419             | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC |                     |
	|         | systemctl status containerd                          |                           |         |                |                     |                     |
	|         | --all --full --no-pager                              |                           |         |                |                     |                     |
	| stop    | -p NoKubernetes-636163                               | NoKubernetes-636163       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC | 28 Mar 24 00:46 UTC |
	| ssh     | -p cilium-443419 sudo                                | cilium-443419             | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| delete  | -p running-upgrade-642721                            | running-upgrade-642721    | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC | 28 Mar 24 00:46 UTC |
	| ssh     | -p cilium-443419 sudo cat                            | cilium-443419             | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |                |                     |                     |
	| ssh     | -p cilium-443419 sudo cat                            | cilium-443419             | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |                |                     |                     |
	| ssh     | -p cilium-443419 sudo                                | cilium-443419             | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC |                     |
	|         | containerd config dump                               |                           |         |                |                     |                     |
	| ssh     | -p cilium-443419 sudo                                | cilium-443419             | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |                |                     |                     |
	|         | --full --no-pager                                    |                           |         |                |                     |                     |
	| ssh     | -p cilium-443419 sudo                                | cilium-443419             | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |                |                     |                     |
	| ssh     | -p cilium-443419 sudo find                           | cilium-443419             | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |                |                     |                     |
	| ssh     | -p cilium-443419 sudo crio                           | cilium-443419             | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC |                     |
	|         | config                                               |                           |         |                |                     |                     |
	| delete  | -p cilium-443419                                     | cilium-443419             | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC | 28 Mar 24 00:46 UTC |
	| start   | -p pause-040046 --memory=2048                        | pause-040046              | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC | 28 Mar 24 00:47 UTC |
	|         | --install-addons=false                               |                           |         |                |                     |                     |
	|         | --wait=all --driver=kvm2                             |                           |         |                |                     |                     |
	|         | --container-runtime=crio                             |                           |         |                |                     |                     |
	| start   | -p NoKubernetes-636163                               | NoKubernetes-636163       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC | 28 Mar 24 00:46 UTC |
	|         | --driver=kvm2                                        |                           |         |                |                     |                     |
	|         | --container-runtime=crio                             |                           |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-615158                         | kubernetes-upgrade-615158 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC |                     |
	|         | --memory=2200                                        |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |                |                     |                     |
	|         | --alsologtostderr                                    |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |                |                     |                     |
	|         | --container-runtime=crio                             |                           |         |                |                     |                     |
	| ssh     | -p NoKubernetes-636163 sudo                          | NoKubernetes-636163       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC |                     |
	|         | systemctl is-active --quiet                          |                           |         |                |                     |                     |
	|         | service kubelet                                      |                           |         |                |                     |                     |
	| delete  | -p NoKubernetes-636163                               | NoKubernetes-636163       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC | 28 Mar 24 00:47 UTC |
	| start   | -p stopped-upgrade-317492                            | minikube                  | jenkins | v1.26.0        | 28 Mar 24 00:47 UTC | 28 Mar 24 00:48 UTC |
	|         | --memory=2200 --vm-driver=kvm2                       |                           |         |                |                     |                     |
	|         |  --container-runtime=crio                            |                           |         |                |                     |                     |
	| start   | -p pause-040046                                      | pause-040046              | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:47 UTC | 28 Mar 24 00:48 UTC |
	|         | --alsologtostderr                                    |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |                |                     |                     |
	|         | --container-runtime=crio                             |                           |         |                |                     |                     |
	| stop    | stopped-upgrade-317492 stop                          | minikube                  | jenkins | v1.26.0        | 28 Mar 24 00:48 UTC | 28 Mar 24 00:48 UTC |
	| start   | -p cert-expiration-927384                            | cert-expiration-927384    | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:48 UTC |                     |
	|         | --memory=2048                                        |                           |         |                |                     |                     |
	|         | --cert-expiration=8760h                              |                           |         |                |                     |                     |
	|         | --driver=kvm2                                        |                           |         |                |                     |                     |
	|         | --container-runtime=crio                             |                           |         |                |                     |                     |
	| start   | -p stopped-upgrade-317492                            | stopped-upgrade-317492    | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:48 UTC |                     |
	|         | --memory=2200                                        |                           |         |                |                     |                     |
	|         | --alsologtostderr                                    |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |                |                     |                     |
	|         | --container-runtime=crio                             |                           |         |                |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 00:48:18
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 00:48:18.231161 1114767 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:48:18.231312 1114767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:48:18.231338 1114767 out.go:304] Setting ErrFile to fd 2...
	I0328 00:48:18.231353 1114767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:48:18.231562 1114767 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 00:48:18.232217 1114767 out.go:298] Setting JSON to false
	I0328 00:48:18.233288 1114767 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":30595,"bootTime":1711556303,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0328 00:48:18.233355 1114767 start.go:139] virtualization: kvm guest
	I0328 00:48:18.235476 1114767 out.go:177] * [stopped-upgrade-317492] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0328 00:48:18.236858 1114767 notify.go:220] Checking for updates...
	I0328 00:48:18.238533 1114767 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 00:48:18.239817 1114767 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 00:48:18.241011 1114767 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 00:48:18.242219 1114767 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	I0328 00:48:18.243461 1114767 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0328 00:48:18.244668 1114767 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 00:48:18.246349 1114767 config.go:182] Loaded profile config "stopped-upgrade-317492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0328 00:48:18.246720 1114767 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 00:48:18.246767 1114767 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:48:18.261988 1114767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39757
	I0328 00:48:18.262499 1114767 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:48:18.263097 1114767 main.go:141] libmachine: Using API Version  1
	I0328 00:48:18.263131 1114767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:48:18.263631 1114767 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:48:18.263878 1114767 main.go:141] libmachine: (stopped-upgrade-317492) Calling .DriverName
	I0328 00:48:18.265964 1114767 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0328 00:48:18.267419 1114767 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 00:48:18.267733 1114767 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 00:48:18.267778 1114767 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:48:18.282876 1114767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40287
	I0328 00:48:18.283309 1114767 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:48:18.283766 1114767 main.go:141] libmachine: Using API Version  1
	I0328 00:48:18.283790 1114767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:48:18.284123 1114767 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:48:18.284353 1114767 main.go:141] libmachine: (stopped-upgrade-317492) Calling .DriverName
	I0328 00:48:18.323384 1114767 out.go:177] * Using the kvm2 driver based on existing profile
	I0328 00:48:18.324629 1114767 start.go:297] selected driver: kvm2
	I0328 00:48:18.324641 1114767 start.go:901] validating driver "kvm2" against &{Name:stopped-upgrade-317492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-317492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.128 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0328 00:48:18.324761 1114767 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 00:48:18.325463 1114767 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 00:48:18.325558 1114767 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18485-1069254/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0328 00:48:18.340532 1114767 install.go:137] /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0328 00:48:18.340909 1114767 cni.go:84] Creating CNI manager for ""
	I0328 00:48:18.340928 1114767 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 00:48:18.340985 1114767 start.go:340] cluster config:
	{Name:stopped-upgrade-317492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-317492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.128 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0328 00:48:18.341093 1114767 iso.go:125] acquiring lock: {Name:mk3da1fa7d63a581c817f327d86a827697458cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 00:48:18.342999 1114767 out.go:177] * Starting "stopped-upgrade-317492" primary control-plane node in "stopped-upgrade-317492" cluster
	I0328 00:48:16.841915 1114710 machine.go:94] provisionDockerMachine start ...
	I0328 00:48:16.841951 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .DriverName
	I0328 00:48:16.842281 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHHostname
	I0328 00:48:16.845169 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:16.845555 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:d9:b7", ip: ""} in network mk-cert-expiration-927384: {Iface:virbr4 ExpiryTime:2024-03-28 01:44:44 +0000 UTC Type:0 Mac:52:54:00:71:d9:b7 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:cert-expiration-927384 Clientid:01:52:54:00:71:d9:b7}
	I0328 00:48:16.845581 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined IP address 192.168.72.11 and MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:16.845733 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHPort
	I0328 00:48:16.845957 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHKeyPath
	I0328 00:48:16.846141 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHKeyPath
	I0328 00:48:16.846298 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHUsername
	I0328 00:48:16.846491 1114710 main.go:141] libmachine: Using SSH client type: native
	I0328 00:48:16.846759 1114710 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0328 00:48:16.846778 1114710 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 00:48:16.969166 1114710 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-927384
	
	I0328 00:48:16.969193 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetMachineName
	I0328 00:48:16.969480 1114710 buildroot.go:166] provisioning hostname "cert-expiration-927384"
	I0328 00:48:16.969499 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetMachineName
	I0328 00:48:16.969693 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHHostname
	I0328 00:48:16.973311 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:16.973790 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:d9:b7", ip: ""} in network mk-cert-expiration-927384: {Iface:virbr4 ExpiryTime:2024-03-28 01:44:44 +0000 UTC Type:0 Mac:52:54:00:71:d9:b7 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:cert-expiration-927384 Clientid:01:52:54:00:71:d9:b7}
	I0328 00:48:16.973810 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined IP address 192.168.72.11 and MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:16.974035 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHPort
	I0328 00:48:16.974223 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHKeyPath
	I0328 00:48:16.974388 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHKeyPath
	I0328 00:48:16.974523 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHUsername
	I0328 00:48:16.974692 1114710 main.go:141] libmachine: Using SSH client type: native
	I0328 00:48:16.974913 1114710 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0328 00:48:16.974925 1114710 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-927384 && echo "cert-expiration-927384" | sudo tee /etc/hostname
	I0328 00:48:17.115425 1114710 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-927384
	
	I0328 00:48:17.115450 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHHostname
	I0328 00:48:17.118689 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:17.119074 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:d9:b7", ip: ""} in network mk-cert-expiration-927384: {Iface:virbr4 ExpiryTime:2024-03-28 01:44:44 +0000 UTC Type:0 Mac:52:54:00:71:d9:b7 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:cert-expiration-927384 Clientid:01:52:54:00:71:d9:b7}
	I0328 00:48:17.119100 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined IP address 192.168.72.11 and MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:17.119314 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHPort
	I0328 00:48:17.119514 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHKeyPath
	I0328 00:48:17.119669 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHKeyPath
	I0328 00:48:17.119817 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHUsername
	I0328 00:48:17.119991 1114710 main.go:141] libmachine: Using SSH client type: native
	I0328 00:48:17.120222 1114710 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0328 00:48:17.120235 1114710 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-927384' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-927384/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-927384' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 00:48:17.236133 1114710 main.go:141] libmachine: SSH cmd err, output: <nil>: 
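
The SSH command above guards /etc/hosts so the machine's hostname always resolves locally before provisioning continues. A minimal Go sketch of building that same idempotent shell snippet (illustrative only, not the actual minikube provisioner; the hostname passed in main is taken from the log):

package main

import "fmt"

// hostsCommand builds an idempotent shell snippet like the one in the log
// above: if no /etc/hosts line already maps the hostname, rewrite an existing
// 127.0.1.1 entry or append one. Sketch only; not the minikube source.
func hostsCommand(hostname string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() {
	fmt.Println(hostsCommand("cert-expiration-927384"))
}
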
	I0328 00:48:17.236157 1114710 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 00:48:17.236184 1114710 buildroot.go:174] setting up certificates
	I0328 00:48:17.236192 1114710 provision.go:84] configureAuth start
	I0328 00:48:17.236200 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetMachineName
	I0328 00:48:17.236534 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetIP
	I0328 00:48:17.239853 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:17.240250 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:d9:b7", ip: ""} in network mk-cert-expiration-927384: {Iface:virbr4 ExpiryTime:2024-03-28 01:44:44 +0000 UTC Type:0 Mac:52:54:00:71:d9:b7 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:cert-expiration-927384 Clientid:01:52:54:00:71:d9:b7}
	I0328 00:48:17.240264 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined IP address 192.168.72.11 and MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:17.240426 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHHostname
	I0328 00:48:17.242796 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:17.243244 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:d9:b7", ip: ""} in network mk-cert-expiration-927384: {Iface:virbr4 ExpiryTime:2024-03-28 01:44:44 +0000 UTC Type:0 Mac:52:54:00:71:d9:b7 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:cert-expiration-927384 Clientid:01:52:54:00:71:d9:b7}
	I0328 00:48:17.243270 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined IP address 192.168.72.11 and MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:17.243369 1114710 provision.go:143] copyHostCerts
	I0328 00:48:17.243435 1114710 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 00:48:17.243441 1114710 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 00:48:17.243504 1114710 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 00:48:17.243599 1114710 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 00:48:17.243602 1114710 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 00:48:17.243621 1114710 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 00:48:17.243669 1114710 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 00:48:17.243672 1114710 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 00:48:17.243687 1114710 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 00:48:17.243727 1114710 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-927384 san=[127.0.0.1 192.168.72.11 cert-expiration-927384 localhost minikube]
	I0328 00:48:17.487391 1114710 provision.go:177] copyRemoteCerts
	I0328 00:48:17.487441 1114710 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 00:48:17.487467 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHHostname
	I0328 00:48:17.490497 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:17.490839 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:d9:b7", ip: ""} in network mk-cert-expiration-927384: {Iface:virbr4 ExpiryTime:2024-03-28 01:44:44 +0000 UTC Type:0 Mac:52:54:00:71:d9:b7 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:cert-expiration-927384 Clientid:01:52:54:00:71:d9:b7}
	I0328 00:48:17.490858 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined IP address 192.168.72.11 and MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:17.491052 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHPort
	I0328 00:48:17.491316 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHKeyPath
	I0328 00:48:17.491503 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHUsername
	I0328 00:48:17.491670 1114710 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/cert-expiration-927384/id_rsa Username:docker}
	I0328 00:48:17.578099 1114710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 00:48:17.606266 1114710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0328 00:48:17.634910 1114710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0328 00:48:17.667622 1114710 provision.go:87] duration metric: took 431.415492ms to configureAuth
	I0328 00:48:17.667643 1114710 buildroot.go:189] setting minikube options for container-runtime
	I0328 00:48:17.667798 1114710 config.go:182] Loaded profile config "cert-expiration-927384": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:48:17.667878 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHHostname
	I0328 00:48:17.671227 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:17.672689 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:d9:b7", ip: ""} in network mk-cert-expiration-927384: {Iface:virbr4 ExpiryTime:2024-03-28 01:44:44 +0000 UTC Type:0 Mac:52:54:00:71:d9:b7 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:cert-expiration-927384 Clientid:01:52:54:00:71:d9:b7}
	I0328 00:48:17.672703 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined IP address 192.168.72.11 and MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:17.672930 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHPort
	I0328 00:48:17.673118 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHKeyPath
	I0328 00:48:17.673310 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHKeyPath
	I0328 00:48:17.673470 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHUsername
	I0328 00:48:17.673599 1114710 main.go:141] libmachine: Using SSH client type: native
	I0328 00:48:17.673768 1114710 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0328 00:48:17.673778 1114710 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 00:48:18.344225 1114767 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0328 00:48:18.344266 1114767 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0328 00:48:18.344275 1114767 cache.go:56] Caching tarball of preloaded images
	I0328 00:48:18.344369 1114767 preload.go:173] Found /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0328 00:48:18.344383 1114767 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I0328 00:48:18.344493 1114767 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/stopped-upgrade-317492/config.json ...
	I0328 00:48:18.344694 1114767 start.go:360] acquireMachinesLock for stopped-upgrade-317492: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 00:48:23.528143 1114767 start.go:364] duration metric: took 5.183407757s to acquireMachinesLock for "stopped-upgrade-317492"
	I0328 00:48:23.528316 1114767 start.go:96] Skipping create...Using existing machine configuration
	I0328 00:48:23.528357 1114767 fix.go:54] fixHost starting: 
	I0328 00:48:23.528816 1114767 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 00:48:23.528882 1114767 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:48:23.549495 1114767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35341
	I0328 00:48:23.549941 1114767 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:48:23.550490 1114767 main.go:141] libmachine: Using API Version  1
	I0328 00:48:23.550517 1114767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:48:23.550868 1114767 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:48:23.551055 1114767 main.go:141] libmachine: (stopped-upgrade-317492) Calling .DriverName
	I0328 00:48:23.551203 1114767 main.go:141] libmachine: (stopped-upgrade-317492) Calling .GetState
	I0328 00:48:23.552814 1114767 fix.go:112] recreateIfNeeded on stopped-upgrade-317492: state=Stopped err=<nil>
	I0328 00:48:23.552841 1114767 main.go:141] libmachine: (stopped-upgrade-317492) Calling .DriverName
	W0328 00:48:23.553047 1114767 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 00:48:23.555010 1114767 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-317492" ...
	I0328 00:48:19.771430 1114185 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 2264e41f77cc4f28ea256a12349256853bb9b3a2421f649b7e75348e9561392b 94ad7b59f3f0b5484e2c6ba034025fedf7553ccecaad58ee34b0b4a996190bb0 737bc1da2980fc5534519112255b2b2e4ced04b8f71571c699f8e1eedf84b5e9 adae09481ab4f504b6cc144443edbfd307e7f230fd84bab6c985cdc13f9c0558 abd187f6614af02b4d750c2d77c5a355583e1d586c760fd769441be096ef6314 8bd72f355a2ea559cef3868b146cd6fd26cba6952ec6955c36683ada0df0f439 20649081e8f44b376470128e187fc37fb77863492adf3b7be0a2eab8a63bda75 34e4e75cea1d705c5dd964a67ec39a3b7b289f3af5828272a192f3414fdae7d9 f1b80b5f74b883c025391f59cd96e49bfff64827dad73acf3ece5d1fe287785d 77e139bd230aca37656fb0d8cba8540c7743407c2a3bac69db0cc4b451fe225f 87ffaf60a5785562c8cf29e0f09f9f498980669c6d87f59566be0f007672adbe 8e9767e33ea811d3cfe2b94ac0a4e6fd225f7fc34ed1dba1bbeabee2b65a9eb7: (20.095363347s)
	W0328 00:48:19.771532 1114185 kubeadm.go:638] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 2264e41f77cc4f28ea256a12349256853bb9b3a2421f649b7e75348e9561392b 94ad7b59f3f0b5484e2c6ba034025fedf7553ccecaad58ee34b0b4a996190bb0 737bc1da2980fc5534519112255b2b2e4ced04b8f71571c699f8e1eedf84b5e9 adae09481ab4f504b6cc144443edbfd307e7f230fd84bab6c985cdc13f9c0558 abd187f6614af02b4d750c2d77c5a355583e1d586c760fd769441be096ef6314 8bd72f355a2ea559cef3868b146cd6fd26cba6952ec6955c36683ada0df0f439 20649081e8f44b376470128e187fc37fb77863492adf3b7be0a2eab8a63bda75 34e4e75cea1d705c5dd964a67ec39a3b7b289f3af5828272a192f3414fdae7d9 f1b80b5f74b883c025391f59cd96e49bfff64827dad73acf3ece5d1fe287785d 77e139bd230aca37656fb0d8cba8540c7743407c2a3bac69db0cc4b451fe225f 87ffaf60a5785562c8cf29e0f09f9f498980669c6d87f59566be0f007672adbe 8e9767e33ea811d3cfe2b94ac0a4e6fd225f7fc34ed1dba1bbeabee2b65a9eb7: Process exited with status 1
	stdout:
	2264e41f77cc4f28ea256a12349256853bb9b3a2421f649b7e75348e9561392b
	94ad7b59f3f0b5484e2c6ba034025fedf7553ccecaad58ee34b0b4a996190bb0
	737bc1da2980fc5534519112255b2b2e4ced04b8f71571c699f8e1eedf84b5e9
	adae09481ab4f504b6cc144443edbfd307e7f230fd84bab6c985cdc13f9c0558
	abd187f6614af02b4d750c2d77c5a355583e1d586c760fd769441be096ef6314
	8bd72f355a2ea559cef3868b146cd6fd26cba6952ec6955c36683ada0df0f439
	
	stderr:
	E0328 00:48:19.762579    2751 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20649081e8f44b376470128e187fc37fb77863492adf3b7be0a2eab8a63bda75\": container with ID starting with 20649081e8f44b376470128e187fc37fb77863492adf3b7be0a2eab8a63bda75 not found: ID does not exist" containerID="20649081e8f44b376470128e187fc37fb77863492adf3b7be0a2eab8a63bda75"
	time="2024-03-28T00:48:19Z" level=fatal msg="stopping the container \"20649081e8f44b376470128e187fc37fb77863492adf3b7be0a2eab8a63bda75\": rpc error: code = NotFound desc = could not find container \"20649081e8f44b376470128e187fc37fb77863492adf3b7be0a2eab8a63bda75\": container with ID starting with 20649081e8f44b376470128e187fc37fb77863492adf3b7be0a2eab8a63bda75 not found: ID does not exist"
	I0328 00:48:19.771611 1114185 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 00:48:19.811324 1114185 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 00:48:19.822088 1114185 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5651 Mar 28 00:46 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Mar 28 00:46 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Mar 28 00:46 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Mar 28 00:46 /etc/kubernetes/scheduler.conf
	
	I0328 00:48:19.822163 1114185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 00:48:19.832573 1114185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 00:48:19.843072 1114185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 00:48:19.853410 1114185 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0328 00:48:19.853484 1114185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 00:48:19.864297 1114185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 00:48:19.874912 1114185 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0328 00:48:19.874990 1114185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
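
The grep/rm sequence above checks whether each kubeconfig still references the expected control-plane endpoint and removes stale files before kubeadm regenerates them. A small Go sketch of that staleness check (path and endpoint are taken from the log; the helper itself is illustrative, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"strings"
)

// referencesEndpoint reports whether the kubeconfig at path mentions the
// expected API endpoint, mirroring the "sudo grep https://..." checks above.
func referencesEndpoint(path, endpoint string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	return strings.Contains(string(data), endpoint), nil
}

func main() {
	ok, err := referencesEndpoint("/etc/kubernetes/scheduler.conf",
		"https://control-plane.minikube.internal:8443")
	if err != nil {
		fmt.Println("read error:", err)
		return
	}
	if !ok {
		// minikube removes the stale file and regenerates it with
		// "kubeadm init phase kubeconfig", as the log shows.
		fmt.Println("stale kubeconfig, would remove and regenerate")
	}
}
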
	I0328 00:48:19.885478 1114185 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 00:48:19.895925 1114185 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 00:48:19.965645 1114185 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 00:48:20.585228 1114185 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 00:48:20.807630 1114185 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 00:48:20.908586 1114185 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0328 00:48:21.073217 1114185 api_server.go:52] waiting for apiserver process to appear ...
	I0328 00:48:21.073315 1114185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:48:21.573714 1114185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:48:22.074028 1114185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:48:22.118192 1114185 api_server.go:72] duration metric: took 1.044974568s to wait for apiserver process to appear ...
	I0328 00:48:22.118242 1114185 api_server.go:88] waiting for apiserver healthz status ...
	I0328 00:48:22.118298 1114185 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8443/healthz ...
	I0328 00:48:23.281823 1114710 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 00:48:23.281842 1114710 machine.go:97] duration metric: took 6.439903475s to provisionDockerMachine
	I0328 00:48:23.281854 1114710 start.go:293] postStartSetup for "cert-expiration-927384" (driver="kvm2")
	I0328 00:48:23.281866 1114710 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 00:48:23.281886 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .DriverName
	I0328 00:48:23.282357 1114710 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 00:48:23.282385 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHHostname
	I0328 00:48:23.285394 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:23.285823 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:d9:b7", ip: ""} in network mk-cert-expiration-927384: {Iface:virbr4 ExpiryTime:2024-03-28 01:44:44 +0000 UTC Type:0 Mac:52:54:00:71:d9:b7 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:cert-expiration-927384 Clientid:01:52:54:00:71:d9:b7}
	I0328 00:48:23.285846 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined IP address 192.168.72.11 and MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:23.286006 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHPort
	I0328 00:48:23.286295 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHKeyPath
	I0328 00:48:23.286493 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHUsername
	I0328 00:48:23.286638 1114710 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/cert-expiration-927384/id_rsa Username:docker}
	I0328 00:48:23.372998 1114710 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 00:48:23.377676 1114710 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 00:48:23.377693 1114710 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 00:48:23.377759 1114710 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 00:48:23.377829 1114710 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 00:48:23.377910 1114710 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 00:48:23.387664 1114710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 00:48:23.412892 1114710 start.go:296] duration metric: took 131.021134ms for postStartSetup
	I0328 00:48:23.412931 1114710 fix.go:56] duration metric: took 6.598372258s for fixHost
	I0328 00:48:23.412956 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHHostname
	I0328 00:48:23.416152 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:23.416505 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:d9:b7", ip: ""} in network mk-cert-expiration-927384: {Iface:virbr4 ExpiryTime:2024-03-28 01:44:44 +0000 UTC Type:0 Mac:52:54:00:71:d9:b7 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:cert-expiration-927384 Clientid:01:52:54:00:71:d9:b7}
	I0328 00:48:23.416529 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined IP address 192.168.72.11 and MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:23.416745 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHPort
	I0328 00:48:23.416942 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHKeyPath
	I0328 00:48:23.417096 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHKeyPath
	I0328 00:48:23.417296 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHUsername
	I0328 00:48:23.417536 1114710 main.go:141] libmachine: Using SSH client type: native
	I0328 00:48:23.417747 1114710 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0328 00:48:23.417756 1114710 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 00:48:23.527986 1114710 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711586903.525002844
	
	I0328 00:48:23.528001 1114710 fix.go:216] guest clock: 1711586903.525002844
	I0328 00:48:23.528010 1114710 fix.go:229] Guest: 2024-03-28 00:48:23.525002844 +0000 UTC Remote: 2024-03-28 00:48:23.412934105 +0000 UTC m=+6.775419394 (delta=112.068739ms)
	I0328 00:48:23.528040 1114710 fix.go:200] guest clock delta is within tolerance: 112.068739ms
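
The guest-clock check above compares the VM's "date +%s.%N" output against the host clock and accepts the machine only if the skew is within tolerance. A minimal sketch of that comparison (the 2s tolerance in main is an illustrative assumption, not the value minikube uses):

package main

import (
	"fmt"
	"time"
)

// clockDelta returns the absolute skew between the guest-reported time and
// the local clock, as in the "guest clock delta is within tolerance" check above.
func clockDelta(guest time.Time) time.Duration {
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta
}

func main() {
	// The guest reported 1711586903.525002844 seconds since the epoch.
	guest := time.Unix(1711586903, 525002844)
	delta := clockDelta(guest)
	fmt.Printf("delta=%v, within 2s tolerance: %v\n", delta, delta <= 2*time.Second)
}
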
	I0328 00:48:23.528045 1114710 start.go:83] releasing machines lock for "cert-expiration-927384", held for 6.71350596s
	I0328 00:48:23.528076 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .DriverName
	I0328 00:48:23.528358 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetIP
	I0328 00:48:23.531344 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:23.531730 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:d9:b7", ip: ""} in network mk-cert-expiration-927384: {Iface:virbr4 ExpiryTime:2024-03-28 01:44:44 +0000 UTC Type:0 Mac:52:54:00:71:d9:b7 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:cert-expiration-927384 Clientid:01:52:54:00:71:d9:b7}
	I0328 00:48:23.531757 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined IP address 192.168.72.11 and MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:23.531940 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .DriverName
	I0328 00:48:23.532581 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .DriverName
	I0328 00:48:23.532794 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .DriverName
	I0328 00:48:23.532880 1114710 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 00:48:23.532917 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHHostname
	I0328 00:48:23.533011 1114710 ssh_runner.go:195] Run: cat /version.json
	I0328 00:48:23.533031 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHHostname
	I0328 00:48:23.535840 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:23.535860 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:23.536200 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:d9:b7", ip: ""} in network mk-cert-expiration-927384: {Iface:virbr4 ExpiryTime:2024-03-28 01:44:44 +0000 UTC Type:0 Mac:52:54:00:71:d9:b7 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:cert-expiration-927384 Clientid:01:52:54:00:71:d9:b7}
	I0328 00:48:23.536223 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined IP address 192.168.72.11 and MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:23.536247 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:d9:b7", ip: ""} in network mk-cert-expiration-927384: {Iface:virbr4 ExpiryTime:2024-03-28 01:44:44 +0000 UTC Type:0 Mac:52:54:00:71:d9:b7 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:cert-expiration-927384 Clientid:01:52:54:00:71:d9:b7}
	I0328 00:48:23.536259 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined IP address 192.168.72.11 and MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:23.536430 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHPort
	I0328 00:48:23.536593 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHPort
	I0328 00:48:23.536599 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHKeyPath
	I0328 00:48:23.536772 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHKeyPath
	I0328 00:48:23.536842 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHUsername
	I0328 00:48:23.536951 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHUsername
	I0328 00:48:23.537031 1114710 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/cert-expiration-927384/id_rsa Username:docker}
	I0328 00:48:23.537098 1114710 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/cert-expiration-927384/id_rsa Username:docker}
	I0328 00:48:23.656621 1114710 ssh_runner.go:195] Run: systemctl --version
	I0328 00:48:23.663345 1114710 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 00:48:23.833238 1114710 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 00:48:23.842112 1114710 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 00:48:23.842187 1114710 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 00:48:23.852122 1114710 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0328 00:48:23.852141 1114710 start.go:494] detecting cgroup driver to use...
	I0328 00:48:23.852207 1114710 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 00:48:23.876893 1114710 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 00:48:23.894595 1114710 docker.go:217] disabling cri-docker service (if available) ...
	I0328 00:48:23.894662 1114710 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 00:48:23.913915 1114710 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 00:48:23.930497 1114710 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 00:48:24.080850 1114710 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 00:48:24.269252 1114710 docker.go:233] disabling docker service ...
	I0328 00:48:24.269331 1114710 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 00:48:24.292148 1114710 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 00:48:24.309015 1114710 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 00:48:24.465008 1114710 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 00:48:24.622031 1114710 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 00:48:24.641531 1114710 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 00:48:24.670835 1114710 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 00:48:24.670889 1114710 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:48:24.683920 1114710 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 00:48:24.683990 1114710 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:48:24.699470 1114710 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:48:24.718167 1114710 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:48:24.736687 1114710 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 00:48:24.749054 1114710 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:48:24.761395 1114710 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:48:24.776502 1114710 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:48:24.790833 1114710 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 00:48:24.801712 1114710 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 00:48:24.811915 1114710 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:48:24.999034 1114710 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 00:48:25.610148 1114710 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 00:48:25.610215 1114710 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 00:48:25.615764 1114710 start.go:562] Will wait 60s for crictl version
	I0328 00:48:25.615820 1114710 ssh_runner.go:195] Run: which crictl
	I0328 00:48:25.620424 1114710 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 00:48:25.664663 1114710 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 00:48:25.664752 1114710 ssh_runner.go:195] Run: crio --version
	I0328 00:48:25.696335 1114710 ssh_runner.go:195] Run: crio --version
	I0328 00:48:25.733674 1114710 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
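
After rewriting /etc/crio/crio.conf.d/02-crio.conf and restarting the service, the log shows minikube waiting up to 60s for /var/run/crio/crio.sock to appear before probing crictl. A simple Go polling sketch of that wait (the poll interval is an illustrative choice):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a path such as /var/run/crio/crio.sock until it
// exists or the timeout elapses, echoing the "Will wait 60s for socket path"
// step above.
func waitForSocket(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}
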
	I0328 00:48:24.940318 1114185 api_server.go:279] https://192.168.39.233:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 00:48:24.940359 1114185 api_server.go:103] status: https://192.168.39.233:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 00:48:24.940376 1114185 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8443/healthz ...
	I0328 00:48:25.031474 1114185 api_server.go:279] https://192.168.39.233:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 00:48:25.031523 1114185 api_server.go:103] status: https://192.168.39.233:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 00:48:25.118721 1114185 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8443/healthz ...
	I0328 00:48:25.123900 1114185 api_server.go:279] https://192.168.39.233:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 00:48:25.123939 1114185 api_server.go:103] status: https://192.168.39.233:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 00:48:25.618327 1114185 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8443/healthz ...
	I0328 00:48:25.624159 1114185 api_server.go:279] https://192.168.39.233:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 00:48:25.624195 1114185 api_server.go:103] status: https://192.168.39.233:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 00:48:26.118361 1114185 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8443/healthz ...
	I0328 00:48:26.128584 1114185 api_server.go:279] https://192.168.39.233:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 00:48:26.128624 1114185 api_server.go:103] status: https://192.168.39.233:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 00:48:26.619210 1114185 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8443/healthz ...
	I0328 00:48:26.625842 1114185 api_server.go:279] https://192.168.39.233:8443/healthz returned 200:
	ok
	I0328 00:48:26.641524 1114185 api_server.go:141] control plane version: v1.29.3
	I0328 00:48:26.641559 1114185 api_server.go:131] duration metric: took 4.523307907s to wait for apiserver health ...
	I0328 00:48:26.641579 1114185 cni.go:84] Creating CNI manager for ""
	I0328 00:48:26.641588 1114185 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 00:48:26.643324 1114185 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
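
The healthz sequence above is a polling loop: early 403 ("system:anonymous") and 500 ("[-]poststarthook/rbac/bootstrap-roles failed") responses are treated as not-ready, and the wait ends once /healthz returns 200. A sketch of such a loop (the URL, timeout, interval, and TLS handling are assumptions for illustration, not minikube's exact implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the timeout passes. Non-200 responses, as in the log above, are treated
// as "not ready yet" and retried.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The test VM's apiserver presents a self-signed certificate;
			// skipping verification is for this sketch only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.233:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
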
	I0328 00:48:25.735219 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetIP
	I0328 00:48:25.738562 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:25.738932 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:d9:b7", ip: ""} in network mk-cert-expiration-927384: {Iface:virbr4 ExpiryTime:2024-03-28 01:44:44 +0000 UTC Type:0 Mac:52:54:00:71:d9:b7 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:cert-expiration-927384 Clientid:01:52:54:00:71:d9:b7}
	I0328 00:48:25.738954 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined IP address 192.168.72.11 and MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:25.739258 1114710 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0328 00:48:25.744295 1114710 kubeadm.go:877] updating cluster {Name:cert-expiration-927384 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:cert-expiration-927384 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 00:48:25.744410 1114710 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 00:48:25.744466 1114710 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 00:48:25.788767 1114710 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 00:48:25.788784 1114710 crio.go:433] Images already preloaded, skipping extraction
	I0328 00:48:25.788844 1114710 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 00:48:25.828802 1114710 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 00:48:25.828821 1114710 cache_images.go:84] Images are preloaded, skipping loading
	I0328 00:48:25.828831 1114710 kubeadm.go:928] updating node { 192.168.72.11 8443 v1.29.3 crio true true} ...
	I0328 00:48:25.828973 1114710 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-927384 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:cert-expiration-927384 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 00:48:25.829058 1114710 ssh_runner.go:195] Run: crio config
	I0328 00:48:25.888955 1114710 cni.go:84] Creating CNI manager for ""
	I0328 00:48:25.888965 1114710 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 00:48:25.888973 1114710 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 00:48:25.888993 1114710 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.11 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-927384 NodeName:cert-expiration-927384 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 00:48:25.889149 1114710 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-927384"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 00:48:25.889219 1114710 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 00:48:25.902444 1114710 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 00:48:25.902507 1114710 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 00:48:25.915930 1114710 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0328 00:48:25.938575 1114710 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 00:48:25.964637 1114710 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0328 00:48:25.985872 1114710 ssh_runner.go:195] Run: grep 192.168.72.11	control-plane.minikube.internal$ /etc/hosts
	I0328 00:48:25.991204 1114710 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:48:26.221442 1114710 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 00:48:26.296406 1114710 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384 for IP: 192.168.72.11
	I0328 00:48:26.296423 1114710 certs.go:194] generating shared ca certs ...
	I0328 00:48:26.296444 1114710 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:48:26.296629 1114710 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 00:48:26.296677 1114710 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 00:48:26.296686 1114710 certs.go:256] generating profile certs ...
	W0328 00:48:26.296839 1114710 out.go:239] ! Certificate client.crt has expired. Generating a new one...
	I0328 00:48:26.296868 1114710 certs.go:624] cert expired /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/client.crt: expiration: 2024-03-28 00:48:00 +0000 UTC, now: 2024-03-28 00:48:26.296862341 +0000 UTC m=+9.659347632
	I0328 00:48:26.296993 1114710 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/client.key
	I0328 00:48:26.297019 1114710 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/client.crt with IP's: []
	I0328 00:48:26.443460 1114710 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/client.crt ...
	I0328 00:48:26.443486 1114710 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/client.crt: {Name:mk0f6532c9e9a958037964474d03319b24c2fad7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:48:26.443706 1114710 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/client.key ...
	I0328 00:48:26.443722 1114710 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/client.key: {Name:mk07f937216548150c24811b345561ec6aaff33f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0328 00:48:26.443980 1114710 out.go:239] ! Certificate apiserver.crt.fd1a1cfa has expired. Generating a new one...
	I0328 00:48:26.444005 1114710 certs.go:624] cert expired /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/apiserver.crt.fd1a1cfa: expiration: 2024-03-28 00:48:00 +0000 UTC, now: 2024-03-28 00:48:26.443997439 +0000 UTC m=+9.806482725
	I0328 00:48:26.444111 1114710 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/apiserver.key.fd1a1cfa
	I0328 00:48:26.444137 1114710 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/apiserver.crt.fd1a1cfa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.11]
	I0328 00:48:23.556444 1114767 main.go:141] libmachine: (stopped-upgrade-317492) Calling .Start
	I0328 00:48:23.556635 1114767 main.go:141] libmachine: (stopped-upgrade-317492) Ensuring networks are active...
	I0328 00:48:23.557570 1114767 main.go:141] libmachine: (stopped-upgrade-317492) Ensuring network default is active
	I0328 00:48:23.558016 1114767 main.go:141] libmachine: (stopped-upgrade-317492) Ensuring network mk-stopped-upgrade-317492 is active
	I0328 00:48:23.558898 1114767 main.go:141] libmachine: (stopped-upgrade-317492) Getting domain xml...
	I0328 00:48:23.559658 1114767 main.go:141] libmachine: (stopped-upgrade-317492) Creating domain...
	I0328 00:48:24.888082 1114767 main.go:141] libmachine: (stopped-upgrade-317492) Waiting to get IP...
	I0328 00:48:24.889313 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | domain stopped-upgrade-317492 has defined MAC address 52:54:00:71:b9:38 in network mk-stopped-upgrade-317492
	I0328 00:48:24.889868 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | unable to find current IP address of domain stopped-upgrade-317492 in network mk-stopped-upgrade-317492
	I0328 00:48:24.889995 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | I0328 00:48:24.889868 1114813 retry.go:31] will retry after 188.329756ms: waiting for machine to come up
	I0328 00:48:25.080585 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | domain stopped-upgrade-317492 has defined MAC address 52:54:00:71:b9:38 in network mk-stopped-upgrade-317492
	I0328 00:48:25.081273 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | unable to find current IP address of domain stopped-upgrade-317492 in network mk-stopped-upgrade-317492
	I0328 00:48:25.081339 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | I0328 00:48:25.081228 1114813 retry.go:31] will retry after 290.192227ms: waiting for machine to come up
	I0328 00:48:25.373009 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | domain stopped-upgrade-317492 has defined MAC address 52:54:00:71:b9:38 in network mk-stopped-upgrade-317492
	I0328 00:48:25.373630 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | unable to find current IP address of domain stopped-upgrade-317492 in network mk-stopped-upgrade-317492
	I0328 00:48:25.373659 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | I0328 00:48:25.373591 1114813 retry.go:31] will retry after 419.414213ms: waiting for machine to come up
	I0328 00:48:25.794224 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | domain stopped-upgrade-317492 has defined MAC address 52:54:00:71:b9:38 in network mk-stopped-upgrade-317492
	I0328 00:48:25.794704 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | unable to find current IP address of domain stopped-upgrade-317492 in network mk-stopped-upgrade-317492
	I0328 00:48:25.794735 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | I0328 00:48:25.794645 1114813 retry.go:31] will retry after 520.496062ms: waiting for machine to come up
	I0328 00:48:26.316482 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | domain stopped-upgrade-317492 has defined MAC address 52:54:00:71:b9:38 in network mk-stopped-upgrade-317492
	I0328 00:48:26.317049 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | unable to find current IP address of domain stopped-upgrade-317492 in network mk-stopped-upgrade-317492
	I0328 00:48:26.317084 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | I0328 00:48:26.317024 1114813 retry.go:31] will retry after 492.625594ms: waiting for machine to come up
	I0328 00:48:26.811905 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | domain stopped-upgrade-317492 has defined MAC address 52:54:00:71:b9:38 in network mk-stopped-upgrade-317492
	I0328 00:48:26.812580 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | unable to find current IP address of domain stopped-upgrade-317492 in network mk-stopped-upgrade-317492
	I0328 00:48:26.812605 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | I0328 00:48:26.812539 1114813 retry.go:31] will retry after 807.604901ms: waiting for machine to come up
	I0328 00:48:27.621635 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | domain stopped-upgrade-317492 has defined MAC address 52:54:00:71:b9:38 in network mk-stopped-upgrade-317492
	I0328 00:48:27.622272 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | unable to find current IP address of domain stopped-upgrade-317492 in network mk-stopped-upgrade-317492
	I0328 00:48:27.622347 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | I0328 00:48:27.622240 1114813 retry.go:31] will retry after 1.13820598s: waiting for machine to come up
	I0328 00:48:26.644600 1114185 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 00:48:26.661475 1114185 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 00:48:26.687271 1114185 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 00:48:26.699141 1114185 system_pods.go:59] 6 kube-system pods found
	I0328 00:48:26.699187 1114185 system_pods.go:61] "coredns-76f75df574-d9zx2" [dbcb1807-c16a-428c-9292-e7f4a8ff9d00] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 00:48:26.699198 1114185 system_pods.go:61] "etcd-pause-040046" [bde312dd-644f-479d-ba86-c5e7a3da71b2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 00:48:26.699211 1114185 system_pods.go:61] "kube-apiserver-pause-040046" [f7559fbc-70d2-4624-851a-07ccc3aad759] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 00:48:26.699222 1114185 system_pods.go:61] "kube-controller-manager-pause-040046" [cba5b940-82ef-434a-83ee-c1be91954207] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 00:48:26.699233 1114185 system_pods.go:61] "kube-proxy-5tlrp" [249cdd5d-91ae-4248-9a00-f4959c78b3b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 00:48:26.699244 1114185 system_pods.go:61] "kube-scheduler-pause-040046" [337f7c7e-1319-44f7-a97f-dc80a972440b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 00:48:26.699253 1114185 system_pods.go:74] duration metric: took 11.957859ms to wait for pod list to return data ...
	I0328 00:48:26.699266 1114185 node_conditions.go:102] verifying NodePressure condition ...
	I0328 00:48:26.703810 1114185 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 00:48:26.703843 1114185 node_conditions.go:123] node cpu capacity is 2
	I0328 00:48:26.703855 1114185 node_conditions.go:105] duration metric: took 4.583441ms to run NodePressure ...
	I0328 00:48:26.703877 1114185 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 00:48:27.055031 1114185 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 00:48:27.061811 1114185 kubeadm.go:733] kubelet initialised
	I0328 00:48:27.061842 1114185 kubeadm.go:734] duration metric: took 6.769991ms waiting for restarted kubelet to initialise ...
	I0328 00:48:27.061855 1114185 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 00:48:27.067758 1114185 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-d9zx2" in "kube-system" namespace to be "Ready" ...
	I0328 00:48:29.077210 1114185 pod_ready.go:102] pod "coredns-76f75df574-d9zx2" in "kube-system" namespace has status "Ready":"False"
	I0328 00:48:26.811373 1114710 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/apiserver.crt.fd1a1cfa ...
	I0328 00:48:26.811399 1114710 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/apiserver.crt.fd1a1cfa: {Name:mk34ba619ba3ddf930ef581036b0ac20e89ccba2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:48:26.811613 1114710 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/apiserver.key.fd1a1cfa ...
	I0328 00:48:26.811628 1114710 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/apiserver.key.fd1a1cfa: {Name:mk218bda22114738f0983db810d48c6bdf794c73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:48:26.811718 1114710 certs.go:381] copying /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/apiserver.crt.fd1a1cfa -> /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/apiserver.crt
	I0328 00:48:26.811908 1114710 certs.go:385] copying /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/apiserver.key.fd1a1cfa -> /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/apiserver.key
	W0328 00:48:26.812153 1114710 out.go:239] ! Certificate proxy-client.crt has expired. Generating a new one...
	I0328 00:48:26.812177 1114710 certs.go:624] cert expired /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/proxy-client.crt: expiration: 2024-03-28 00:48:01 +0000 UTC, now: 2024-03-28 00:48:26.812170826 +0000 UTC m=+10.174656110
	I0328 00:48:26.812267 1114710 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/proxy-client.key
	I0328 00:48:26.812287 1114710 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/proxy-client.crt with IP's: []
	I0328 00:48:26.971956 1114710 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/proxy-client.crt ...
	I0328 00:48:26.971974 1114710 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/proxy-client.crt: {Name:mk349866c21bb285003b35aabf65b60af8539bcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:48:26.972185 1114710 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/proxy-client.key ...
	I0328 00:48:26.972201 1114710 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/proxy-client.key: {Name:mk81959bdb3e6d8294e07a9a83e62d52d12e261a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:48:26.972375 1114710 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 00:48:26.972408 1114710 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 00:48:26.972415 1114710 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 00:48:26.972435 1114710 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 00:48:26.972453 1114710 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 00:48:26.972469 1114710 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 00:48:26.972516 1114710 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 00:48:26.973131 1114710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 00:48:27.091720 1114710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 00:48:27.224044 1114710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 00:48:27.279385 1114710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 00:48:27.314593 1114710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0328 00:48:27.344639 1114710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0328 00:48:27.384378 1114710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 00:48:27.420731 1114710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 00:48:27.458602 1114710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 00:48:27.498938 1114710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 00:48:27.535082 1114710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 00:48:27.607399 1114710 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 00:48:27.653003 1114710 ssh_runner.go:195] Run: openssl version
	I0328 00:48:27.662434 1114710 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 00:48:27.683708 1114710 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 00:48:27.697564 1114710 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 00:48:27.697635 1114710 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 00:48:27.707706 1114710 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 00:48:27.727833 1114710 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 00:48:27.745718 1114710 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:48:27.752828 1114710 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:48:27.752898 1114710 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:48:27.759630 1114710 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 00:48:27.783130 1114710 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 00:48:27.825332 1114710 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 00:48:27.833006 1114710 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 00:48:27.833070 1114710 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 00:48:27.846584 1114710 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 00:48:27.862925 1114710 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 00:48:27.869506 1114710 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 00:48:27.875761 1114710 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 00:48:27.881627 1114710 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 00:48:27.887592 1114710 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 00:48:27.893573 1114710 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 00:48:27.899382 1114710 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0328 00:48:27.905228 1114710 kubeadm.go:391] StartCluster: {Name:cert-expiration-927384 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.29.3 ClusterName:cert-expiration-927384 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Moun
tUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:48:27.905320 1114710 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 00:48:27.905369 1114710 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 00:48:27.958462 1114710 cri.go:89] found id: "6c8ea2d6f48529e858a501fb02ae3646db3ea04d62a986a8d269d55a4a22f959"
	I0328 00:48:27.958480 1114710 cri.go:89] found id: "889d56ee1c25d534c431807945c10cd2cf3845d18061eb50cdd7e96f744c670d"
	I0328 00:48:27.958483 1114710 cri.go:89] found id: "7df76a570371f45d1c809bb8525032dfa3e4d0ac34980128df2913a916d01050"
	I0328 00:48:27.958486 1114710 cri.go:89] found id: "0a4d6d97b5876b7f41ae8f8b2fc9e7706e252cf372bbb63716d7325ee7dcde80"
	I0328 00:48:27.958489 1114710 cri.go:89] found id: "a232770ca991bedb522785fcbd1be0f9b04c0df3497ee18bc8ea4f4a10fef9b9"
	I0328 00:48:27.958492 1114710 cri.go:89] found id: "eaef1aadcb512b19f0203b051067c9093fc6568665498c540d4b6e65ce102f3e"
	I0328 00:48:27.958494 1114710 cri.go:89] found id: "3c458acc4d38eae614292a1d35e7215277fac9d47d26f58b2998c9361ecd9f28"
	I0328 00:48:27.958497 1114710 cri.go:89] found id: "de58f93d3a3b6bf7c15ff90af303f39c5bbde9e729d2309a365420e0ecea6477"
	I0328 00:48:27.958499 1114710 cri.go:89] found id: "547d29e826214966219a18ea23219f0fbe13aaff13e1cee4027930e139040665"
	I0328 00:48:27.958505 1114710 cri.go:89] found id: "6bd385b1ed854e674fc2ee6648c9f77849dd1cbae0ad6b5e98d4c1559cb3db21"
	I0328 00:48:27.958508 1114710 cri.go:89] found id: "aa91868317a1e3801d3222ae012e217a242e00166e053e3698394112cf1e0e34"
	I0328 00:48:27.958511 1114710 cri.go:89] found id: "7e93cba6aefcba5288d8312b6c961538a517c3e6b0b1edfeb2166ef8cc0230c1"
	I0328 00:48:27.958513 1114710 cri.go:89] found id: "aad8a014a45c34513990a53f3c1e213badb56faea151cc23c074451973db76bf"
	I0328 00:48:27.958516 1114710 cri.go:89] found id: ""
	I0328 00:48:27.958576 1114710 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 28 00:48:45 pause-040046 crio[2128]: time="2024-03-28 00:48:45.315707181Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711586925315635718,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4d46fb02-2786-4c04-b666-4376b713cca3 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:48:45 pause-040046 crio[2128]: time="2024-03-28 00:48:45.316694840Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=42bd1769-c373-466b-8149-763a41ae27ea name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:48:45 pause-040046 crio[2128]: time="2024-03-28 00:48:45.316773075Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=42bd1769-c373-466b-8149-763a41ae27ea name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:48:45 pause-040046 crio[2128]: time="2024-03-28 00:48:45.317114443Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad125daaf884447921ace6baa3fac20fb9080244b462a3e6c1dfa81b3dd1bf9e,PodSandboxId:f2711f2561563c9dc0b89b71de750e6e8b0461558ecaafb570f43d5958900b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711586906254176473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-d9zx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcb1807-c16a-428c-9292-e7f4a8ff9d00,},Annotations:map[string]string{io.kubernetes.container.hash: 7530ac82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d21716d3db96dc4becdcf1946eb6aeb2bad5d0e6a01d4704807ca7b1c717663,PodSandboxId:0bfdaf3c6731dbd36344a7d6856787e94a075c01506f983f9693dfea067ac12e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711586906244232513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5tlrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 249cdd5d-91ae-4248-9a00-f4959c78b3b2,},Annotations:map[string]string{io.kubernetes.container.hash: 62269400,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a953be1a41bf293fed13e0cb2e03fd3a5cd17c9b79aee46a3884d75e51d7453,PodSandboxId:e6197137bbd50e78913254d90cc8874393f7bc3d1e7ebf5bc4e59c5573bd8536,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711586901570748638,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf1ef5c54fdae67febb96c1cd5ff3e5f,},Annot
ations:map[string]string{io.kubernetes.container.hash: 69e7280e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2924b060928092dbbd9bd04663a3b4db9488ebea45a3fcf8f014a444cd2902a,PodSandboxId:246a8e4a4bfce5313868706064d22d2da4b72c2d7a4a54fcf1f721c538a7e5b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711586901600556046,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcc4c94bbad251b426f15a339077c36,},Annotations:map[string]
string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a15940c4f84a4954a314771f1309e8bbf59250930010e92dc67bbc616dc6099,PodSandboxId:978617d4de30fbcfea266b10efdaaa2e4aadab97a9daf324e7ebe7cf66fa6cf3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711586901585944282,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d72e7312da1738efc1668540f0e695a,},Annotations:map[string]string{io.kubernet
es.container.hash: 11ee3fec,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b6176f7ebcce772df04ac9936e524988d01f46c826a569ca6ab05daf111e19f,PodSandboxId:3a57d6e0c813adf153ea3ab13838ff43bb350518484bc795616b17c4dd407b3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711586901580910804,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42f3ca6be942820b85cc87c91c1ac4b8,},Annotations:map[string]string{io
.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2264e41f77cc4f28ea256a12349256853bb9b3a2421f649b7e75348e9561392b,PodSandboxId:f2711f2561563c9dc0b89b71de750e6e8b0461558ecaafb570f43d5958900b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711586878330422134,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-d9zx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcb1807-c16a-428c-9292-e7f4a8ff9d00,},Annotations:map[string]string{io.kubernetes.container.hash: 7530
ac82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:737bc1da2980fc5534519112255b2b2e4ced04b8f71571c699f8e1eedf84b5e9,PodSandboxId:0bfdaf3c6731dbd36344a7d6856787e94a075c01506f983f9693dfea067ac12e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711586877659540751,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-5tlrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 249cdd5d-91ae-4248-9a00-f4959c78b3b2,},Annotations:map[string]string{io.kubernetes.container.hash: 62269400,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adae09481ab4f504b6cc144443edbfd307e7f230fd84bab6c985cdc13f9c0558,PodSandboxId:246a8e4a4bfce5313868706064d22d2da4b72c2d7a4a54fcf1f721c538a7e5b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711586877603939716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcc4c94bbad251b426f15a339077c36,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ad7b59f3f0b5484e2c6ba034025fedf7553ccecaad58ee34b0b4a996190bb0,PodSandboxId:978617d4de30fbcfea266b10efdaaa2e4aadab97a9daf324e7ebe7cf66fa6cf3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711586877716904505,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-040046,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d72e7312da1738efc1668540f0e695a,},Annotations:map[string]string{io.kubernetes.container.hash: 11ee3fec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd187f6614af02b4d750c2d77c5a355583e1d586c760fd769441be096ef6314,PodSandboxId:e6197137bbd50e78913254d90cc8874393f7bc3d1e7ebf5bc4e59c5573bd8536,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711586877575308986,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: bf1ef5c54fdae67febb96c1cd5ff3e5f,},Annotations:map[string]string{io.kubernetes.container.hash: 69e7280e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd72f355a2ea559cef3868b146cd6fd26cba6952ec6955c36683ada0df0f439,PodSandboxId:3a57d6e0c813adf153ea3ab13838ff43bb350518484bc795616b17c4dd407b3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711586877565195150,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 42f3ca6be942820b85cc87c91c1ac4b8,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=42bd1769-c373-466b-8149-763a41ae27ea name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:48:45 pause-040046 crio[2128]: time="2024-03-28 00:48:45.373647738Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7b78413f-7630-41f8-962c-95993447a30d name=/runtime.v1.RuntimeService/Version
	Mar 28 00:48:45 pause-040046 crio[2128]: time="2024-03-28 00:48:45.373741065Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7b78413f-7630-41f8-962c-95993447a30d name=/runtime.v1.RuntimeService/Version
	Mar 28 00:48:45 pause-040046 crio[2128]: time="2024-03-28 00:48:45.375423564Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d7dc7c43-41c9-45f3-a1f8-f3663bab257b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:48:45 pause-040046 crio[2128]: time="2024-03-28 00:48:45.376040523Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711586925375946439,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d7dc7c43-41c9-45f3-a1f8-f3663bab257b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:48:45 pause-040046 crio[2128]: time="2024-03-28 00:48:45.376675096Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9aa77557-d836-4a46-931b-efac722cf2e6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:48:45 pause-040046 crio[2128]: time="2024-03-28 00:48:45.376750227Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9aa77557-d836-4a46-931b-efac722cf2e6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:48:45 pause-040046 crio[2128]: time="2024-03-28 00:48:45.377050216Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad125daaf884447921ace6baa3fac20fb9080244b462a3e6c1dfa81b3dd1bf9e,PodSandboxId:f2711f2561563c9dc0b89b71de750e6e8b0461558ecaafb570f43d5958900b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711586906254176473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-d9zx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcb1807-c16a-428c-9292-e7f4a8ff9d00,},Annotations:map[string]string{io.kubernetes.container.hash: 7530ac82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d21716d3db96dc4becdcf1946eb6aeb2bad5d0e6a01d4704807ca7b1c717663,PodSandboxId:0bfdaf3c6731dbd36344a7d6856787e94a075c01506f983f9693dfea067ac12e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711586906244232513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5tlrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 249cdd5d-91ae-4248-9a00-f4959c78b3b2,},Annotations:map[string]string{io.kubernetes.container.hash: 62269400,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a953be1a41bf293fed13e0cb2e03fd3a5cd17c9b79aee46a3884d75e51d7453,PodSandboxId:e6197137bbd50e78913254d90cc8874393f7bc3d1e7ebf5bc4e59c5573bd8536,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711586901570748638,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf1ef5c54fdae67febb96c1cd5ff3e5f,},Annot
ations:map[string]string{io.kubernetes.container.hash: 69e7280e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2924b060928092dbbd9bd04663a3b4db9488ebea45a3fcf8f014a444cd2902a,PodSandboxId:246a8e4a4bfce5313868706064d22d2da4b72c2d7a4a54fcf1f721c538a7e5b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711586901600556046,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcc4c94bbad251b426f15a339077c36,},Annotations:map[string]
string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a15940c4f84a4954a314771f1309e8bbf59250930010e92dc67bbc616dc6099,PodSandboxId:978617d4de30fbcfea266b10efdaaa2e4aadab97a9daf324e7ebe7cf66fa6cf3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711586901585944282,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d72e7312da1738efc1668540f0e695a,},Annotations:map[string]string{io.kubernet
es.container.hash: 11ee3fec,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b6176f7ebcce772df04ac9936e524988d01f46c826a569ca6ab05daf111e19f,PodSandboxId:3a57d6e0c813adf153ea3ab13838ff43bb350518484bc795616b17c4dd407b3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711586901580910804,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42f3ca6be942820b85cc87c91c1ac4b8,},Annotations:map[string]string{io
.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2264e41f77cc4f28ea256a12349256853bb9b3a2421f649b7e75348e9561392b,PodSandboxId:f2711f2561563c9dc0b89b71de750e6e8b0461558ecaafb570f43d5958900b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711586878330422134,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-d9zx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcb1807-c16a-428c-9292-e7f4a8ff9d00,},Annotations:map[string]string{io.kubernetes.container.hash: 7530
ac82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:737bc1da2980fc5534519112255b2b2e4ced04b8f71571c699f8e1eedf84b5e9,PodSandboxId:0bfdaf3c6731dbd36344a7d6856787e94a075c01506f983f9693dfea067ac12e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711586877659540751,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-5tlrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 249cdd5d-91ae-4248-9a00-f4959c78b3b2,},Annotations:map[string]string{io.kubernetes.container.hash: 62269400,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adae09481ab4f504b6cc144443edbfd307e7f230fd84bab6c985cdc13f9c0558,PodSandboxId:246a8e4a4bfce5313868706064d22d2da4b72c2d7a4a54fcf1f721c538a7e5b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711586877603939716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcc4c94bbad251b426f15a339077c36,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ad7b59f3f0b5484e2c6ba034025fedf7553ccecaad58ee34b0b4a996190bb0,PodSandboxId:978617d4de30fbcfea266b10efdaaa2e4aadab97a9daf324e7ebe7cf66fa6cf3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711586877716904505,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-040046,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d72e7312da1738efc1668540f0e695a,},Annotations:map[string]string{io.kubernetes.container.hash: 11ee3fec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd187f6614af02b4d750c2d77c5a355583e1d586c760fd769441be096ef6314,PodSandboxId:e6197137bbd50e78913254d90cc8874393f7bc3d1e7ebf5bc4e59c5573bd8536,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711586877575308986,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: bf1ef5c54fdae67febb96c1cd5ff3e5f,},Annotations:map[string]string{io.kubernetes.container.hash: 69e7280e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd72f355a2ea559cef3868b146cd6fd26cba6952ec6955c36683ada0df0f439,PodSandboxId:3a57d6e0c813adf153ea3ab13838ff43bb350518484bc795616b17c4dd407b3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711586877565195150,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 42f3ca6be942820b85cc87c91c1ac4b8,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9aa77557-d836-4a46-931b-efac722cf2e6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:48:45 pause-040046 crio[2128]: time="2024-03-28 00:48:45.429275204Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a297cb44-7a4c-4626-888a-994a6c9ddbc1 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:48:45 pause-040046 crio[2128]: time="2024-03-28 00:48:45.429380639Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a297cb44-7a4c-4626-888a-994a6c9ddbc1 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:48:45 pause-040046 crio[2128]: time="2024-03-28 00:48:45.431364456Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c4f3e0d4-1808-4f87-87b4-c5826f2dad52 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:48:45 pause-040046 crio[2128]: time="2024-03-28 00:48:45.432048236Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711586925431949533,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c4f3e0d4-1808-4f87-87b4-c5826f2dad52 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:48:45 pause-040046 crio[2128]: time="2024-03-28 00:48:45.432731592Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c01c19b-0af5-4046-8858-65b1ef1c2d2b name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:48:45 pause-040046 crio[2128]: time="2024-03-28 00:48:45.432856068Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c01c19b-0af5-4046-8858-65b1ef1c2d2b name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:48:45 pause-040046 crio[2128]: time="2024-03-28 00:48:45.433350589Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad125daaf884447921ace6baa3fac20fb9080244b462a3e6c1dfa81b3dd1bf9e,PodSandboxId:f2711f2561563c9dc0b89b71de750e6e8b0461558ecaafb570f43d5958900b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711586906254176473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-d9zx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcb1807-c16a-428c-9292-e7f4a8ff9d00,},Annotations:map[string]string{io.kubernetes.container.hash: 7530ac82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d21716d3db96dc4becdcf1946eb6aeb2bad5d0e6a01d4704807ca7b1c717663,PodSandboxId:0bfdaf3c6731dbd36344a7d6856787e94a075c01506f983f9693dfea067ac12e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711586906244232513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5tlrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 249cdd5d-91ae-4248-9a00-f4959c78b3b2,},Annotations:map[string]string{io.kubernetes.container.hash: 62269400,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a953be1a41bf293fed13e0cb2e03fd3a5cd17c9b79aee46a3884d75e51d7453,PodSandboxId:e6197137bbd50e78913254d90cc8874393f7bc3d1e7ebf5bc4e59c5573bd8536,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711586901570748638,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf1ef5c54fdae67febb96c1cd5ff3e5f,},Annot
ations:map[string]string{io.kubernetes.container.hash: 69e7280e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2924b060928092dbbd9bd04663a3b4db9488ebea45a3fcf8f014a444cd2902a,PodSandboxId:246a8e4a4bfce5313868706064d22d2da4b72c2d7a4a54fcf1f721c538a7e5b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711586901600556046,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcc4c94bbad251b426f15a339077c36,},Annotations:map[string]
string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a15940c4f84a4954a314771f1309e8bbf59250930010e92dc67bbc616dc6099,PodSandboxId:978617d4de30fbcfea266b10efdaaa2e4aadab97a9daf324e7ebe7cf66fa6cf3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711586901585944282,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d72e7312da1738efc1668540f0e695a,},Annotations:map[string]string{io.kubernet
es.container.hash: 11ee3fec,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b6176f7ebcce772df04ac9936e524988d01f46c826a569ca6ab05daf111e19f,PodSandboxId:3a57d6e0c813adf153ea3ab13838ff43bb350518484bc795616b17c4dd407b3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711586901580910804,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42f3ca6be942820b85cc87c91c1ac4b8,},Annotations:map[string]string{io
.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2264e41f77cc4f28ea256a12349256853bb9b3a2421f649b7e75348e9561392b,PodSandboxId:f2711f2561563c9dc0b89b71de750e6e8b0461558ecaafb570f43d5958900b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711586878330422134,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-d9zx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcb1807-c16a-428c-9292-e7f4a8ff9d00,},Annotations:map[string]string{io.kubernetes.container.hash: 7530
ac82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:737bc1da2980fc5534519112255b2b2e4ced04b8f71571c699f8e1eedf84b5e9,PodSandboxId:0bfdaf3c6731dbd36344a7d6856787e94a075c01506f983f9693dfea067ac12e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711586877659540751,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-5tlrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 249cdd5d-91ae-4248-9a00-f4959c78b3b2,},Annotations:map[string]string{io.kubernetes.container.hash: 62269400,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adae09481ab4f504b6cc144443edbfd307e7f230fd84bab6c985cdc13f9c0558,PodSandboxId:246a8e4a4bfce5313868706064d22d2da4b72c2d7a4a54fcf1f721c538a7e5b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711586877603939716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcc4c94bbad251b426f15a339077c36,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ad7b59f3f0b5484e2c6ba034025fedf7553ccecaad58ee34b0b4a996190bb0,PodSandboxId:978617d4de30fbcfea266b10efdaaa2e4aadab97a9daf324e7ebe7cf66fa6cf3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711586877716904505,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-040046,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d72e7312da1738efc1668540f0e695a,},Annotations:map[string]string{io.kubernetes.container.hash: 11ee3fec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd187f6614af02b4d750c2d77c5a355583e1d586c760fd769441be096ef6314,PodSandboxId:e6197137bbd50e78913254d90cc8874393f7bc3d1e7ebf5bc4e59c5573bd8536,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711586877575308986,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: bf1ef5c54fdae67febb96c1cd5ff3e5f,},Annotations:map[string]string{io.kubernetes.container.hash: 69e7280e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd72f355a2ea559cef3868b146cd6fd26cba6952ec6955c36683ada0df0f439,PodSandboxId:3a57d6e0c813adf153ea3ab13838ff43bb350518484bc795616b17c4dd407b3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711586877565195150,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 42f3ca6be942820b85cc87c91c1ac4b8,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c01c19b-0af5-4046-8858-65b1ef1c2d2b name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:48:45 pause-040046 crio[2128]: time="2024-03-28 00:48:45.485572710Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6e9fe4fa-f0bd-4592-a4a7-2c864e8ecdb5 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:48:45 pause-040046 crio[2128]: time="2024-03-28 00:48:45.485709870Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6e9fe4fa-f0bd-4592-a4a7-2c864e8ecdb5 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:48:45 pause-040046 crio[2128]: time="2024-03-28 00:48:45.487937978Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f873d597-eaef-4c24-b0dd-6f4c26551378 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:48:45 pause-040046 crio[2128]: time="2024-03-28 00:48:45.488562113Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711586925488527115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f873d597-eaef-4c24-b0dd-6f4c26551378 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:48:45 pause-040046 crio[2128]: time="2024-03-28 00:48:45.489905051Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=913288a3-26ee-49a9-8cf3-a181a59be8d8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:48:45 pause-040046 crio[2128]: time="2024-03-28 00:48:45.490052306Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=913288a3-26ee-49a9-8cf3-a181a59be8d8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:48:45 pause-040046 crio[2128]: time="2024-03-28 00:48:45.490425107Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad125daaf884447921ace6baa3fac20fb9080244b462a3e6c1dfa81b3dd1bf9e,PodSandboxId:f2711f2561563c9dc0b89b71de750e6e8b0461558ecaafb570f43d5958900b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711586906254176473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-d9zx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcb1807-c16a-428c-9292-e7f4a8ff9d00,},Annotations:map[string]string{io.kubernetes.container.hash: 7530ac82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d21716d3db96dc4becdcf1946eb6aeb2bad5d0e6a01d4704807ca7b1c717663,PodSandboxId:0bfdaf3c6731dbd36344a7d6856787e94a075c01506f983f9693dfea067ac12e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711586906244232513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5tlrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 249cdd5d-91ae-4248-9a00-f4959c78b3b2,},Annotations:map[string]string{io.kubernetes.container.hash: 62269400,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a953be1a41bf293fed13e0cb2e03fd3a5cd17c9b79aee46a3884d75e51d7453,PodSandboxId:e6197137bbd50e78913254d90cc8874393f7bc3d1e7ebf5bc4e59c5573bd8536,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711586901570748638,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf1ef5c54fdae67febb96c1cd5ff3e5f,},Annot
ations:map[string]string{io.kubernetes.container.hash: 69e7280e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2924b060928092dbbd9bd04663a3b4db9488ebea45a3fcf8f014a444cd2902a,PodSandboxId:246a8e4a4bfce5313868706064d22d2da4b72c2d7a4a54fcf1f721c538a7e5b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711586901600556046,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcc4c94bbad251b426f15a339077c36,},Annotations:map[string]
string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a15940c4f84a4954a314771f1309e8bbf59250930010e92dc67bbc616dc6099,PodSandboxId:978617d4de30fbcfea266b10efdaaa2e4aadab97a9daf324e7ebe7cf66fa6cf3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711586901585944282,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d72e7312da1738efc1668540f0e695a,},Annotations:map[string]string{io.kubernet
es.container.hash: 11ee3fec,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b6176f7ebcce772df04ac9936e524988d01f46c826a569ca6ab05daf111e19f,PodSandboxId:3a57d6e0c813adf153ea3ab13838ff43bb350518484bc795616b17c4dd407b3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711586901580910804,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42f3ca6be942820b85cc87c91c1ac4b8,},Annotations:map[string]string{io
.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2264e41f77cc4f28ea256a12349256853bb9b3a2421f649b7e75348e9561392b,PodSandboxId:f2711f2561563c9dc0b89b71de750e6e8b0461558ecaafb570f43d5958900b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711586878330422134,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-d9zx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcb1807-c16a-428c-9292-e7f4a8ff9d00,},Annotations:map[string]string{io.kubernetes.container.hash: 7530
ac82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:737bc1da2980fc5534519112255b2b2e4ced04b8f71571c699f8e1eedf84b5e9,PodSandboxId:0bfdaf3c6731dbd36344a7d6856787e94a075c01506f983f9693dfea067ac12e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711586877659540751,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-5tlrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 249cdd5d-91ae-4248-9a00-f4959c78b3b2,},Annotations:map[string]string{io.kubernetes.container.hash: 62269400,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adae09481ab4f504b6cc144443edbfd307e7f230fd84bab6c985cdc13f9c0558,PodSandboxId:246a8e4a4bfce5313868706064d22d2da4b72c2d7a4a54fcf1f721c538a7e5b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711586877603939716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcc4c94bbad251b426f15a339077c36,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ad7b59f3f0b5484e2c6ba034025fedf7553ccecaad58ee34b0b4a996190bb0,PodSandboxId:978617d4de30fbcfea266b10efdaaa2e4aadab97a9daf324e7ebe7cf66fa6cf3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711586877716904505,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-040046,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d72e7312da1738efc1668540f0e695a,},Annotations:map[string]string{io.kubernetes.container.hash: 11ee3fec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd187f6614af02b4d750c2d77c5a355583e1d586c760fd769441be096ef6314,PodSandboxId:e6197137bbd50e78913254d90cc8874393f7bc3d1e7ebf5bc4e59c5573bd8536,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711586877575308986,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: bf1ef5c54fdae67febb96c1cd5ff3e5f,},Annotations:map[string]string{io.kubernetes.container.hash: 69e7280e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd72f355a2ea559cef3868b146cd6fd26cba6952ec6955c36683ada0df0f439,PodSandboxId:3a57d6e0c813adf153ea3ab13838ff43bb350518484bc795616b17c4dd407b3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711586877565195150,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 42f3ca6be942820b85cc87c91c1ac4b8,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=913288a3-26ee-49a9-8cf3-a181a59be8d8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ad125daaf8844       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   19 seconds ago      Running             coredns                   2                   f2711f2561563       coredns-76f75df574-d9zx2
	6d21716d3db96       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   19 seconds ago      Running             kube-proxy                2                   0bfdaf3c6731d       kube-proxy-5tlrp
	a2924b0609280       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   23 seconds ago      Running             kube-scheduler            2                   246a8e4a4bfce       kube-scheduler-pause-040046
	7a15940c4f84a       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   24 seconds ago      Running             kube-apiserver            2                   978617d4de30f       kube-apiserver-pause-040046
	6b6176f7ebcce       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   24 seconds ago      Running             kube-controller-manager   2                   3a57d6e0c813a       kube-controller-manager-pause-040046
	3a953be1a41bf       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   24 seconds ago      Running             etcd                      2                   e6197137bbd50       etcd-pause-040046
	2264e41f77cc4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   47 seconds ago      Exited              coredns                   1                   f2711f2561563       coredns-76f75df574-d9zx2
	94ad7b59f3f0b       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   47 seconds ago      Exited              kube-apiserver            1                   978617d4de30f       kube-apiserver-pause-040046
	737bc1da2980f       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   47 seconds ago      Exited              kube-proxy                1                   0bfdaf3c6731d       kube-proxy-5tlrp
	adae09481ab4f       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   47 seconds ago      Exited              kube-scheduler            1                   246a8e4a4bfce       kube-scheduler-pause-040046
	abd187f6614af       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   48 seconds ago      Exited              etcd                      1                   e6197137bbd50       etcd-pause-040046
	8bd72f355a2ea       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   48 seconds ago      Exited              kube-controller-manager   1                   3a57d6e0c813a       kube-controller-manager-pause-040046
	
	
	==> coredns [2264e41f77cc4f28ea256a12349256853bb9b3a2421f649b7e75348e9561392b] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] plugin/health: Going into lameduck mode for 5s
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:49871 - 57042 "HINFO IN 589354795089262720.7371850693171307278. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010209795s
	
	
	==> coredns [ad125daaf884447921ace6baa3fac20fb9080244b462a3e6c1dfa81b3dd1bf9e] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52539 - 35473 "HINFO IN 7062009350584288299.3531532817132609295. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009202525s
	
	
	==> describe nodes <==
	Name:               pause-040046
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-040046
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=pause-040046
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_28T00_46_54_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 00:46:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-040046
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 00:48:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 00:48:25 +0000   Thu, 28 Mar 2024 00:46:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 00:48:25 +0000   Thu, 28 Mar 2024 00:46:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 00:48:25 +0000   Thu, 28 Mar 2024 00:46:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 00:48:25 +0000   Thu, 28 Mar 2024 00:46:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.233
	  Hostname:    pause-040046
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 8dfa7a8119134880a0c21fe76cbfec60
	  System UUID:                8dfa7a81-1913-4880-a0c2-1fe76cbfec60
	  Boot ID:                    4cd15043-5dd7-4730-a218-c4a23fd2960e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-d9zx2                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     98s
	  kube-system                 etcd-pause-040046                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         111s
	  kube-system                 kube-apiserver-pause-040046             250m (12%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-pause-040046    200m (10%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-5tlrp                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-scheduler-pause-040046             100m (5%)     0 (0%)      0 (0%)           0 (0%)         111s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 19s                  kube-proxy       
	  Normal   Starting                 44s                  kube-proxy       
	  Normal   Starting                 97s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  118s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 118s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     117s (x7 over 118s)  kubelet          Node pause-040046 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  117s (x8 over 118s)  kubelet          Node pause-040046 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    117s (x8 over 118s)  kubelet          Node pause-040046 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 111s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  111s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  111s                 kubelet          Node pause-040046 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    111s                 kubelet          Node pause-040046 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     111s                 kubelet          Node pause-040046 status is now: NodeHasSufficientPID
	  Normal   NodeReady                110s                 kubelet          Node pause-040046 status is now: NodeReady
	  Normal   RegisteredNode           99s                  node-controller  Node pause-040046 event: Registered Node pause-040046 in Controller
	  Warning  ContainerGCFailed        51s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 25s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  24s (x8 over 24s)    kubelet          Node pause-040046 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    24s (x8 over 24s)    kubelet          Node pause-040046 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     24s (x7 over 24s)    kubelet          Node pause-040046 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           7s                   node-controller  Node pause-040046 event: Registered Node pause-040046 in Controller
	
	
	==> dmesg <==
	[  +0.056851] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059137] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.185804] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.156089] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.321006] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.709069] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +0.064512] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.094203] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +0.077974] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.560016] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.683669] systemd-fstab-generator[1280]: Ignoring "noauto" option for root device
	[Mar28 00:47] systemd-fstab-generator[1475]: Ignoring "noauto" option for root device
	[  +0.105960] kauditd_printk_skb: 15 callbacks suppressed
	[ +11.245990] kauditd_printk_skb: 63 callbacks suppressed
	[ +37.811695] systemd-fstab-generator[2047]: Ignoring "noauto" option for root device
	[  +0.172214] systemd-fstab-generator[2059]: Ignoring "noauto" option for root device
	[  +0.209657] systemd-fstab-generator[2073]: Ignoring "noauto" option for root device
	[  +0.160851] systemd-fstab-generator[2086]: Ignoring "noauto" option for root device
	[  +0.330630] systemd-fstab-generator[2113]: Ignoring "noauto" option for root device
	[  +2.123144] systemd-fstab-generator[2640]: Ignoring "noauto" option for root device
	[Mar28 00:48] kauditd_printk_skb: 191 callbacks suppressed
	[ +18.688611] systemd-fstab-generator[3011]: Ignoring "noauto" option for root device
	[  +5.662442] kauditd_printk_skb: 43 callbacks suppressed
	[ +11.793130] kauditd_printk_skb: 2 callbacks suppressed
	[  +3.038702] systemd-fstab-generator[3454]: Ignoring "noauto" option for root device
	
	
	==> etcd [3a953be1a41bf293fed13e0cb2e03fd3a5cd17c9b79aee46a3884d75e51d7453] <==
	{"level":"info","ts":"2024-03-28T00:48:22.021613Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-28T00:48:22.021723Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-28T00:48:22.022203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 switched to configuration voters=(7461943560404801907)"}
	{"level":"info","ts":"2024-03-28T00:48:22.0224Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"30d9b598be045872","local-member-id":"678e262213f11973","added-peer-id":"678e262213f11973","added-peer-peer-urls":["https://192.168.39.233:2380"]}
	{"level":"info","ts":"2024-03-28T00:48:22.022658Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"30d9b598be045872","local-member-id":"678e262213f11973","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T00:48:22.022772Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T00:48:22.08588Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-28T00:48:22.095836Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.233:2380"}
	{"level":"info","ts":"2024-03-28T00:48:22.106253Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.233:2380"}
	{"level":"info","ts":"2024-03-28T00:48:22.106143Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-28T00:48:22.101355Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"678e262213f11973","initial-advertise-peer-urls":["https://192.168.39.233:2380"],"listen-peer-urls":["https://192.168.39.233:2380"],"advertise-client-urls":["https://192.168.39.233:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.233:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-28T00:48:23.385209Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 is starting a new election at term 3"}
	{"level":"info","ts":"2024-03-28T00:48:23.385329Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-03-28T00:48:23.385384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 received MsgPreVoteResp from 678e262213f11973 at term 3"}
	{"level":"info","ts":"2024-03-28T00:48:23.385426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 became candidate at term 4"}
	{"level":"info","ts":"2024-03-28T00:48:23.385456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 received MsgVoteResp from 678e262213f11973 at term 4"}
	{"level":"info","ts":"2024-03-28T00:48:23.38549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 became leader at term 4"}
	{"level":"info","ts":"2024-03-28T00:48:23.385522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 678e262213f11973 elected leader 678e262213f11973 at term 4"}
	{"level":"info","ts":"2024-03-28T00:48:23.395066Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"678e262213f11973","local-member-attributes":"{Name:pause-040046 ClientURLs:[https://192.168.39.233:2379]}","request-path":"/0/members/678e262213f11973/attributes","cluster-id":"30d9b598be045872","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-28T00:48:23.395183Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T00:48:23.395516Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T00:48:23.398195Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-28T00:48:23.400569Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.233:2379"}
	{"level":"info","ts":"2024-03-28T00:48:23.400686Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-28T00:48:23.403159Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [abd187f6614af02b4d750c2d77c5a355583e1d586c760fd769441be096ef6314] <==
	{"level":"warn","ts":"2024-03-28T00:48:02.105396Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-28T00:48:01.458052Z","time spent":"647.273758ms","remote":"127.0.0.1:40860","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-lvbeik6gt3ffzhl7eycuvykhva\" mod_revision:418 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-lvbeik6gt3ffzhl7eycuvykhva\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-lvbeik6gt3ffzhl7eycuvykhva\" > >"}
	{"level":"info","ts":"2024-03-28T00:48:02.105649Z","caller":"traceutil/trace.go:171","msg":"trace[1751956951] transaction","detail":"{read_only:false; number_of_response:1; response_revision:423; }","duration":"647.522481ms","start":"2024-03-28T00:48:01.458112Z","end":"2024-03-28T00:48:02.105634Z","steps":["trace[1751956951] 'process raft request'  (duration: 646.956533ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T00:48:02.105835Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-28T00:48:01.458107Z","time spent":"647.603566ms","remote":"127.0.0.1:40632","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":41,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.39.233\" mod_revision:419 > success:<request_delete_range:<key:\"/registry/masterleases/192.168.39.233\" > > failure:<request_range:<key:\"/registry/masterleases/192.168.39.233\" > >"}
	{"level":"info","ts":"2024-03-28T00:48:02.106118Z","caller":"traceutil/trace.go:171","msg":"trace[2042938705] linearizableReadLoop","detail":"{readStateIndex:446; appliedIndex:443; }","duration":"294.948782ms","start":"2024-03-28T00:48:01.811161Z","end":"2024-03-28T00:48:02.10611Z","steps":["trace[2042938705] 'read index received'  (duration: 247.095335ms)","trace[2042938705] 'applied index is now lower than readState.Index'  (duration: 47.852525ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-28T00:48:02.106421Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"624.872938ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" ","response":"range_response_count:1 size:3021"}
	{"level":"info","ts":"2024-03-28T00:48:02.108393Z","caller":"traceutil/trace.go:171","msg":"trace[523733625] range","detail":"{range_begin:/registry/configmaps/kube-system/extension-apiserver-authentication; range_end:; response_count:1; response_revision:424; }","duration":"626.868804ms","start":"2024-03-28T00:48:01.481515Z","end":"2024-03-28T00:48:02.108384Z","steps":["trace[523733625] 'agreement among raft nodes before linearized reading'  (duration: 624.855194ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T00:48:02.108444Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-28T00:48:01.481502Z","time spent":"626.931158ms","remote":"127.0.0.1:40682","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":1,"response size":3043,"request content":"key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" "}
	{"level":"info","ts":"2024-03-28T00:48:02.106456Z","caller":"traceutil/trace.go:171","msg":"trace[498556886] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"639.869442ms","start":"2024-03-28T00:48:01.466577Z","end":"2024-03-28T00:48:02.106447Z","steps":["trace[498556886] 'process raft request'  (duration: 638.534427ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T00:48:02.108663Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-28T00:48:01.466561Z","time spent":"642.070433ms","remote":"127.0.0.1:40764","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6586,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-040046\" mod_revision:305 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-040046\" value_size:6515 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-040046\" > >"}
	{"level":"warn","ts":"2024-03-28T00:48:02.106506Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"648.48433ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:116"}
	{"level":"info","ts":"2024-03-28T00:48:02.108852Z","caller":"traceutil/trace.go:171","msg":"trace[1410905509] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:424; }","duration":"650.897598ms","start":"2024-03-28T00:48:01.457946Z","end":"2024-03-28T00:48:02.108844Z","steps":["trace[1410905509] 'agreement among raft nodes before linearized reading'  (duration: 648.526095ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T00:48:02.108896Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-28T00:48:01.457944Z","time spent":"650.944757ms","remote":"127.0.0.1:40646","response type":"/etcdserverpb.KV/Range","request count":0,"request size":29,"response count":1,"response size":138,"request content":"key:\"/registry/ranges/serviceips\" "}
	{"level":"warn","ts":"2024-03-28T00:48:02.106536Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"648.595639ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" ","response":"range_response_count:1 size:118"}
	{"level":"info","ts":"2024-03-28T00:48:02.109184Z","caller":"traceutil/trace.go:171","msg":"trace[753335425] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:1; response_revision:424; }","duration":"651.243758ms","start":"2024-03-28T00:48:01.457931Z","end":"2024-03-28T00:48:02.109174Z","steps":["trace[753335425] 'agreement among raft nodes before linearized reading'  (duration: 648.589342ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T00:48:02.109236Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-28T00:48:01.457926Z","time spent":"651.299499ms","remote":"127.0.0.1:40658","response type":"/etcdserverpb.KV/Range","request count":0,"request size":35,"response count":1,"response size":140,"request content":"key:\"/registry/ranges/servicenodeports\" "}
	{"level":"info","ts":"2024-03-28T00:48:19.494599Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-28T00:48:19.494704Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-040046","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.233:2380"],"advertise-client-urls":["https://192.168.39.233:2379"]}
	{"level":"warn","ts":"2024-03-28T00:48:19.494795Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-28T00:48:19.494859Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-28T00:48:19.496548Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.233:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-28T00:48:19.496603Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.233:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-28T00:48:19.497946Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"678e262213f11973","current-leader-member-id":"678e262213f11973"}
	{"level":"info","ts":"2024-03-28T00:48:19.501426Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.233:2380"}
	{"level":"info","ts":"2024-03-28T00:48:19.501557Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.233:2380"}
	{"level":"info","ts":"2024-03-28T00:48:19.501568Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-040046","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.233:2380"],"advertise-client-urls":["https://192.168.39.233:2379"]}
	
	
	==> kernel <==
	 00:48:46 up 2 min,  0 users,  load average: 0.92, 0.51, 0.20
	Linux pause-040046 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7a15940c4f84a4954a314771f1309e8bbf59250930010e92dc67bbc616dc6099] <==
	I0328 00:48:24.875698       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0328 00:48:24.878078       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0328 00:48:24.878121       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0328 00:48:24.975701       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0328 00:48:24.980667       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0328 00:48:24.980709       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0328 00:48:24.982387       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0328 00:48:24.992389       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0328 00:48:25.003525       1 aggregator.go:165] initial CRD sync complete...
	I0328 00:48:25.003618       1 autoregister_controller.go:141] Starting autoregister controller
	I0328 00:48:25.003650       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0328 00:48:25.003693       1 cache.go:39] Caches are synced for autoregister controller
	I0328 00:48:25.019329       1 shared_informer.go:318] Caches are synced for configmaps
	I0328 00:48:25.019399       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0328 00:48:25.019441       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0328 00:48:25.028723       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0328 00:48:25.881204       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0328 00:48:26.363111       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.233]
	I0328 00:48:26.366812       1 controller.go:624] quota admission added evaluator for: endpoints
	I0328 00:48:26.387763       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0328 00:48:26.888560       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0328 00:48:26.923107       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0328 00:48:26.988680       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0328 00:48:27.024134       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0328 00:48:27.036737       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [94ad7b59f3f0b5484e2c6ba034025fedf7553ccecaad58ee34b0b4a996190bb0] <==
	I0328 00:48:09.145655       1 establishing_controller.go:87] Shutting down EstablishingController
	I0328 00:48:09.147572       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 00:48:09.147634       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0328 00:48:09.147657       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0328 00:48:09.147669       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0328 00:48:09.147682       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0328 00:48:09.147688       1 controller.go:129] Ending legacy_token_tracking_controller
	I0328 00:48:09.147691       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0328 00:48:09.147941       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0328 00:48:09.148048       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0328 00:48:09.148101       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 00:48:09.148175       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 00:48:09.148256       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0328 00:48:09.148328       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0328 00:48:09.148344       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0328 00:48:09.148387       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0328 00:48:09.148428       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0328 00:48:09.148450       1 controller.go:159] Shutting down quota evaluator
	I0328 00:48:09.148481       1 controller.go:178] quota evaluator worker shutdown
	I0328 00:48:09.148582       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0328 00:48:09.148865       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 00:48:09.149688       1 controller.go:178] quota evaluator worker shutdown
	I0328 00:48:09.149734       1 controller.go:178] quota evaluator worker shutdown
	I0328 00:48:09.149745       1 controller.go:178] quota evaluator worker shutdown
	I0328 00:48:09.149756       1 controller.go:178] quota evaluator worker shutdown
	
	
	==> kube-controller-manager [6b6176f7ebcce772df04ac9936e524988d01f46c826a569ca6ab05daf111e19f] <==
	I0328 00:48:38.004849       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0328 00:48:38.006777       1 shared_informer.go:318] Caches are synced for TTL
	I0328 00:48:38.016701       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0328 00:48:38.025027       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0328 00:48:38.027159       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0328 00:48:38.028394       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0328 00:48:38.030689       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0328 00:48:38.043029       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0328 00:48:38.050623       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 00:48:38.061087       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 00:48:38.067510       1 shared_informer.go:318] Caches are synced for taint
	I0328 00:48:38.067641       1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone=""
	I0328 00:48:38.067749       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-040046"
	I0328 00:48:38.067810       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0328 00:48:38.068360       1 event.go:376] "Event occurred" object="pause-040046" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-040046 event: Registered Node pause-040046 in Controller"
	I0328 00:48:38.078711       1 shared_informer.go:318] Caches are synced for GC
	I0328 00:48:38.087298       1 shared_informer.go:318] Caches are synced for daemon sets
	I0328 00:48:38.093267       1 shared_informer.go:318] Caches are synced for taint-eviction-controller
	I0328 00:48:38.094652       1 shared_informer.go:318] Caches are synced for persistent volume
	I0328 00:48:38.097080       1 shared_informer.go:318] Caches are synced for attach detach
	I0328 00:48:38.123750       1 shared_informer.go:318] Caches are synced for deployment
	I0328 00:48:38.146073       1 shared_informer.go:318] Caches are synced for disruption
	I0328 00:48:38.484559       1 shared_informer.go:318] Caches are synced for garbage collector
	I0328 00:48:38.484682       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0328 00:48:38.489709       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [8bd72f355a2ea559cef3868b146cd6fd26cba6952ec6955c36683ada0df0f439] <==
	I0328 00:48:03.506280       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0328 00:48:03.506297       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0328 00:48:03.506314       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0328 00:48:03.506350       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0328 00:48:03.506369       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0328 00:48:03.506407       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0328 00:48:03.506430       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0328 00:48:03.506449       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0328 00:48:03.506572       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0328 00:48:03.506621       1 controllermanager.go:735] "Started controller" controller="resourcequota-controller"
	I0328 00:48:03.506705       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0328 00:48:03.506822       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0328 00:48:03.507052       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0328 00:48:03.508879       1 controllermanager.go:735] "Started controller" controller="deployment-controller"
	I0328 00:48:03.509861       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0328 00:48:03.512843       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0328 00:48:03.517660       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0328 00:48:03.517881       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0328 00:48:03.517911       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0328 00:48:03.544697       1 shared_informer.go:318] Caches are synced for tokens
	W0328 00:48:13.522848       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.233:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.233:8443: connect: connection refused
	W0328 00:48:14.023629       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.233:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.233:8443: connect: connection refused
	W0328 00:48:15.024726       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.233:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.233:8443: connect: connection refused
	W0328 00:48:17.026605       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.233:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.233:8443: connect: connection refused
	E0328 00:48:17.026762       1 cidr_allocator.go:144] "Failed to list all nodes" err="Get \"https://192.168.39.233:8443/api/v1/nodes\": failed to get token for kube-system/node-controller: timed out waiting for the condition"
	
	
	==> kube-proxy [6d21716d3db96dc4becdcf1946eb6aeb2bad5d0e6a01d4704807ca7b1c717663] <==
	I0328 00:48:26.522474       1 server_others.go:72] "Using iptables proxy"
	I0328 00:48:26.550274       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.233"]
	I0328 00:48:26.606772       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 00:48:26.606838       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 00:48:26.606869       1 server_others.go:168] "Using iptables Proxier"
	I0328 00:48:26.610776       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 00:48:26.611263       1 server.go:865] "Version info" version="v1.29.3"
	I0328 00:48:26.611308       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 00:48:26.612784       1 config.go:188] "Starting service config controller"
	I0328 00:48:26.612843       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 00:48:26.612872       1 config.go:97] "Starting endpoint slice config controller"
	I0328 00:48:26.612878       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 00:48:26.613527       1 config.go:315] "Starting node config controller"
	I0328 00:48:26.613569       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 00:48:26.713579       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0328 00:48:26.713789       1 shared_informer.go:318] Caches are synced for service config
	I0328 00:48:26.714329       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [737bc1da2980fc5534519112255b2b2e4ced04b8f71571c699f8e1eedf84b5e9] <==
	I0328 00:47:59.740427       1 server_others.go:72] "Using iptables proxy"
	I0328 00:48:01.815311       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.233"]
	I0328 00:48:01.853144       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 00:48:01.853168       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 00:48:01.853181       1 server_others.go:168] "Using iptables Proxier"
	I0328 00:48:01.856022       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 00:48:01.856335       1 server.go:865] "Version info" version="v1.29.3"
	I0328 00:48:01.856375       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 00:48:01.857700       1 config.go:188] "Starting service config controller"
	I0328 00:48:01.857810       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 00:48:01.857931       1 config.go:97] "Starting endpoint slice config controller"
	I0328 00:48:01.858034       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 00:48:01.858574       1 config.go:315] "Starting node config controller"
	I0328 00:48:01.861192       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 00:48:01.958468       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0328 00:48:01.958551       1 shared_informer.go:318] Caches are synced for service config
	I0328 00:48:01.962052       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [a2924b060928092dbbd9bd04663a3b4db9488ebea45a3fcf8f014a444cd2902a] <==
	I0328 00:48:23.274943       1 serving.go:380] Generated self-signed cert in-memory
	W0328 00:48:24.971649       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0328 00:48:24.971867       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0328 00:48:24.972032       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0328 00:48:24.972160       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0328 00:48:25.035388       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0328 00:48:25.035715       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 00:48:25.038054       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0328 00:48:25.038170       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 00:48:25.040154       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0328 00:48:25.040289       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 00:48:25.139174       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [adae09481ab4f504b6cc144443edbfd307e7f230fd84bab6c985cdc13f9c0558] <==
	I0328 00:48:00.145375       1 serving.go:380] Generated self-signed cert in-memory
	I0328 00:48:01.478623       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0328 00:48:01.478664       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 00:48:02.112475       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0328 00:48:02.112596       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0328 00:48:02.112653       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0328 00:48:02.112672       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 00:48:02.114648       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0328 00:48:02.114727       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 00:48:02.114664       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0328 00:48:02.116851       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0328 00:48:02.213456       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0328 00:48:02.214891       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 00:48:02.219589       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0328 00:48:19.342569       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0328 00:48:19.342658       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0328 00:48:19.342779       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0328 00:48:19.342801       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0328 00:48:19.342842       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController
	E0328 00:48:19.348100       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Mar 28 00:48:21 pause-040046 kubelet[3018]: I0328 00:48:21.332513    3018 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42f3ca6be942820b85cc87c91c1ac4b8-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-040046\" (UID: \"42f3ca6be942820b85cc87c91c1ac4b8\") " pod="kube-system/kube-controller-manager-pause-040046"
	Mar 28 00:48:21 pause-040046 kubelet[3018]: I0328 00:48:21.332546    3018 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dbcc4c94bbad251b426f15a339077c36-kubeconfig\") pod \"kube-scheduler-pause-040046\" (UID: \"dbcc4c94bbad251b426f15a339077c36\") " pod="kube-system/kube-scheduler-pause-040046"
	Mar 28 00:48:21 pause-040046 kubelet[3018]: E0328 00:48:21.530949    3018 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-040046?timeout=10s\": dial tcp 192.168.39.233:8443: connect: connection refused" interval="800ms"
	Mar 28 00:48:21 pause-040046 kubelet[3018]: I0328 00:48:21.550225    3018 scope.go:117] "RemoveContainer" containerID="94ad7b59f3f0b5484e2c6ba034025fedf7553ccecaad58ee34b0b4a996190bb0"
	Mar 28 00:48:21 pause-040046 kubelet[3018]: I0328 00:48:21.550532    3018 scope.go:117] "RemoveContainer" containerID="abd187f6614af02b4d750c2d77c5a355583e1d586c760fd769441be096ef6314"
	Mar 28 00:48:21 pause-040046 kubelet[3018]: I0328 00:48:21.551397    3018 scope.go:117] "RemoveContainer" containerID="8bd72f355a2ea559cef3868b146cd6fd26cba6952ec6955c36683ada0df0f439"
	Mar 28 00:48:21 pause-040046 kubelet[3018]: I0328 00:48:21.552880    3018 scope.go:117] "RemoveContainer" containerID="adae09481ab4f504b6cc144443edbfd307e7f230fd84bab6c985cdc13f9c0558"
	Mar 28 00:48:21 pause-040046 kubelet[3018]: I0328 00:48:21.627175    3018 kubelet_node_status.go:73] "Attempting to register node" node="pause-040046"
	Mar 28 00:48:21 pause-040046 kubelet[3018]: E0328 00:48:21.642529    3018 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.233:8443: connect: connection refused" node="pause-040046"
	Mar 28 00:48:21 pause-040046 kubelet[3018]: W0328 00:48:21.938245    3018 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.233:8443: connect: connection refused
	Mar 28 00:48:21 pause-040046 kubelet[3018]: E0328 00:48:21.938328    3018 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.233:8443: connect: connection refused
	Mar 28 00:48:22 pause-040046 kubelet[3018]: I0328 00:48:22.444663    3018 kubelet_node_status.go:73] "Attempting to register node" node="pause-040046"
	Mar 28 00:48:25 pause-040046 kubelet[3018]: I0328 00:48:25.063917    3018 kubelet_node_status.go:112] "Node was previously registered" node="pause-040046"
	Mar 28 00:48:25 pause-040046 kubelet[3018]: I0328 00:48:25.064104    3018 kubelet_node_status.go:76] "Successfully registered node" node="pause-040046"
	Mar 28 00:48:25 pause-040046 kubelet[3018]: I0328 00:48:25.066373    3018 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 28 00:48:25 pause-040046 kubelet[3018]: I0328 00:48:25.068766    3018 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 28 00:48:25 pause-040046 kubelet[3018]: I0328 00:48:25.913704    3018 apiserver.go:52] "Watching apiserver"
	Mar 28 00:48:25 pause-040046 kubelet[3018]: I0328 00:48:25.917197    3018 topology_manager.go:215] "Topology Admit Handler" podUID="dbcb1807-c16a-428c-9292-e7f4a8ff9d00" podNamespace="kube-system" podName="coredns-76f75df574-d9zx2"
	Mar 28 00:48:25 pause-040046 kubelet[3018]: I0328 00:48:25.917320    3018 topology_manager.go:215] "Topology Admit Handler" podUID="249cdd5d-91ae-4248-9a00-f4959c78b3b2" podNamespace="kube-system" podName="kube-proxy-5tlrp"
	Mar 28 00:48:25 pause-040046 kubelet[3018]: I0328 00:48:25.925138    3018 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 28 00:48:25 pause-040046 kubelet[3018]: I0328 00:48:25.929403    3018 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/249cdd5d-91ae-4248-9a00-f4959c78b3b2-xtables-lock\") pod \"kube-proxy-5tlrp\" (UID: \"249cdd5d-91ae-4248-9a00-f4959c78b3b2\") " pod="kube-system/kube-proxy-5tlrp"
	Mar 28 00:48:25 pause-040046 kubelet[3018]: I0328 00:48:25.929914    3018 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/249cdd5d-91ae-4248-9a00-f4959c78b3b2-lib-modules\") pod \"kube-proxy-5tlrp\" (UID: \"249cdd5d-91ae-4248-9a00-f4959c78b3b2\") " pod="kube-system/kube-proxy-5tlrp"
	Mar 28 00:48:26 pause-040046 kubelet[3018]: I0328 00:48:26.218084    3018 scope.go:117] "RemoveContainer" containerID="737bc1da2980fc5534519112255b2b2e4ced04b8f71571c699f8e1eedf84b5e9"
	Mar 28 00:48:26 pause-040046 kubelet[3018]: I0328 00:48:26.220522    3018 scope.go:117] "RemoveContainer" containerID="2264e41f77cc4f28ea256a12349256853bb9b3a2421f649b7e75348e9561392b"
	Mar 28 00:48:34 pause-040046 kubelet[3018]: I0328 00:48:34.291585    3018 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0328 00:48:44.979643 1114961 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18485-1069254/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
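
The "bufio.Scanner: token too long" failure above is the stock Go error raised when a single line exceeds the scanner's default 64 KiB token limit, which is plausible for lastStart.txt given the multi-kilobyte serialized cluster-config lines shown later in this log. A minimal, hypothetical sketch (not minikube's actual logs.go implementation) of reading such a file with a larger Scanner buffer:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical path, taken from the error message above; adjust as needed.
		f, err := os.Open("/home/jenkins/minikube-integration/18485-1069254/.minikube/logs/lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		scanner := bufio.NewScanner(f)
		// Raise the per-line cap from bufio.MaxScanTokenSize (64 KiB) to 10 MiB so
		// very long log lines no longer fail with "token too long".
		scanner.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for scanner.Scan() {
			fmt.Println(scanner.Text())
		}
		if err := scanner.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}

Any sufficiently large second argument to Scanner.Buffer works; a bufio.Reader with ReadString('\n') sidesteps the fixed token limit entirely.
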
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-040046 -n pause-040046
helpers_test.go:261: (dbg) Run:  kubectl --context pause-040046 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-040046 -n pause-040046
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-040046 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-040046 logs -n 25: (1.608125977s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p cilium-443419 sudo cat                            | cilium-443419             | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |                |                     |                     |
	| ssh     | -p cilium-443419 sudo cat                            | cilium-443419             | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |                |                     |                     |
	| ssh     | -p cilium-443419 sudo                                | cilium-443419             | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |                |                     |                     |
	| ssh     | -p cilium-443419 sudo                                | cilium-443419             | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC |                     |
	|         | systemctl status containerd                          |                           |         |                |                     |                     |
	|         | --all --full --no-pager                              |                           |         |                |                     |                     |
	| stop    | -p NoKubernetes-636163                               | NoKubernetes-636163       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC | 28 Mar 24 00:46 UTC |
	| ssh     | -p cilium-443419 sudo                                | cilium-443419             | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| delete  | -p running-upgrade-642721                            | running-upgrade-642721    | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC | 28 Mar 24 00:46 UTC |
	| ssh     | -p cilium-443419 sudo cat                            | cilium-443419             | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |                |                     |                     |
	| ssh     | -p cilium-443419 sudo cat                            | cilium-443419             | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |                |                     |                     |
	| ssh     | -p cilium-443419 sudo                                | cilium-443419             | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC |                     |
	|         | containerd config dump                               |                           |         |                |                     |                     |
	| ssh     | -p cilium-443419 sudo                                | cilium-443419             | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |                |                     |                     |
	|         | --full --no-pager                                    |                           |         |                |                     |                     |
	| ssh     | -p cilium-443419 sudo                                | cilium-443419             | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |                |                     |                     |
	| ssh     | -p cilium-443419 sudo find                           | cilium-443419             | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |                |                     |                     |
	| ssh     | -p cilium-443419 sudo crio                           | cilium-443419             | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC |                     |
	|         | config                                               |                           |         |                |                     |                     |
	| delete  | -p cilium-443419                                     | cilium-443419             | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC | 28 Mar 24 00:46 UTC |
	| start   | -p pause-040046 --memory=2048                        | pause-040046              | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC | 28 Mar 24 00:47 UTC |
	|         | --install-addons=false                               |                           |         |                |                     |                     |
	|         | --wait=all --driver=kvm2                             |                           |         |                |                     |                     |
	|         | --container-runtime=crio                             |                           |         |                |                     |                     |
	| start   | -p NoKubernetes-636163                               | NoKubernetes-636163       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC | 28 Mar 24 00:46 UTC |
	|         | --driver=kvm2                                        |                           |         |                |                     |                     |
	|         | --container-runtime=crio                             |                           |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-615158                         | kubernetes-upgrade-615158 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC |                     |
	|         | --memory=2200                                        |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |                |                     |                     |
	|         | --alsologtostderr                                    |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |                |                     |                     |
	|         | --container-runtime=crio                             |                           |         |                |                     |                     |
	| ssh     | -p NoKubernetes-636163 sudo                          | NoKubernetes-636163       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC |                     |
	|         | systemctl is-active --quiet                          |                           |         |                |                     |                     |
	|         | service kubelet                                      |                           |         |                |                     |                     |
	| delete  | -p NoKubernetes-636163                               | NoKubernetes-636163       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:46 UTC | 28 Mar 24 00:47 UTC |
	| start   | -p stopped-upgrade-317492                            | minikube                  | jenkins | v1.26.0        | 28 Mar 24 00:47 UTC | 28 Mar 24 00:48 UTC |
	|         | --memory=2200 --vm-driver=kvm2                       |                           |         |                |                     |                     |
	|         |  --container-runtime=crio                            |                           |         |                |                     |                     |
	| start   | -p pause-040046                                      | pause-040046              | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:47 UTC | 28 Mar 24 00:48 UTC |
	|         | --alsologtostderr                                    |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |                |                     |                     |
	|         | --container-runtime=crio                             |                           |         |                |                     |                     |
	| stop    | stopped-upgrade-317492 stop                          | minikube                  | jenkins | v1.26.0        | 28 Mar 24 00:48 UTC | 28 Mar 24 00:48 UTC |
	| start   | -p cert-expiration-927384                            | cert-expiration-927384    | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:48 UTC |                     |
	|         | --memory=2048                                        |                           |         |                |                     |                     |
	|         | --cert-expiration=8760h                              |                           |         |                |                     |                     |
	|         | --driver=kvm2                                        |                           |         |                |                     |                     |
	|         | --container-runtime=crio                             |                           |         |                |                     |                     |
	| start   | -p stopped-upgrade-317492                            | stopped-upgrade-317492    | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:48 UTC |                     |
	|         | --memory=2200                                        |                           |         |                |                     |                     |
	|         | --alsologtostderr                                    |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |                |                     |                     |
	|         | --container-runtime=crio                             |                           |         |                |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 00:48:18
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 00:48:18.231161 1114767 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:48:18.231312 1114767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:48:18.231338 1114767 out.go:304] Setting ErrFile to fd 2...
	I0328 00:48:18.231353 1114767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:48:18.231562 1114767 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 00:48:18.232217 1114767 out.go:298] Setting JSON to false
	I0328 00:48:18.233288 1114767 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":30595,"bootTime":1711556303,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0328 00:48:18.233355 1114767 start.go:139] virtualization: kvm guest
	I0328 00:48:18.235476 1114767 out.go:177] * [stopped-upgrade-317492] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0328 00:48:18.236858 1114767 notify.go:220] Checking for updates...
	I0328 00:48:18.238533 1114767 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 00:48:18.239817 1114767 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 00:48:18.241011 1114767 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 00:48:18.242219 1114767 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	I0328 00:48:18.243461 1114767 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0328 00:48:18.244668 1114767 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 00:48:18.246349 1114767 config.go:182] Loaded profile config "stopped-upgrade-317492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0328 00:48:18.246720 1114767 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 00:48:18.246767 1114767 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:48:18.261988 1114767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39757
	I0328 00:48:18.262499 1114767 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:48:18.263097 1114767 main.go:141] libmachine: Using API Version  1
	I0328 00:48:18.263131 1114767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:48:18.263631 1114767 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:48:18.263878 1114767 main.go:141] libmachine: (stopped-upgrade-317492) Calling .DriverName
	I0328 00:48:18.265964 1114767 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0328 00:48:18.267419 1114767 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 00:48:18.267733 1114767 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 00:48:18.267778 1114767 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:48:18.282876 1114767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40287
	I0328 00:48:18.283309 1114767 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:48:18.283766 1114767 main.go:141] libmachine: Using API Version  1
	I0328 00:48:18.283790 1114767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:48:18.284123 1114767 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:48:18.284353 1114767 main.go:141] libmachine: (stopped-upgrade-317492) Calling .DriverName
	I0328 00:48:18.323384 1114767 out.go:177] * Using the kvm2 driver based on existing profile
	I0328 00:48:18.324629 1114767 start.go:297] selected driver: kvm2
	I0328 00:48:18.324641 1114767 start.go:901] validating driver "kvm2" against &{Name:stopped-upgrade-317492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-317492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.128 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0328 00:48:18.324761 1114767 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 00:48:18.325463 1114767 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 00:48:18.325558 1114767 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18485-1069254/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0328 00:48:18.340532 1114767 install.go:137] /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0328 00:48:18.340909 1114767 cni.go:84] Creating CNI manager for ""
	I0328 00:48:18.340928 1114767 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 00:48:18.340985 1114767 start.go:340] cluster config:
	{Name:stopped-upgrade-317492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-317492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.128 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0328 00:48:18.341093 1114767 iso.go:125] acquiring lock: {Name:mk3da1fa7d63a581c817f327d86a827697458cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 00:48:18.342999 1114767 out.go:177] * Starting "stopped-upgrade-317492" primary control-plane node in "stopped-upgrade-317492" cluster
	I0328 00:48:16.841915 1114710 machine.go:94] provisionDockerMachine start ...
	I0328 00:48:16.841951 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .DriverName
	I0328 00:48:16.842281 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHHostname
	I0328 00:48:16.845169 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:16.845555 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:d9:b7", ip: ""} in network mk-cert-expiration-927384: {Iface:virbr4 ExpiryTime:2024-03-28 01:44:44 +0000 UTC Type:0 Mac:52:54:00:71:d9:b7 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:cert-expiration-927384 Clientid:01:52:54:00:71:d9:b7}
	I0328 00:48:16.845581 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined IP address 192.168.72.11 and MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:16.845733 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHPort
	I0328 00:48:16.845957 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHKeyPath
	I0328 00:48:16.846141 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHKeyPath
	I0328 00:48:16.846298 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHUsername
	I0328 00:48:16.846491 1114710 main.go:141] libmachine: Using SSH client type: native
	I0328 00:48:16.846759 1114710 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0328 00:48:16.846778 1114710 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 00:48:16.969166 1114710 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-927384
	
	I0328 00:48:16.969193 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetMachineName
	I0328 00:48:16.969480 1114710 buildroot.go:166] provisioning hostname "cert-expiration-927384"
	I0328 00:48:16.969499 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetMachineName
	I0328 00:48:16.969693 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHHostname
	I0328 00:48:16.973311 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:16.973790 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:d9:b7", ip: ""} in network mk-cert-expiration-927384: {Iface:virbr4 ExpiryTime:2024-03-28 01:44:44 +0000 UTC Type:0 Mac:52:54:00:71:d9:b7 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:cert-expiration-927384 Clientid:01:52:54:00:71:d9:b7}
	I0328 00:48:16.973810 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined IP address 192.168.72.11 and MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:16.974035 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHPort
	I0328 00:48:16.974223 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHKeyPath
	I0328 00:48:16.974388 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHKeyPath
	I0328 00:48:16.974523 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHUsername
	I0328 00:48:16.974692 1114710 main.go:141] libmachine: Using SSH client type: native
	I0328 00:48:16.974913 1114710 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0328 00:48:16.974925 1114710 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-927384 && echo "cert-expiration-927384" | sudo tee /etc/hostname
	I0328 00:48:17.115425 1114710 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-927384
	
	I0328 00:48:17.115450 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHHostname
	I0328 00:48:17.118689 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:17.119074 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:d9:b7", ip: ""} in network mk-cert-expiration-927384: {Iface:virbr4 ExpiryTime:2024-03-28 01:44:44 +0000 UTC Type:0 Mac:52:54:00:71:d9:b7 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:cert-expiration-927384 Clientid:01:52:54:00:71:d9:b7}
	I0328 00:48:17.119100 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined IP address 192.168.72.11 and MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:17.119314 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHPort
	I0328 00:48:17.119514 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHKeyPath
	I0328 00:48:17.119669 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHKeyPath
	I0328 00:48:17.119817 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHUsername
	I0328 00:48:17.119991 1114710 main.go:141] libmachine: Using SSH client type: native
	I0328 00:48:17.120222 1114710 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0328 00:48:17.120235 1114710 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-927384' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-927384/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-927384' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 00:48:17.236133 1114710 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 00:48:17.236157 1114710 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 00:48:17.236184 1114710 buildroot.go:174] setting up certificates
	I0328 00:48:17.236192 1114710 provision.go:84] configureAuth start
	I0328 00:48:17.236200 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetMachineName
	I0328 00:48:17.236534 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetIP
	I0328 00:48:17.239853 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:17.240250 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:d9:b7", ip: ""} in network mk-cert-expiration-927384: {Iface:virbr4 ExpiryTime:2024-03-28 01:44:44 +0000 UTC Type:0 Mac:52:54:00:71:d9:b7 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:cert-expiration-927384 Clientid:01:52:54:00:71:d9:b7}
	I0328 00:48:17.240264 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined IP address 192.168.72.11 and MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:17.240426 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHHostname
	I0328 00:48:17.242796 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:17.243244 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:d9:b7", ip: ""} in network mk-cert-expiration-927384: {Iface:virbr4 ExpiryTime:2024-03-28 01:44:44 +0000 UTC Type:0 Mac:52:54:00:71:d9:b7 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:cert-expiration-927384 Clientid:01:52:54:00:71:d9:b7}
	I0328 00:48:17.243270 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined IP address 192.168.72.11 and MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:17.243369 1114710 provision.go:143] copyHostCerts
	I0328 00:48:17.243435 1114710 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 00:48:17.243441 1114710 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 00:48:17.243504 1114710 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 00:48:17.243599 1114710 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 00:48:17.243602 1114710 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 00:48:17.243621 1114710 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 00:48:17.243669 1114710 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 00:48:17.243672 1114710 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 00:48:17.243687 1114710 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 00:48:17.243727 1114710 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-927384 san=[127.0.0.1 192.168.72.11 cert-expiration-927384 localhost minikube]
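
The provision step above generates a machine server certificate signed by the minikube CA, with SANs covering the loopback address, the VM IP, the hostname, localhost and minikube. minikube does this in Go; as a rough, purely illustrative equivalent under those same SANs (file names below are assumptions, not the actual paths used in the log), the same certificate could be produced with openssl:

    # Hypothetical openssl equivalent of the server-cert generation step logged above.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.cert-expiration-927384"
    openssl x509 -req -in server.csr \
      -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.72.11,DNS:cert-expiration-927384,DNS:localhost,DNS:minikube")
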
	I0328 00:48:17.487391 1114710 provision.go:177] copyRemoteCerts
	I0328 00:48:17.487441 1114710 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 00:48:17.487467 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHHostname
	I0328 00:48:17.490497 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:17.490839 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:d9:b7", ip: ""} in network mk-cert-expiration-927384: {Iface:virbr4 ExpiryTime:2024-03-28 01:44:44 +0000 UTC Type:0 Mac:52:54:00:71:d9:b7 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:cert-expiration-927384 Clientid:01:52:54:00:71:d9:b7}
	I0328 00:48:17.490858 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined IP address 192.168.72.11 and MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:17.491052 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHPort
	I0328 00:48:17.491316 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHKeyPath
	I0328 00:48:17.491503 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHUsername
	I0328 00:48:17.491670 1114710 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/cert-expiration-927384/id_rsa Username:docker}
	I0328 00:48:17.578099 1114710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 00:48:17.606266 1114710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0328 00:48:17.634910 1114710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0328 00:48:17.667622 1114710 provision.go:87] duration metric: took 431.415492ms to configureAuth
	I0328 00:48:17.667643 1114710 buildroot.go:189] setting minikube options for container-runtime
	I0328 00:48:17.667798 1114710 config.go:182] Loaded profile config "cert-expiration-927384": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:48:17.667878 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHHostname
	I0328 00:48:17.671227 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:17.672689 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:d9:b7", ip: ""} in network mk-cert-expiration-927384: {Iface:virbr4 ExpiryTime:2024-03-28 01:44:44 +0000 UTC Type:0 Mac:52:54:00:71:d9:b7 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:cert-expiration-927384 Clientid:01:52:54:00:71:d9:b7}
	I0328 00:48:17.672703 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined IP address 192.168.72.11 and MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:17.672930 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHPort
	I0328 00:48:17.673118 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHKeyPath
	I0328 00:48:17.673310 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHKeyPath
	I0328 00:48:17.673470 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHUsername
	I0328 00:48:17.673599 1114710 main.go:141] libmachine: Using SSH client type: native
	I0328 00:48:17.673768 1114710 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0328 00:48:17.673778 1114710 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 00:48:18.344225 1114767 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0328 00:48:18.344266 1114767 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0328 00:48:18.344275 1114767 cache.go:56] Caching tarball of preloaded images
	I0328 00:48:18.344369 1114767 preload.go:173] Found /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0328 00:48:18.344383 1114767 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I0328 00:48:18.344493 1114767 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/stopped-upgrade-317492/config.json ...
	I0328 00:48:18.344694 1114767 start.go:360] acquireMachinesLock for stopped-upgrade-317492: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 00:48:23.528143 1114767 start.go:364] duration metric: took 5.183407757s to acquireMachinesLock for "stopped-upgrade-317492"
	I0328 00:48:23.528316 1114767 start.go:96] Skipping create...Using existing machine configuration
	I0328 00:48:23.528357 1114767 fix.go:54] fixHost starting: 
	I0328 00:48:23.528816 1114767 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 00:48:23.528882 1114767 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:48:23.549495 1114767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35341
	I0328 00:48:23.549941 1114767 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:48:23.550490 1114767 main.go:141] libmachine: Using API Version  1
	I0328 00:48:23.550517 1114767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:48:23.550868 1114767 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:48:23.551055 1114767 main.go:141] libmachine: (stopped-upgrade-317492) Calling .DriverName
	I0328 00:48:23.551203 1114767 main.go:141] libmachine: (stopped-upgrade-317492) Calling .GetState
	I0328 00:48:23.552814 1114767 fix.go:112] recreateIfNeeded on stopped-upgrade-317492: state=Stopped err=<nil>
	I0328 00:48:23.552841 1114767 main.go:141] libmachine: (stopped-upgrade-317492) Calling .DriverName
	W0328 00:48:23.553047 1114767 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 00:48:23.555010 1114767 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-317492" ...
	I0328 00:48:19.771430 1114185 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 2264e41f77cc4f28ea256a12349256853bb9b3a2421f649b7e75348e9561392b 94ad7b59f3f0b5484e2c6ba034025fedf7553ccecaad58ee34b0b4a996190bb0 737bc1da2980fc5534519112255b2b2e4ced04b8f71571c699f8e1eedf84b5e9 adae09481ab4f504b6cc144443edbfd307e7f230fd84bab6c985cdc13f9c0558 abd187f6614af02b4d750c2d77c5a355583e1d586c760fd769441be096ef6314 8bd72f355a2ea559cef3868b146cd6fd26cba6952ec6955c36683ada0df0f439 20649081e8f44b376470128e187fc37fb77863492adf3b7be0a2eab8a63bda75 34e4e75cea1d705c5dd964a67ec39a3b7b289f3af5828272a192f3414fdae7d9 f1b80b5f74b883c025391f59cd96e49bfff64827dad73acf3ece5d1fe287785d 77e139bd230aca37656fb0d8cba8540c7743407c2a3bac69db0cc4b451fe225f 87ffaf60a5785562c8cf29e0f09f9f498980669c6d87f59566be0f007672adbe 8e9767e33ea811d3cfe2b94ac0a4e6fd225f7fc34ed1dba1bbeabee2b65a9eb7: (20.095363347s)
	W0328 00:48:19.771532 1114185 kubeadm.go:638] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 2264e41f77cc4f28ea256a12349256853bb9b3a2421f649b7e75348e9561392b 94ad7b59f3f0b5484e2c6ba034025fedf7553ccecaad58ee34b0b4a996190bb0 737bc1da2980fc5534519112255b2b2e4ced04b8f71571c699f8e1eedf84b5e9 adae09481ab4f504b6cc144443edbfd307e7f230fd84bab6c985cdc13f9c0558 abd187f6614af02b4d750c2d77c5a355583e1d586c760fd769441be096ef6314 8bd72f355a2ea559cef3868b146cd6fd26cba6952ec6955c36683ada0df0f439 20649081e8f44b376470128e187fc37fb77863492adf3b7be0a2eab8a63bda75 34e4e75cea1d705c5dd964a67ec39a3b7b289f3af5828272a192f3414fdae7d9 f1b80b5f74b883c025391f59cd96e49bfff64827dad73acf3ece5d1fe287785d 77e139bd230aca37656fb0d8cba8540c7743407c2a3bac69db0cc4b451fe225f 87ffaf60a5785562c8cf29e0f09f9f498980669c6d87f59566be0f007672adbe 8e9767e33ea811d3cfe2b94ac0a4e6fd225f7fc34ed1dba1bbeabee2b65a9eb7: Process exited with status 1
	stdout:
	2264e41f77cc4f28ea256a12349256853bb9b3a2421f649b7e75348e9561392b
	94ad7b59f3f0b5484e2c6ba034025fedf7553ccecaad58ee34b0b4a996190bb0
	737bc1da2980fc5534519112255b2b2e4ced04b8f71571c699f8e1eedf84b5e9
	adae09481ab4f504b6cc144443edbfd307e7f230fd84bab6c985cdc13f9c0558
	abd187f6614af02b4d750c2d77c5a355583e1d586c760fd769441be096ef6314
	8bd72f355a2ea559cef3868b146cd6fd26cba6952ec6955c36683ada0df0f439
	
	stderr:
	E0328 00:48:19.762579    2751 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20649081e8f44b376470128e187fc37fb77863492adf3b7be0a2eab8a63bda75\": container with ID starting with 20649081e8f44b376470128e187fc37fb77863492adf3b7be0a2eab8a63bda75 not found: ID does not exist" containerID="20649081e8f44b376470128e187fc37fb77863492adf3b7be0a2eab8a63bda75"
	time="2024-03-28T00:48:19Z" level=fatal msg="stopping the container \"20649081e8f44b376470128e187fc37fb77863492adf3b7be0a2eab8a63bda75\": rpc error: code = NotFound desc = could not find container \"20649081e8f44b376470128e187fc37fb77863492adf3b7be0a2eab8a63bda75\": container with ID starting with 20649081e8f44b376470128e187fc37fb77863492adf3b7be0a2eab8a63bda75 not found: ID does not exist"
	I0328 00:48:19.771611 1114185 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 00:48:19.811324 1114185 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 00:48:19.822088 1114185 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5651 Mar 28 00:46 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Mar 28 00:46 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Mar 28 00:46 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Mar 28 00:46 /etc/kubernetes/scheduler.conf
	
	I0328 00:48:19.822163 1114185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 00:48:19.832573 1114185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 00:48:19.843072 1114185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 00:48:19.853410 1114185 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0328 00:48:19.853484 1114185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 00:48:19.864297 1114185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 00:48:19.874912 1114185 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0328 00:48:19.874990 1114185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 00:48:19.885478 1114185 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 00:48:19.895925 1114185 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 00:48:19.965645 1114185 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 00:48:20.585228 1114185 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 00:48:20.807630 1114185 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 00:48:20.908586 1114185 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
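
The restart path above does not rerun a full kubeadm init; it replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml. A sketch of the same sequence, runnable by hand on the guest for debugging (binary path and config path taken from the log):

    # Replay the same kubeadm init phases the log shows, in order.
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done
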
	I0328 00:48:21.073217 1114185 api_server.go:52] waiting for apiserver process to appear ...
	I0328 00:48:21.073315 1114185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:48:21.573714 1114185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:48:22.074028 1114185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:48:22.118192 1114185 api_server.go:72] duration metric: took 1.044974568s to wait for apiserver process to appear ...
	I0328 00:48:22.118242 1114185 api_server.go:88] waiting for apiserver healthz status ...
	I0328 00:48:22.118298 1114185 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8443/healthz ...
	I0328 00:48:23.281823 1114710 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 00:48:23.281842 1114710 machine.go:97] duration metric: took 6.439903475s to provisionDockerMachine
	I0328 00:48:23.281854 1114710 start.go:293] postStartSetup for "cert-expiration-927384" (driver="kvm2")
	I0328 00:48:23.281866 1114710 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 00:48:23.281886 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .DriverName
	I0328 00:48:23.282357 1114710 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 00:48:23.282385 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHHostname
	I0328 00:48:23.285394 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:23.285823 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:d9:b7", ip: ""} in network mk-cert-expiration-927384: {Iface:virbr4 ExpiryTime:2024-03-28 01:44:44 +0000 UTC Type:0 Mac:52:54:00:71:d9:b7 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:cert-expiration-927384 Clientid:01:52:54:00:71:d9:b7}
	I0328 00:48:23.285846 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined IP address 192.168.72.11 and MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:23.286006 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHPort
	I0328 00:48:23.286295 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHKeyPath
	I0328 00:48:23.286493 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHUsername
	I0328 00:48:23.286638 1114710 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/cert-expiration-927384/id_rsa Username:docker}
	I0328 00:48:23.372998 1114710 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 00:48:23.377676 1114710 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 00:48:23.377693 1114710 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 00:48:23.377759 1114710 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 00:48:23.377829 1114710 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 00:48:23.377910 1114710 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 00:48:23.387664 1114710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 00:48:23.412892 1114710 start.go:296] duration metric: took 131.021134ms for postStartSetup
	I0328 00:48:23.412931 1114710 fix.go:56] duration metric: took 6.598372258s for fixHost
	I0328 00:48:23.412956 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHHostname
	I0328 00:48:23.416152 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:23.416505 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:d9:b7", ip: ""} in network mk-cert-expiration-927384: {Iface:virbr4 ExpiryTime:2024-03-28 01:44:44 +0000 UTC Type:0 Mac:52:54:00:71:d9:b7 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:cert-expiration-927384 Clientid:01:52:54:00:71:d9:b7}
	I0328 00:48:23.416529 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined IP address 192.168.72.11 and MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:23.416745 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHPort
	I0328 00:48:23.416942 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHKeyPath
	I0328 00:48:23.417096 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHKeyPath
	I0328 00:48:23.417296 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHUsername
	I0328 00:48:23.417536 1114710 main.go:141] libmachine: Using SSH client type: native
	I0328 00:48:23.417747 1114710 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0328 00:48:23.417756 1114710 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 00:48:23.527986 1114710 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711586903.525002844
	
	I0328 00:48:23.528001 1114710 fix.go:216] guest clock: 1711586903.525002844
	I0328 00:48:23.528010 1114710 fix.go:229] Guest: 2024-03-28 00:48:23.525002844 +0000 UTC Remote: 2024-03-28 00:48:23.412934105 +0000 UTC m=+6.775419394 (delta=112.068739ms)
	I0328 00:48:23.528040 1114710 fix.go:200] guest clock delta is within tolerance: 112.068739ms
	I0328 00:48:23.528045 1114710 start.go:83] releasing machines lock for "cert-expiration-927384", held for 6.71350596s
	I0328 00:48:23.528076 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .DriverName
	I0328 00:48:23.528358 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetIP
	I0328 00:48:23.531344 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:23.531730 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:d9:b7", ip: ""} in network mk-cert-expiration-927384: {Iface:virbr4 ExpiryTime:2024-03-28 01:44:44 +0000 UTC Type:0 Mac:52:54:00:71:d9:b7 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:cert-expiration-927384 Clientid:01:52:54:00:71:d9:b7}
	I0328 00:48:23.531757 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined IP address 192.168.72.11 and MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:23.531940 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .DriverName
	I0328 00:48:23.532581 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .DriverName
	I0328 00:48:23.532794 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .DriverName
	I0328 00:48:23.532880 1114710 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 00:48:23.532917 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHHostname
	I0328 00:48:23.533011 1114710 ssh_runner.go:195] Run: cat /version.json
	I0328 00:48:23.533031 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHHostname
	I0328 00:48:23.535840 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:23.535860 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:23.536200 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:d9:b7", ip: ""} in network mk-cert-expiration-927384: {Iface:virbr4 ExpiryTime:2024-03-28 01:44:44 +0000 UTC Type:0 Mac:52:54:00:71:d9:b7 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:cert-expiration-927384 Clientid:01:52:54:00:71:d9:b7}
	I0328 00:48:23.536223 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined IP address 192.168.72.11 and MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:23.536247 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:d9:b7", ip: ""} in network mk-cert-expiration-927384: {Iface:virbr4 ExpiryTime:2024-03-28 01:44:44 +0000 UTC Type:0 Mac:52:54:00:71:d9:b7 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:cert-expiration-927384 Clientid:01:52:54:00:71:d9:b7}
	I0328 00:48:23.536259 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined IP address 192.168.72.11 and MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:23.536430 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHPort
	I0328 00:48:23.536593 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHPort
	I0328 00:48:23.536599 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHKeyPath
	I0328 00:48:23.536772 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHKeyPath
	I0328 00:48:23.536842 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHUsername
	I0328 00:48:23.536951 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetSSHUsername
	I0328 00:48:23.537031 1114710 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/cert-expiration-927384/id_rsa Username:docker}
	I0328 00:48:23.537098 1114710 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/cert-expiration-927384/id_rsa Username:docker}
	I0328 00:48:23.656621 1114710 ssh_runner.go:195] Run: systemctl --version
	I0328 00:48:23.663345 1114710 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 00:48:23.833238 1114710 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 00:48:23.842112 1114710 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 00:48:23.842187 1114710 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 00:48:23.852122 1114710 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0328 00:48:23.852141 1114710 start.go:494] detecting cgroup driver to use...
	I0328 00:48:23.852207 1114710 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 00:48:23.876893 1114710 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 00:48:23.894595 1114710 docker.go:217] disabling cri-docker service (if available) ...
	I0328 00:48:23.894662 1114710 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 00:48:23.913915 1114710 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 00:48:23.930497 1114710 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 00:48:24.080850 1114710 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 00:48:24.269252 1114710 docker.go:233] disabling docker service ...
	I0328 00:48:24.269331 1114710 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 00:48:24.292148 1114710 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 00:48:24.309015 1114710 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 00:48:24.465008 1114710 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 00:48:24.622031 1114710 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 00:48:24.641531 1114710 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 00:48:24.670835 1114710 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 00:48:24.670889 1114710 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:48:24.683920 1114710 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 00:48:24.683990 1114710 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:48:24.699470 1114710 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:48:24.718167 1114710 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:48:24.736687 1114710 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 00:48:24.749054 1114710 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:48:24.761395 1114710 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:48:24.776502 1114710 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
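
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the registry.k8s.io pause image, cgroupfs as the cgroup manager, conmon in the pod cgroup, and the unprivileged-port sysctl opened up. A drop-in with the same effect (reconstructed from the commands, not copied from the VM) would look roughly like:

    # Approximate end state of the edits above to /etc/crio/crio.conf.d/02-crio.conf.
    sudo tee /etc/crio/crio.conf.d/02-crio.conf <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    EOF
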
	I0328 00:48:24.790833 1114710 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 00:48:24.801712 1114710 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 00:48:24.811915 1114710 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:48:24.999034 1114710 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 00:48:25.610148 1114710 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 00:48:25.610215 1114710 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 00:48:25.615764 1114710 start.go:562] Will wait 60s for crictl version
	I0328 00:48:25.615820 1114710 ssh_runner.go:195] Run: which crictl
	I0328 00:48:25.620424 1114710 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 00:48:25.664663 1114710 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 00:48:25.664752 1114710 ssh_runner.go:195] Run: crio --version
	I0328 00:48:25.696335 1114710 ssh_runner.go:195] Run: crio --version
	I0328 00:48:25.733674 1114710 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0328 00:48:24.940318 1114185 api_server.go:279] https://192.168.39.233:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 00:48:24.940359 1114185 api_server.go:103] status: https://192.168.39.233:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 00:48:24.940376 1114185 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8443/healthz ...
	I0328 00:48:25.031474 1114185 api_server.go:279] https://192.168.39.233:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 00:48:25.031523 1114185 api_server.go:103] status: https://192.168.39.233:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 00:48:25.118721 1114185 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8443/healthz ...
	I0328 00:48:25.123900 1114185 api_server.go:279] https://192.168.39.233:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 00:48:25.123939 1114185 api_server.go:103] status: https://192.168.39.233:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 00:48:25.618327 1114185 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8443/healthz ...
	I0328 00:48:25.624159 1114185 api_server.go:279] https://192.168.39.233:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 00:48:25.624195 1114185 api_server.go:103] status: https://192.168.39.233:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 00:48:26.118361 1114185 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8443/healthz ...
	I0328 00:48:26.128584 1114185 api_server.go:279] https://192.168.39.233:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 00:48:26.128624 1114185 api_server.go:103] status: https://192.168.39.233:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 00:48:26.619210 1114185 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8443/healthz ...
	I0328 00:48:26.625842 1114185 api_server.go:279] https://192.168.39.233:8443/healthz returned 200:
	ok
	I0328 00:48:26.641524 1114185 api_server.go:141] control plane version: v1.29.3
	I0328 00:48:26.641559 1114185 api_server.go:131] duration metric: took 4.523307907s to wait for apiserver health ...
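
The wait loop above polls /healthz until the apiserver returns 200: anonymous requests are first rejected with 403, then answered with 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, and finally with ok. The same check can be reproduced by hand with curl (the address comes from the log; ?verbose lists the individual checks the way the 500 responses do):

    # One-shot check with per-hook detail, skipping TLS verification like an anonymous probe.
    curl -k "https://192.168.39.233:8443/healthz?verbose"
    # Or poll until the endpoint reports healthy.
    until curl -ks "https://192.168.39.233:8443/healthz" | grep -q '^ok$'; do sleep 0.5; done
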
	I0328 00:48:26.641579 1114185 cni.go:84] Creating CNI manager for ""
	I0328 00:48:26.641588 1114185 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 00:48:26.643324 1114185 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 00:48:25.735219 1114710 main.go:141] libmachine: (cert-expiration-927384) Calling .GetIP
	I0328 00:48:25.738562 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:25.738932 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:d9:b7", ip: ""} in network mk-cert-expiration-927384: {Iface:virbr4 ExpiryTime:2024-03-28 01:44:44 +0000 UTC Type:0 Mac:52:54:00:71:d9:b7 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:cert-expiration-927384 Clientid:01:52:54:00:71:d9:b7}
	I0328 00:48:25.738954 1114710 main.go:141] libmachine: (cert-expiration-927384) DBG | domain cert-expiration-927384 has defined IP address 192.168.72.11 and MAC address 52:54:00:71:d9:b7 in network mk-cert-expiration-927384
	I0328 00:48:25.739258 1114710 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0328 00:48:25.744295 1114710 kubeadm.go:877] updating cluster {Name:cert-expiration-927384 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:cert-expiration-927384 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 00:48:25.744410 1114710 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 00:48:25.744466 1114710 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 00:48:25.788767 1114710 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 00:48:25.788784 1114710 crio.go:433] Images already preloaded, skipping extraction
	I0328 00:48:25.788844 1114710 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 00:48:25.828802 1114710 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 00:48:25.828821 1114710 cache_images.go:84] Images are preloaded, skipping loading
	I0328 00:48:25.828831 1114710 kubeadm.go:928] updating node { 192.168.72.11 8443 v1.29.3 crio true true} ...
	I0328 00:48:25.828973 1114710 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-927384 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:cert-expiration-927384 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 00:48:25.829058 1114710 ssh_runner.go:195] Run: crio config
	I0328 00:48:25.888955 1114710 cni.go:84] Creating CNI manager for ""
	I0328 00:48:25.888965 1114710 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 00:48:25.888973 1114710 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 00:48:25.888993 1114710 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.11 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-927384 NodeName:cert-expiration-927384 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 00:48:25.889149 1114710 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-927384"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 00:48:25.889219 1114710 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 00:48:25.902444 1114710 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 00:48:25.902507 1114710 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 00:48:25.915930 1114710 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0328 00:48:25.938575 1114710 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 00:48:25.964637 1114710 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0328 00:48:25.985872 1114710 ssh_runner.go:195] Run: grep 192.168.72.11	control-plane.minikube.internal$ /etc/hosts
	I0328 00:48:25.991204 1114710 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:48:26.221442 1114710 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 00:48:26.296406 1114710 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384 for IP: 192.168.72.11
	I0328 00:48:26.296423 1114710 certs.go:194] generating shared ca certs ...
	I0328 00:48:26.296444 1114710 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:48:26.296629 1114710 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 00:48:26.296677 1114710 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 00:48:26.296686 1114710 certs.go:256] generating profile certs ...
	W0328 00:48:26.296839 1114710 out.go:239] ! Certificate client.crt has expired. Generating a new one...
	I0328 00:48:26.296868 1114710 certs.go:624] cert expired /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/client.crt: expiration: 2024-03-28 00:48:00 +0000 UTC, now: 2024-03-28 00:48:26.296862341 +0000 UTC m=+9.659347632
	I0328 00:48:26.296993 1114710 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/client.key
	I0328 00:48:26.297019 1114710 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/client.crt with IP's: []
	I0328 00:48:26.443460 1114710 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/client.crt ...
	I0328 00:48:26.443486 1114710 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/client.crt: {Name:mk0f6532c9e9a958037964474d03319b24c2fad7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:48:26.443706 1114710 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/client.key ...
	I0328 00:48:26.443722 1114710 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/client.key: {Name:mk07f937216548150c24811b345561ec6aaff33f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0328 00:48:26.443980 1114710 out.go:239] ! Certificate apiserver.crt.fd1a1cfa has expired. Generating a new one...
	I0328 00:48:26.444005 1114710 certs.go:624] cert expired /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/apiserver.crt.fd1a1cfa: expiration: 2024-03-28 00:48:00 +0000 UTC, now: 2024-03-28 00:48:26.443997439 +0000 UTC m=+9.806482725
	I0328 00:48:26.444111 1114710 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/apiserver.key.fd1a1cfa
	I0328 00:48:26.444137 1114710 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/apiserver.crt.fd1a1cfa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.11]
	I0328 00:48:23.556444 1114767 main.go:141] libmachine: (stopped-upgrade-317492) Calling .Start
	I0328 00:48:23.556635 1114767 main.go:141] libmachine: (stopped-upgrade-317492) Ensuring networks are active...
	I0328 00:48:23.557570 1114767 main.go:141] libmachine: (stopped-upgrade-317492) Ensuring network default is active
	I0328 00:48:23.558016 1114767 main.go:141] libmachine: (stopped-upgrade-317492) Ensuring network mk-stopped-upgrade-317492 is active
	I0328 00:48:23.558898 1114767 main.go:141] libmachine: (stopped-upgrade-317492) Getting domain xml...
	I0328 00:48:23.559658 1114767 main.go:141] libmachine: (stopped-upgrade-317492) Creating domain...
	I0328 00:48:24.888082 1114767 main.go:141] libmachine: (stopped-upgrade-317492) Waiting to get IP...
	I0328 00:48:24.889313 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | domain stopped-upgrade-317492 has defined MAC address 52:54:00:71:b9:38 in network mk-stopped-upgrade-317492
	I0328 00:48:24.889868 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | unable to find current IP address of domain stopped-upgrade-317492 in network mk-stopped-upgrade-317492
	I0328 00:48:24.889995 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | I0328 00:48:24.889868 1114813 retry.go:31] will retry after 188.329756ms: waiting for machine to come up
	I0328 00:48:25.080585 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | domain stopped-upgrade-317492 has defined MAC address 52:54:00:71:b9:38 in network mk-stopped-upgrade-317492
	I0328 00:48:25.081273 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | unable to find current IP address of domain stopped-upgrade-317492 in network mk-stopped-upgrade-317492
	I0328 00:48:25.081339 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | I0328 00:48:25.081228 1114813 retry.go:31] will retry after 290.192227ms: waiting for machine to come up
	I0328 00:48:25.373009 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | domain stopped-upgrade-317492 has defined MAC address 52:54:00:71:b9:38 in network mk-stopped-upgrade-317492
	I0328 00:48:25.373630 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | unable to find current IP address of domain stopped-upgrade-317492 in network mk-stopped-upgrade-317492
	I0328 00:48:25.373659 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | I0328 00:48:25.373591 1114813 retry.go:31] will retry after 419.414213ms: waiting for machine to come up
	I0328 00:48:25.794224 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | domain stopped-upgrade-317492 has defined MAC address 52:54:00:71:b9:38 in network mk-stopped-upgrade-317492
	I0328 00:48:25.794704 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | unable to find current IP address of domain stopped-upgrade-317492 in network mk-stopped-upgrade-317492
	I0328 00:48:25.794735 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | I0328 00:48:25.794645 1114813 retry.go:31] will retry after 520.496062ms: waiting for machine to come up
	I0328 00:48:26.316482 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | domain stopped-upgrade-317492 has defined MAC address 52:54:00:71:b9:38 in network mk-stopped-upgrade-317492
	I0328 00:48:26.317049 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | unable to find current IP address of domain stopped-upgrade-317492 in network mk-stopped-upgrade-317492
	I0328 00:48:26.317084 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | I0328 00:48:26.317024 1114813 retry.go:31] will retry after 492.625594ms: waiting for machine to come up
	I0328 00:48:26.811905 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | domain stopped-upgrade-317492 has defined MAC address 52:54:00:71:b9:38 in network mk-stopped-upgrade-317492
	I0328 00:48:26.812580 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | unable to find current IP address of domain stopped-upgrade-317492 in network mk-stopped-upgrade-317492
	I0328 00:48:26.812605 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | I0328 00:48:26.812539 1114813 retry.go:31] will retry after 807.604901ms: waiting for machine to come up
	I0328 00:48:27.621635 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | domain stopped-upgrade-317492 has defined MAC address 52:54:00:71:b9:38 in network mk-stopped-upgrade-317492
	I0328 00:48:27.622272 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | unable to find current IP address of domain stopped-upgrade-317492 in network mk-stopped-upgrade-317492
	I0328 00:48:27.622347 1114767 main.go:141] libmachine: (stopped-upgrade-317492) DBG | I0328 00:48:27.622240 1114813 retry.go:31] will retry after 1.13820598s: waiting for machine to come up
	I0328 00:48:26.644600 1114185 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 00:48:26.661475 1114185 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 00:48:26.687271 1114185 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 00:48:26.699141 1114185 system_pods.go:59] 6 kube-system pods found
	I0328 00:48:26.699187 1114185 system_pods.go:61] "coredns-76f75df574-d9zx2" [dbcb1807-c16a-428c-9292-e7f4a8ff9d00] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 00:48:26.699198 1114185 system_pods.go:61] "etcd-pause-040046" [bde312dd-644f-479d-ba86-c5e7a3da71b2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 00:48:26.699211 1114185 system_pods.go:61] "kube-apiserver-pause-040046" [f7559fbc-70d2-4624-851a-07ccc3aad759] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 00:48:26.699222 1114185 system_pods.go:61] "kube-controller-manager-pause-040046" [cba5b940-82ef-434a-83ee-c1be91954207] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 00:48:26.699233 1114185 system_pods.go:61] "kube-proxy-5tlrp" [249cdd5d-91ae-4248-9a00-f4959c78b3b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 00:48:26.699244 1114185 system_pods.go:61] "kube-scheduler-pause-040046" [337f7c7e-1319-44f7-a97f-dc80a972440b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 00:48:26.699253 1114185 system_pods.go:74] duration metric: took 11.957859ms to wait for pod list to return data ...
	I0328 00:48:26.699266 1114185 node_conditions.go:102] verifying NodePressure condition ...
	I0328 00:48:26.703810 1114185 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 00:48:26.703843 1114185 node_conditions.go:123] node cpu capacity is 2
	I0328 00:48:26.703855 1114185 node_conditions.go:105] duration metric: took 4.583441ms to run NodePressure ...
	I0328 00:48:26.703877 1114185 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 00:48:27.055031 1114185 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 00:48:27.061811 1114185 kubeadm.go:733] kubelet initialised
	I0328 00:48:27.061842 1114185 kubeadm.go:734] duration metric: took 6.769991ms waiting for restarted kubelet to initialise ...
	I0328 00:48:27.061855 1114185 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 00:48:27.067758 1114185 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-d9zx2" in "kube-system" namespace to be "Ready" ...
	I0328 00:48:29.077210 1114185 pod_ready.go:102] pod "coredns-76f75df574-d9zx2" in "kube-system" namespace has status "Ready":"False"
	I0328 00:48:26.811373 1114710 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/apiserver.crt.fd1a1cfa ...
	I0328 00:48:26.811399 1114710 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/apiserver.crt.fd1a1cfa: {Name:mk34ba619ba3ddf930ef581036b0ac20e89ccba2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:48:26.811613 1114710 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/apiserver.key.fd1a1cfa ...
	I0328 00:48:26.811628 1114710 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/apiserver.key.fd1a1cfa: {Name:mk218bda22114738f0983db810d48c6bdf794c73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:48:26.811718 1114710 certs.go:381] copying /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/apiserver.crt.fd1a1cfa -> /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/apiserver.crt
	I0328 00:48:26.811908 1114710 certs.go:385] copying /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/apiserver.key.fd1a1cfa -> /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/apiserver.key
	W0328 00:48:26.812153 1114710 out.go:239] ! Certificate proxy-client.crt has expired. Generating a new one...
	I0328 00:48:26.812177 1114710 certs.go:624] cert expired /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/proxy-client.crt: expiration: 2024-03-28 00:48:01 +0000 UTC, now: 2024-03-28 00:48:26.812170826 +0000 UTC m=+10.174656110
	I0328 00:48:26.812267 1114710 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/proxy-client.key
	I0328 00:48:26.812287 1114710 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/proxy-client.crt with IP's: []
	I0328 00:48:26.971956 1114710 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/proxy-client.crt ...
	I0328 00:48:26.971974 1114710 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/proxy-client.crt: {Name:mk349866c21bb285003b35aabf65b60af8539bcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:48:26.972185 1114710 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/proxy-client.key ...
	I0328 00:48:26.972201 1114710 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/proxy-client.key: {Name:mk81959bdb3e6d8294e07a9a83e62d52d12e261a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:48:26.972375 1114710 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 00:48:26.972408 1114710 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 00:48:26.972415 1114710 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 00:48:26.972435 1114710 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 00:48:26.972453 1114710 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 00:48:26.972469 1114710 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 00:48:26.972516 1114710 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 00:48:26.973131 1114710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 00:48:27.091720 1114710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 00:48:27.224044 1114710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 00:48:27.279385 1114710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 00:48:27.314593 1114710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0328 00:48:27.344639 1114710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0328 00:48:27.384378 1114710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 00:48:27.420731 1114710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 00:48:27.458602 1114710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 00:48:27.498938 1114710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 00:48:27.535082 1114710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 00:48:27.607399 1114710 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 00:48:27.653003 1114710 ssh_runner.go:195] Run: openssl version
	I0328 00:48:27.662434 1114710 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 00:48:27.683708 1114710 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 00:48:27.697564 1114710 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 00:48:27.697635 1114710 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 00:48:27.707706 1114710 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 00:48:27.727833 1114710 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 00:48:27.745718 1114710 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:48:27.752828 1114710 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:48:27.752898 1114710 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:48:27.759630 1114710 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 00:48:27.783130 1114710 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 00:48:27.825332 1114710 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 00:48:27.833006 1114710 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 00:48:27.833070 1114710 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 00:48:27.846584 1114710 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 00:48:27.862925 1114710 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 00:48:27.869506 1114710 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 00:48:27.875761 1114710 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 00:48:27.881627 1114710 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 00:48:27.887592 1114710 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 00:48:27.893573 1114710 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 00:48:27.899382 1114710 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0328 00:48:27.905228 1114710 kubeadm.go:391] StartCluster: {Name:cert-expiration-927384 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.29.3 ClusterName:cert-expiration-927384 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Moun
tUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:48:27.905320 1114710 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 00:48:27.905369 1114710 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 00:48:27.958462 1114710 cri.go:89] found id: "6c8ea2d6f48529e858a501fb02ae3646db3ea04d62a986a8d269d55a4a22f959"
	I0328 00:48:27.958480 1114710 cri.go:89] found id: "889d56ee1c25d534c431807945c10cd2cf3845d18061eb50cdd7e96f744c670d"
	I0328 00:48:27.958483 1114710 cri.go:89] found id: "7df76a570371f45d1c809bb8525032dfa3e4d0ac34980128df2913a916d01050"
	I0328 00:48:27.958486 1114710 cri.go:89] found id: "0a4d6d97b5876b7f41ae8f8b2fc9e7706e252cf372bbb63716d7325ee7dcde80"
	I0328 00:48:27.958489 1114710 cri.go:89] found id: "a232770ca991bedb522785fcbd1be0f9b04c0df3497ee18bc8ea4f4a10fef9b9"
	I0328 00:48:27.958492 1114710 cri.go:89] found id: "eaef1aadcb512b19f0203b051067c9093fc6568665498c540d4b6e65ce102f3e"
	I0328 00:48:27.958494 1114710 cri.go:89] found id: "3c458acc4d38eae614292a1d35e7215277fac9d47d26f58b2998c9361ecd9f28"
	I0328 00:48:27.958497 1114710 cri.go:89] found id: "de58f93d3a3b6bf7c15ff90af303f39c5bbde9e729d2309a365420e0ecea6477"
	I0328 00:48:27.958499 1114710 cri.go:89] found id: "547d29e826214966219a18ea23219f0fbe13aaff13e1cee4027930e139040665"
	I0328 00:48:27.958505 1114710 cri.go:89] found id: "6bd385b1ed854e674fc2ee6648c9f77849dd1cbae0ad6b5e98d4c1559cb3db21"
	I0328 00:48:27.958508 1114710 cri.go:89] found id: "aa91868317a1e3801d3222ae012e217a242e00166e053e3698394112cf1e0e34"
	I0328 00:48:27.958511 1114710 cri.go:89] found id: "7e93cba6aefcba5288d8312b6c961538a517c3e6b0b1edfeb2166ef8cc0230c1"
	I0328 00:48:27.958513 1114710 cri.go:89] found id: "aad8a014a45c34513990a53f3c1e213badb56faea151cc23c074451973db76bf"
	I0328 00:48:27.958516 1114710 cri.go:89] found id: ""
	I0328 00:48:27.958576 1114710 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 28 00:48:47 pause-040046 crio[2128]: time="2024-03-28 00:48:47.936820303Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711586927936797701,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=487e2aa2-5828-42de-ba70-242d248b4aef name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:48:47 pause-040046 crio[2128]: time="2024-03-28 00:48:47.937444001Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a80f35f-60f3-4dd3-91fd-425bb69b7e06 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:48:47 pause-040046 crio[2128]: time="2024-03-28 00:48:47.937497376Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a80f35f-60f3-4dd3-91fd-425bb69b7e06 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:48:47 pause-040046 crio[2128]: time="2024-03-28 00:48:47.937733930Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad125daaf884447921ace6baa3fac20fb9080244b462a3e6c1dfa81b3dd1bf9e,PodSandboxId:f2711f2561563c9dc0b89b71de750e6e8b0461558ecaafb570f43d5958900b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711586906254176473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-d9zx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcb1807-c16a-428c-9292-e7f4a8ff9d00,},Annotations:map[string]string{io.kubernetes.container.hash: 7530ac82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d21716d3db96dc4becdcf1946eb6aeb2bad5d0e6a01d4704807ca7b1c717663,PodSandboxId:0bfdaf3c6731dbd36344a7d6856787e94a075c01506f983f9693dfea067ac12e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711586906244232513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5tlrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 249cdd5d-91ae-4248-9a00-f4959c78b3b2,},Annotations:map[string]string{io.kubernetes.container.hash: 62269400,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a953be1a41bf293fed13e0cb2e03fd3a5cd17c9b79aee46a3884d75e51d7453,PodSandboxId:e6197137bbd50e78913254d90cc8874393f7bc3d1e7ebf5bc4e59c5573bd8536,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711586901570748638,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf1ef5c54fdae67febb96c1cd5ff3e5f,},Annot
ations:map[string]string{io.kubernetes.container.hash: 69e7280e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2924b060928092dbbd9bd04663a3b4db9488ebea45a3fcf8f014a444cd2902a,PodSandboxId:246a8e4a4bfce5313868706064d22d2da4b72c2d7a4a54fcf1f721c538a7e5b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711586901600556046,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcc4c94bbad251b426f15a339077c36,},Annotations:map[string]
string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a15940c4f84a4954a314771f1309e8bbf59250930010e92dc67bbc616dc6099,PodSandboxId:978617d4de30fbcfea266b10efdaaa2e4aadab97a9daf324e7ebe7cf66fa6cf3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711586901585944282,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d72e7312da1738efc1668540f0e695a,},Annotations:map[string]string{io.kubernet
es.container.hash: 11ee3fec,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b6176f7ebcce772df04ac9936e524988d01f46c826a569ca6ab05daf111e19f,PodSandboxId:3a57d6e0c813adf153ea3ab13838ff43bb350518484bc795616b17c4dd407b3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711586901580910804,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42f3ca6be942820b85cc87c91c1ac4b8,},Annotations:map[string]string{io
.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2264e41f77cc4f28ea256a12349256853bb9b3a2421f649b7e75348e9561392b,PodSandboxId:f2711f2561563c9dc0b89b71de750e6e8b0461558ecaafb570f43d5958900b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711586878330422134,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-d9zx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcb1807-c16a-428c-9292-e7f4a8ff9d00,},Annotations:map[string]string{io.kubernetes.container.hash: 7530
ac82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:737bc1da2980fc5534519112255b2b2e4ced04b8f71571c699f8e1eedf84b5e9,PodSandboxId:0bfdaf3c6731dbd36344a7d6856787e94a075c01506f983f9693dfea067ac12e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711586877659540751,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-5tlrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 249cdd5d-91ae-4248-9a00-f4959c78b3b2,},Annotations:map[string]string{io.kubernetes.container.hash: 62269400,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adae09481ab4f504b6cc144443edbfd307e7f230fd84bab6c985cdc13f9c0558,PodSandboxId:246a8e4a4bfce5313868706064d22d2da4b72c2d7a4a54fcf1f721c538a7e5b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711586877603939716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcc4c94bbad251b426f15a339077c36,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ad7b59f3f0b5484e2c6ba034025fedf7553ccecaad58ee34b0b4a996190bb0,PodSandboxId:978617d4de30fbcfea266b10efdaaa2e4aadab97a9daf324e7ebe7cf66fa6cf3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711586877716904505,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-040046,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d72e7312da1738efc1668540f0e695a,},Annotations:map[string]string{io.kubernetes.container.hash: 11ee3fec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd187f6614af02b4d750c2d77c5a355583e1d586c760fd769441be096ef6314,PodSandboxId:e6197137bbd50e78913254d90cc8874393f7bc3d1e7ebf5bc4e59c5573bd8536,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711586877575308986,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: bf1ef5c54fdae67febb96c1cd5ff3e5f,},Annotations:map[string]string{io.kubernetes.container.hash: 69e7280e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd72f355a2ea559cef3868b146cd6fd26cba6952ec6955c36683ada0df0f439,PodSandboxId:3a57d6e0c813adf153ea3ab13838ff43bb350518484bc795616b17c4dd407b3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711586877565195150,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 42f3ca6be942820b85cc87c91c1ac4b8,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6a80f35f-60f3-4dd3-91fd-425bb69b7e06 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:48:47 pause-040046 crio[2128]: time="2024-03-28 00:48:47.982372097Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=11030c16-2d0c-4ba5-84a1-e770dab0e1e7 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:48:47 pause-040046 crio[2128]: time="2024-03-28 00:48:47.982445318Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=11030c16-2d0c-4ba5-84a1-e770dab0e1e7 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:48:47 pause-040046 crio[2128]: time="2024-03-28 00:48:47.983531239Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f4582aa1-05e4-4644-80e4-e2d62bfd1b0e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:48:47 pause-040046 crio[2128]: time="2024-03-28 00:48:47.984154960Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711586927984129614,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f4582aa1-05e4-4644-80e4-e2d62bfd1b0e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:48:47 pause-040046 crio[2128]: time="2024-03-28 00:48:47.984782525Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7602085f-a319-48d4-b4bf-3564b37e45d0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:48:47 pause-040046 crio[2128]: time="2024-03-28 00:48:47.984836575Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7602085f-a319-48d4-b4bf-3564b37e45d0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:48:47 pause-040046 crio[2128]: time="2024-03-28 00:48:47.985350330Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad125daaf884447921ace6baa3fac20fb9080244b462a3e6c1dfa81b3dd1bf9e,PodSandboxId:f2711f2561563c9dc0b89b71de750e6e8b0461558ecaafb570f43d5958900b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711586906254176473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-d9zx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcb1807-c16a-428c-9292-e7f4a8ff9d00,},Annotations:map[string]string{io.kubernetes.container.hash: 7530ac82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d21716d3db96dc4becdcf1946eb6aeb2bad5d0e6a01d4704807ca7b1c717663,PodSandboxId:0bfdaf3c6731dbd36344a7d6856787e94a075c01506f983f9693dfea067ac12e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711586906244232513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5tlrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 249cdd5d-91ae-4248-9a00-f4959c78b3b2,},Annotations:map[string]string{io.kubernetes.container.hash: 62269400,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a953be1a41bf293fed13e0cb2e03fd3a5cd17c9b79aee46a3884d75e51d7453,PodSandboxId:e6197137bbd50e78913254d90cc8874393f7bc3d1e7ebf5bc4e59c5573bd8536,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711586901570748638,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf1ef5c54fdae67febb96c1cd5ff3e5f,},Annot
ations:map[string]string{io.kubernetes.container.hash: 69e7280e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2924b060928092dbbd9bd04663a3b4db9488ebea45a3fcf8f014a444cd2902a,PodSandboxId:246a8e4a4bfce5313868706064d22d2da4b72c2d7a4a54fcf1f721c538a7e5b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711586901600556046,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcc4c94bbad251b426f15a339077c36,},Annotations:map[string]
string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a15940c4f84a4954a314771f1309e8bbf59250930010e92dc67bbc616dc6099,PodSandboxId:978617d4de30fbcfea266b10efdaaa2e4aadab97a9daf324e7ebe7cf66fa6cf3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711586901585944282,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d72e7312da1738efc1668540f0e695a,},Annotations:map[string]string{io.kubernet
es.container.hash: 11ee3fec,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b6176f7ebcce772df04ac9936e524988d01f46c826a569ca6ab05daf111e19f,PodSandboxId:3a57d6e0c813adf153ea3ab13838ff43bb350518484bc795616b17c4dd407b3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711586901580910804,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42f3ca6be942820b85cc87c91c1ac4b8,},Annotations:map[string]string{io
.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2264e41f77cc4f28ea256a12349256853bb9b3a2421f649b7e75348e9561392b,PodSandboxId:f2711f2561563c9dc0b89b71de750e6e8b0461558ecaafb570f43d5958900b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711586878330422134,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-d9zx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcb1807-c16a-428c-9292-e7f4a8ff9d00,},Annotations:map[string]string{io.kubernetes.container.hash: 7530
ac82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:737bc1da2980fc5534519112255b2b2e4ced04b8f71571c699f8e1eedf84b5e9,PodSandboxId:0bfdaf3c6731dbd36344a7d6856787e94a075c01506f983f9693dfea067ac12e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711586877659540751,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-5tlrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 249cdd5d-91ae-4248-9a00-f4959c78b3b2,},Annotations:map[string]string{io.kubernetes.container.hash: 62269400,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adae09481ab4f504b6cc144443edbfd307e7f230fd84bab6c985cdc13f9c0558,PodSandboxId:246a8e4a4bfce5313868706064d22d2da4b72c2d7a4a54fcf1f721c538a7e5b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711586877603939716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcc4c94bbad251b426f15a339077c36,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ad7b59f3f0b5484e2c6ba034025fedf7553ccecaad58ee34b0b4a996190bb0,PodSandboxId:978617d4de30fbcfea266b10efdaaa2e4aadab97a9daf324e7ebe7cf66fa6cf3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711586877716904505,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-040046,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d72e7312da1738efc1668540f0e695a,},Annotations:map[string]string{io.kubernetes.container.hash: 11ee3fec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd187f6614af02b4d750c2d77c5a355583e1d586c760fd769441be096ef6314,PodSandboxId:e6197137bbd50e78913254d90cc8874393f7bc3d1e7ebf5bc4e59c5573bd8536,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711586877575308986,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: bf1ef5c54fdae67febb96c1cd5ff3e5f,},Annotations:map[string]string{io.kubernetes.container.hash: 69e7280e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd72f355a2ea559cef3868b146cd6fd26cba6952ec6955c36683ada0df0f439,PodSandboxId:3a57d6e0c813adf153ea3ab13838ff43bb350518484bc795616b17c4dd407b3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711586877565195150,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 42f3ca6be942820b85cc87c91c1ac4b8,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7602085f-a319-48d4-b4bf-3564b37e45d0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:48:48 pause-040046 crio[2128]: time="2024-03-28 00:48:48.043563390Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=354294c4-9ae4-488e-b64f-c164f5016e1b name=/runtime.v1.RuntimeService/Version
	Mar 28 00:48:48 pause-040046 crio[2128]: time="2024-03-28 00:48:48.043634572Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=354294c4-9ae4-488e-b64f-c164f5016e1b name=/runtime.v1.RuntimeService/Version
	Mar 28 00:48:48 pause-040046 crio[2128]: time="2024-03-28 00:48:48.045161829Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=29344592-8fa1-45ae-8836-be2ab1d22024 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:48:48 pause-040046 crio[2128]: time="2024-03-28 00:48:48.048623460Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711586928048583922,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=29344592-8fa1-45ae-8836-be2ab1d22024 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:48:48 pause-040046 crio[2128]: time="2024-03-28 00:48:48.052874141Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fac96701-0673-45e6-acdd-e90557bc96bb name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:48:48 pause-040046 crio[2128]: time="2024-03-28 00:48:48.053008505Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fac96701-0673-45e6-acdd-e90557bc96bb name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:48:48 pause-040046 crio[2128]: time="2024-03-28 00:48:48.053365649Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad125daaf884447921ace6baa3fac20fb9080244b462a3e6c1dfa81b3dd1bf9e,PodSandboxId:f2711f2561563c9dc0b89b71de750e6e8b0461558ecaafb570f43d5958900b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711586906254176473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-d9zx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcb1807-c16a-428c-9292-e7f4a8ff9d00,},Annotations:map[string]string{io.kubernetes.container.hash: 7530ac82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d21716d3db96dc4becdcf1946eb6aeb2bad5d0e6a01d4704807ca7b1c717663,PodSandboxId:0bfdaf3c6731dbd36344a7d6856787e94a075c01506f983f9693dfea067ac12e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711586906244232513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5tlrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 249cdd5d-91ae-4248-9a00-f4959c78b3b2,},Annotations:map[string]string{io.kubernetes.container.hash: 62269400,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a953be1a41bf293fed13e0cb2e03fd3a5cd17c9b79aee46a3884d75e51d7453,PodSandboxId:e6197137bbd50e78913254d90cc8874393f7bc3d1e7ebf5bc4e59c5573bd8536,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711586901570748638,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf1ef5c54fdae67febb96c1cd5ff3e5f,},Annot
ations:map[string]string{io.kubernetes.container.hash: 69e7280e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2924b060928092dbbd9bd04663a3b4db9488ebea45a3fcf8f014a444cd2902a,PodSandboxId:246a8e4a4bfce5313868706064d22d2da4b72c2d7a4a54fcf1f721c538a7e5b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711586901600556046,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcc4c94bbad251b426f15a339077c36,},Annotations:map[string]
string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a15940c4f84a4954a314771f1309e8bbf59250930010e92dc67bbc616dc6099,PodSandboxId:978617d4de30fbcfea266b10efdaaa2e4aadab97a9daf324e7ebe7cf66fa6cf3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711586901585944282,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d72e7312da1738efc1668540f0e695a,},Annotations:map[string]string{io.kubernet
es.container.hash: 11ee3fec,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b6176f7ebcce772df04ac9936e524988d01f46c826a569ca6ab05daf111e19f,PodSandboxId:3a57d6e0c813adf153ea3ab13838ff43bb350518484bc795616b17c4dd407b3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711586901580910804,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42f3ca6be942820b85cc87c91c1ac4b8,},Annotations:map[string]string{io
.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2264e41f77cc4f28ea256a12349256853bb9b3a2421f649b7e75348e9561392b,PodSandboxId:f2711f2561563c9dc0b89b71de750e6e8b0461558ecaafb570f43d5958900b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711586878330422134,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-d9zx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcb1807-c16a-428c-9292-e7f4a8ff9d00,},Annotations:map[string]string{io.kubernetes.container.hash: 7530
ac82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:737bc1da2980fc5534519112255b2b2e4ced04b8f71571c699f8e1eedf84b5e9,PodSandboxId:0bfdaf3c6731dbd36344a7d6856787e94a075c01506f983f9693dfea067ac12e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711586877659540751,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-5tlrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 249cdd5d-91ae-4248-9a00-f4959c78b3b2,},Annotations:map[string]string{io.kubernetes.container.hash: 62269400,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adae09481ab4f504b6cc144443edbfd307e7f230fd84bab6c985cdc13f9c0558,PodSandboxId:246a8e4a4bfce5313868706064d22d2da4b72c2d7a4a54fcf1f721c538a7e5b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711586877603939716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcc4c94bbad251b426f15a339077c36,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ad7b59f3f0b5484e2c6ba034025fedf7553ccecaad58ee34b0b4a996190bb0,PodSandboxId:978617d4de30fbcfea266b10efdaaa2e4aadab97a9daf324e7ebe7cf66fa6cf3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711586877716904505,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-040046,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d72e7312da1738efc1668540f0e695a,},Annotations:map[string]string{io.kubernetes.container.hash: 11ee3fec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd187f6614af02b4d750c2d77c5a355583e1d586c760fd769441be096ef6314,PodSandboxId:e6197137bbd50e78913254d90cc8874393f7bc3d1e7ebf5bc4e59c5573bd8536,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711586877575308986,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: bf1ef5c54fdae67febb96c1cd5ff3e5f,},Annotations:map[string]string{io.kubernetes.container.hash: 69e7280e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd72f355a2ea559cef3868b146cd6fd26cba6952ec6955c36683ada0df0f439,PodSandboxId:3a57d6e0c813adf153ea3ab13838ff43bb350518484bc795616b17c4dd407b3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711586877565195150,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 42f3ca6be942820b85cc87c91c1ac4b8,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fac96701-0673-45e6-acdd-e90557bc96bb name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:48:48 pause-040046 crio[2128]: time="2024-03-28 00:48:48.106547456Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9e8cd12c-5630-4dca-90ea-1862768e68c8 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:48:48 pause-040046 crio[2128]: time="2024-03-28 00:48:48.107099807Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9e8cd12c-5630-4dca-90ea-1862768e68c8 name=/runtime.v1.RuntimeService/Version
	Mar 28 00:48:48 pause-040046 crio[2128]: time="2024-03-28 00:48:48.108595721Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2762bc73-164e-4b06-839e-5c836d5c7e2d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:48:48 pause-040046 crio[2128]: time="2024-03-28 00:48:48.109142842Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711586928109115750,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2762bc73-164e-4b06-839e-5c836d5c7e2d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 00:48:48 pause-040046 crio[2128]: time="2024-03-28 00:48:48.110179483Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a040f5f5-01fe-4d36-b5d8-588f10969611 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:48:48 pause-040046 crio[2128]: time="2024-03-28 00:48:48.110284707Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a040f5f5-01fe-4d36-b5d8-588f10969611 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 00:48:48 pause-040046 crio[2128]: time="2024-03-28 00:48:48.110590739Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad125daaf884447921ace6baa3fac20fb9080244b462a3e6c1dfa81b3dd1bf9e,PodSandboxId:f2711f2561563c9dc0b89b71de750e6e8b0461558ecaafb570f43d5958900b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711586906254176473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-d9zx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcb1807-c16a-428c-9292-e7f4a8ff9d00,},Annotations:map[string]string{io.kubernetes.container.hash: 7530ac82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d21716d3db96dc4becdcf1946eb6aeb2bad5d0e6a01d4704807ca7b1c717663,PodSandboxId:0bfdaf3c6731dbd36344a7d6856787e94a075c01506f983f9693dfea067ac12e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711586906244232513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5tlrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 249cdd5d-91ae-4248-9a00-f4959c78b3b2,},Annotations:map[string]string{io.kubernetes.container.hash: 62269400,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a953be1a41bf293fed13e0cb2e03fd3a5cd17c9b79aee46a3884d75e51d7453,PodSandboxId:e6197137bbd50e78913254d90cc8874393f7bc3d1e7ebf5bc4e59c5573bd8536,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711586901570748638,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf1ef5c54fdae67febb96c1cd5ff3e5f,},Annot
ations:map[string]string{io.kubernetes.container.hash: 69e7280e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2924b060928092dbbd9bd04663a3b4db9488ebea45a3fcf8f014a444cd2902a,PodSandboxId:246a8e4a4bfce5313868706064d22d2da4b72c2d7a4a54fcf1f721c538a7e5b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711586901600556046,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcc4c94bbad251b426f15a339077c36,},Annotations:map[string]
string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a15940c4f84a4954a314771f1309e8bbf59250930010e92dc67bbc616dc6099,PodSandboxId:978617d4de30fbcfea266b10efdaaa2e4aadab97a9daf324e7ebe7cf66fa6cf3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711586901585944282,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d72e7312da1738efc1668540f0e695a,},Annotations:map[string]string{io.kubernet
es.container.hash: 11ee3fec,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b6176f7ebcce772df04ac9936e524988d01f46c826a569ca6ab05daf111e19f,PodSandboxId:3a57d6e0c813adf153ea3ab13838ff43bb350518484bc795616b17c4dd407b3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711586901580910804,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42f3ca6be942820b85cc87c91c1ac4b8,},Annotations:map[string]string{io
.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2264e41f77cc4f28ea256a12349256853bb9b3a2421f649b7e75348e9561392b,PodSandboxId:f2711f2561563c9dc0b89b71de750e6e8b0461558ecaafb570f43d5958900b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711586878330422134,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-d9zx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcb1807-c16a-428c-9292-e7f4a8ff9d00,},Annotations:map[string]string{io.kubernetes.container.hash: 7530
ac82,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:737bc1da2980fc5534519112255b2b2e4ced04b8f71571c699f8e1eedf84b5e9,PodSandboxId:0bfdaf3c6731dbd36344a7d6856787e94a075c01506f983f9693dfea067ac12e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711586877659540751,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-5tlrp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 249cdd5d-91ae-4248-9a00-f4959c78b3b2,},Annotations:map[string]string{io.kubernetes.container.hash: 62269400,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adae09481ab4f504b6cc144443edbfd307e7f230fd84bab6c985cdc13f9c0558,PodSandboxId:246a8e4a4bfce5313868706064d22d2da4b72c2d7a4a54fcf1f721c538a7e5b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711586877603939716,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbcc4c94bbad251b426f15a339077c36,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ad7b59f3f0b5484e2c6ba034025fedf7553ccecaad58ee34b0b4a996190bb0,PodSandboxId:978617d4de30fbcfea266b10efdaaa2e4aadab97a9daf324e7ebe7cf66fa6cf3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711586877716904505,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-040046,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d72e7312da1738efc1668540f0e695a,},Annotations:map[string]string{io.kubernetes.container.hash: 11ee3fec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd187f6614af02b4d750c2d77c5a355583e1d586c760fd769441be096ef6314,PodSandboxId:e6197137bbd50e78913254d90cc8874393f7bc3d1e7ebf5bc4e59c5573bd8536,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711586877575308986,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: bf1ef5c54fdae67febb96c1cd5ff3e5f,},Annotations:map[string]string{io.kubernetes.container.hash: 69e7280e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd72f355a2ea559cef3868b146cd6fd26cba6952ec6955c36683ada0df0f439,PodSandboxId:3a57d6e0c813adf153ea3ab13838ff43bb350518484bc795616b17c4dd407b3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711586877565195150,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-040046,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 42f3ca6be942820b85cc87c91c1ac4b8,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a040f5f5-01fe-4d36-b5d8-588f10969611 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ad125daaf8844       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   21 seconds ago      Running             coredns                   2                   f2711f2561563       coredns-76f75df574-d9zx2
	6d21716d3db96       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   21 seconds ago      Running             kube-proxy                2                   0bfdaf3c6731d       kube-proxy-5tlrp
	a2924b0609280       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   26 seconds ago      Running             kube-scheduler            2                   246a8e4a4bfce       kube-scheduler-pause-040046
	7a15940c4f84a       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   26 seconds ago      Running             kube-apiserver            2                   978617d4de30f       kube-apiserver-pause-040046
	6b6176f7ebcce       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   26 seconds ago      Running             kube-controller-manager   2                   3a57d6e0c813a       kube-controller-manager-pause-040046
	3a953be1a41bf       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   26 seconds ago      Running             etcd                      2                   e6197137bbd50       etcd-pause-040046
	2264e41f77cc4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   49 seconds ago      Exited              coredns                   1                   f2711f2561563       coredns-76f75df574-d9zx2
	94ad7b59f3f0b       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   50 seconds ago      Exited              kube-apiserver            1                   978617d4de30f       kube-apiserver-pause-040046
	737bc1da2980f       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   50 seconds ago      Exited              kube-proxy                1                   0bfdaf3c6731d       kube-proxy-5tlrp
	adae09481ab4f       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   50 seconds ago      Exited              kube-scheduler            1                   246a8e4a4bfce       kube-scheduler-pause-040046
	abd187f6614af       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   50 seconds ago      Exited              etcd                      1                   e6197137bbd50       etcd-pause-040046
	8bd72f355a2ea       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   50 seconds ago      Exited              kube-controller-manager   1                   3a57d6e0c813a       kube-controller-manager-pause-040046
	
	
	==> coredns [2264e41f77cc4f28ea256a12349256853bb9b3a2421f649b7e75348e9561392b] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] plugin/health: Going into lameduck mode for 5s
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:49871 - 57042 "HINFO IN 589354795089262720.7371850693171307278. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010209795s
	
	
	==> coredns [ad125daaf884447921ace6baa3fac20fb9080244b462a3e6c1dfa81b3dd1bf9e] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52539 - 35473 "HINFO IN 7062009350584288299.3531532817132609295. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009202525s
	
	
	==> describe nodes <==
	Name:               pause-040046
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-040046
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=pause-040046
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_28T00_46_54_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 00:46:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-040046
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 00:48:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 00:48:25 +0000   Thu, 28 Mar 2024 00:46:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 00:48:25 +0000   Thu, 28 Mar 2024 00:46:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 00:48:25 +0000   Thu, 28 Mar 2024 00:46:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 00:48:25 +0000   Thu, 28 Mar 2024 00:46:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.233
	  Hostname:    pause-040046
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 8dfa7a8119134880a0c21fe76cbfec60
	  System UUID:                8dfa7a81-1913-4880-a0c2-1fe76cbfec60
	  Boot ID:                    4cd15043-5dd7-4730-a218-c4a23fd2960e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-d9zx2                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     101s
	  kube-system                 etcd-pause-040046                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         114s
	  kube-system                 kube-apiserver-pause-040046             250m (12%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-pause-040046    200m (10%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-5tlrp                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-scheduler-pause-040046             100m (5%)     0 (0%)      0 (0%)           0 (0%)         114s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 21s                kube-proxy       
	  Normal   Starting                 46s                kube-proxy       
	  Normal   Starting                 100s               kube-proxy       
	  Normal   NodeAllocatableEnforced  2m1s               kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m1s               kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     2m (x7 over 2m1s)  kubelet          Node pause-040046 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m (x8 over 2m1s)  kubelet          Node pause-040046 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m (x8 over 2m1s)  kubelet          Node pause-040046 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 114s               kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  114s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  114s               kubelet          Node pause-040046 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    114s               kubelet          Node pause-040046 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     114s               kubelet          Node pause-040046 status is now: NodeHasSufficientPID
	  Normal   NodeReady                113s               kubelet          Node pause-040046 status is now: NodeReady
	  Normal   RegisteredNode           102s               node-controller  Node pause-040046 event: Registered Node pause-040046 in Controller
	  Warning  ContainerGCFailed        54s                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 28s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  27s (x8 over 27s)  kubelet          Node pause-040046 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    27s (x8 over 27s)  kubelet          Node pause-040046 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     27s (x7 over 27s)  kubelet          Node pause-040046 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  27s                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10s                node-controller  Node pause-040046 event: Registered Node pause-040046 in Controller
	
	
	==> dmesg <==
	[  +0.056851] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059137] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.185804] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.156089] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.321006] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.709069] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +0.064512] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.094203] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +0.077974] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.560016] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.683669] systemd-fstab-generator[1280]: Ignoring "noauto" option for root device
	[Mar28 00:47] systemd-fstab-generator[1475]: Ignoring "noauto" option for root device
	[  +0.105960] kauditd_printk_skb: 15 callbacks suppressed
	[ +11.245990] kauditd_printk_skb: 63 callbacks suppressed
	[ +37.811695] systemd-fstab-generator[2047]: Ignoring "noauto" option for root device
	[  +0.172214] systemd-fstab-generator[2059]: Ignoring "noauto" option for root device
	[  +0.209657] systemd-fstab-generator[2073]: Ignoring "noauto" option for root device
	[  +0.160851] systemd-fstab-generator[2086]: Ignoring "noauto" option for root device
	[  +0.330630] systemd-fstab-generator[2113]: Ignoring "noauto" option for root device
	[  +2.123144] systemd-fstab-generator[2640]: Ignoring "noauto" option for root device
	[Mar28 00:48] kauditd_printk_skb: 191 callbacks suppressed
	[ +18.688611] systemd-fstab-generator[3011]: Ignoring "noauto" option for root device
	[  +5.662442] kauditd_printk_skb: 43 callbacks suppressed
	[ +11.793130] kauditd_printk_skb: 2 callbacks suppressed
	[  +3.038702] systemd-fstab-generator[3454]: Ignoring "noauto" option for root device
	
	
	==> etcd [3a953be1a41bf293fed13e0cb2e03fd3a5cd17c9b79aee46a3884d75e51d7453] <==
	{"level":"info","ts":"2024-03-28T00:48:22.021613Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-28T00:48:22.021723Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-28T00:48:22.022203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 switched to configuration voters=(7461943560404801907)"}
	{"level":"info","ts":"2024-03-28T00:48:22.0224Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"30d9b598be045872","local-member-id":"678e262213f11973","added-peer-id":"678e262213f11973","added-peer-peer-urls":["https://192.168.39.233:2380"]}
	{"level":"info","ts":"2024-03-28T00:48:22.022658Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"30d9b598be045872","local-member-id":"678e262213f11973","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T00:48:22.022772Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T00:48:22.08588Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-28T00:48:22.095836Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.233:2380"}
	{"level":"info","ts":"2024-03-28T00:48:22.106253Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.233:2380"}
	{"level":"info","ts":"2024-03-28T00:48:22.106143Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-28T00:48:22.101355Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"678e262213f11973","initial-advertise-peer-urls":["https://192.168.39.233:2380"],"listen-peer-urls":["https://192.168.39.233:2380"],"advertise-client-urls":["https://192.168.39.233:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.233:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-28T00:48:23.385209Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 is starting a new election at term 3"}
	{"level":"info","ts":"2024-03-28T00:48:23.385329Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-03-28T00:48:23.385384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 received MsgPreVoteResp from 678e262213f11973 at term 3"}
	{"level":"info","ts":"2024-03-28T00:48:23.385426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 became candidate at term 4"}
	{"level":"info","ts":"2024-03-28T00:48:23.385456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 received MsgVoteResp from 678e262213f11973 at term 4"}
	{"level":"info","ts":"2024-03-28T00:48:23.38549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 became leader at term 4"}
	{"level":"info","ts":"2024-03-28T00:48:23.385522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 678e262213f11973 elected leader 678e262213f11973 at term 4"}
	{"level":"info","ts":"2024-03-28T00:48:23.395066Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"678e262213f11973","local-member-attributes":"{Name:pause-040046 ClientURLs:[https://192.168.39.233:2379]}","request-path":"/0/members/678e262213f11973/attributes","cluster-id":"30d9b598be045872","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-28T00:48:23.395183Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T00:48:23.395516Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T00:48:23.398195Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-28T00:48:23.400569Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.233:2379"}
	{"level":"info","ts":"2024-03-28T00:48:23.400686Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-28T00:48:23.403159Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [abd187f6614af02b4d750c2d77c5a355583e1d586c760fd769441be096ef6314] <==
	{"level":"warn","ts":"2024-03-28T00:48:02.105396Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-28T00:48:01.458052Z","time spent":"647.273758ms","remote":"127.0.0.1:40860","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-lvbeik6gt3ffzhl7eycuvykhva\" mod_revision:418 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-lvbeik6gt3ffzhl7eycuvykhva\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-lvbeik6gt3ffzhl7eycuvykhva\" > >"}
	{"level":"info","ts":"2024-03-28T00:48:02.105649Z","caller":"traceutil/trace.go:171","msg":"trace[1751956951] transaction","detail":"{read_only:false; number_of_response:1; response_revision:423; }","duration":"647.522481ms","start":"2024-03-28T00:48:01.458112Z","end":"2024-03-28T00:48:02.105634Z","steps":["trace[1751956951] 'process raft request'  (duration: 646.956533ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T00:48:02.105835Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-28T00:48:01.458107Z","time spent":"647.603566ms","remote":"127.0.0.1:40632","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":41,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.39.233\" mod_revision:419 > success:<request_delete_range:<key:\"/registry/masterleases/192.168.39.233\" > > failure:<request_range:<key:\"/registry/masterleases/192.168.39.233\" > >"}
	{"level":"info","ts":"2024-03-28T00:48:02.106118Z","caller":"traceutil/trace.go:171","msg":"trace[2042938705] linearizableReadLoop","detail":"{readStateIndex:446; appliedIndex:443; }","duration":"294.948782ms","start":"2024-03-28T00:48:01.811161Z","end":"2024-03-28T00:48:02.10611Z","steps":["trace[2042938705] 'read index received'  (duration: 247.095335ms)","trace[2042938705] 'applied index is now lower than readState.Index'  (duration: 47.852525ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-28T00:48:02.106421Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"624.872938ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" ","response":"range_response_count:1 size:3021"}
	{"level":"info","ts":"2024-03-28T00:48:02.108393Z","caller":"traceutil/trace.go:171","msg":"trace[523733625] range","detail":"{range_begin:/registry/configmaps/kube-system/extension-apiserver-authentication; range_end:; response_count:1; response_revision:424; }","duration":"626.868804ms","start":"2024-03-28T00:48:01.481515Z","end":"2024-03-28T00:48:02.108384Z","steps":["trace[523733625] 'agreement among raft nodes before linearized reading'  (duration: 624.855194ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T00:48:02.108444Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-28T00:48:01.481502Z","time spent":"626.931158ms","remote":"127.0.0.1:40682","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":1,"response size":3043,"request content":"key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" "}
	{"level":"info","ts":"2024-03-28T00:48:02.106456Z","caller":"traceutil/trace.go:171","msg":"trace[498556886] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"639.869442ms","start":"2024-03-28T00:48:01.466577Z","end":"2024-03-28T00:48:02.106447Z","steps":["trace[498556886] 'process raft request'  (duration: 638.534427ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T00:48:02.108663Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-28T00:48:01.466561Z","time spent":"642.070433ms","remote":"127.0.0.1:40764","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6586,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-040046\" mod_revision:305 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-040046\" value_size:6515 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-040046\" > >"}
	{"level":"warn","ts":"2024-03-28T00:48:02.106506Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"648.48433ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:116"}
	{"level":"info","ts":"2024-03-28T00:48:02.108852Z","caller":"traceutil/trace.go:171","msg":"trace[1410905509] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:424; }","duration":"650.897598ms","start":"2024-03-28T00:48:01.457946Z","end":"2024-03-28T00:48:02.108844Z","steps":["trace[1410905509] 'agreement among raft nodes before linearized reading'  (duration: 648.526095ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T00:48:02.108896Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-28T00:48:01.457944Z","time spent":"650.944757ms","remote":"127.0.0.1:40646","response type":"/etcdserverpb.KV/Range","request count":0,"request size":29,"response count":1,"response size":138,"request content":"key:\"/registry/ranges/serviceips\" "}
	{"level":"warn","ts":"2024-03-28T00:48:02.106536Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"648.595639ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" ","response":"range_response_count:1 size:118"}
	{"level":"info","ts":"2024-03-28T00:48:02.109184Z","caller":"traceutil/trace.go:171","msg":"trace[753335425] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:1; response_revision:424; }","duration":"651.243758ms","start":"2024-03-28T00:48:01.457931Z","end":"2024-03-28T00:48:02.109174Z","steps":["trace[753335425] 'agreement among raft nodes before linearized reading'  (duration: 648.589342ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T00:48:02.109236Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-28T00:48:01.457926Z","time spent":"651.299499ms","remote":"127.0.0.1:40658","response type":"/etcdserverpb.KV/Range","request count":0,"request size":35,"response count":1,"response size":140,"request content":"key:\"/registry/ranges/servicenodeports\" "}
	{"level":"info","ts":"2024-03-28T00:48:19.494599Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-28T00:48:19.494704Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-040046","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.233:2380"],"advertise-client-urls":["https://192.168.39.233:2379"]}
	{"level":"warn","ts":"2024-03-28T00:48:19.494795Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-28T00:48:19.494859Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-28T00:48:19.496548Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.233:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-28T00:48:19.496603Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.233:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-28T00:48:19.497946Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"678e262213f11973","current-leader-member-id":"678e262213f11973"}
	{"level":"info","ts":"2024-03-28T00:48:19.501426Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.233:2380"}
	{"level":"info","ts":"2024-03-28T00:48:19.501557Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.233:2380"}
	{"level":"info","ts":"2024-03-28T00:48:19.501568Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-040046","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.233:2380"],"advertise-client-urls":["https://192.168.39.233:2379"]}
	
	
	==> kernel <==
	 00:48:48 up 2 min,  0 users,  load average: 0.93, 0.52, 0.20
	Linux pause-040046 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7a15940c4f84a4954a314771f1309e8bbf59250930010e92dc67bbc616dc6099] <==
	I0328 00:48:24.875698       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0328 00:48:24.878078       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0328 00:48:24.878121       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0328 00:48:24.975701       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0328 00:48:24.980667       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0328 00:48:24.980709       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0328 00:48:24.982387       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0328 00:48:24.992389       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0328 00:48:25.003525       1 aggregator.go:165] initial CRD sync complete...
	I0328 00:48:25.003618       1 autoregister_controller.go:141] Starting autoregister controller
	I0328 00:48:25.003650       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0328 00:48:25.003693       1 cache.go:39] Caches are synced for autoregister controller
	I0328 00:48:25.019329       1 shared_informer.go:318] Caches are synced for configmaps
	I0328 00:48:25.019399       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0328 00:48:25.019441       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0328 00:48:25.028723       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0328 00:48:25.881204       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0328 00:48:26.363111       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.233]
	I0328 00:48:26.366812       1 controller.go:624] quota admission added evaluator for: endpoints
	I0328 00:48:26.387763       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0328 00:48:26.888560       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0328 00:48:26.923107       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0328 00:48:26.988680       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0328 00:48:27.024134       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0328 00:48:27.036737       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [94ad7b59f3f0b5484e2c6ba034025fedf7553ccecaad58ee34b0b4a996190bb0] <==
	I0328 00:48:09.145655       1 establishing_controller.go:87] Shutting down EstablishingController
	I0328 00:48:09.147572       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 00:48:09.147634       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0328 00:48:09.147657       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0328 00:48:09.147669       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0328 00:48:09.147682       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0328 00:48:09.147688       1 controller.go:129] Ending legacy_token_tracking_controller
	I0328 00:48:09.147691       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0328 00:48:09.147941       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0328 00:48:09.148048       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0328 00:48:09.148101       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 00:48:09.148175       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 00:48:09.148256       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0328 00:48:09.148328       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0328 00:48:09.148344       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0328 00:48:09.148387       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0328 00:48:09.148428       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0328 00:48:09.148450       1 controller.go:159] Shutting down quota evaluator
	I0328 00:48:09.148481       1 controller.go:178] quota evaluator worker shutdown
	I0328 00:48:09.148582       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0328 00:48:09.148865       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 00:48:09.149688       1 controller.go:178] quota evaluator worker shutdown
	I0328 00:48:09.149734       1 controller.go:178] quota evaluator worker shutdown
	I0328 00:48:09.149745       1 controller.go:178] quota evaluator worker shutdown
	I0328 00:48:09.149756       1 controller.go:178] quota evaluator worker shutdown
	
	
	==> kube-controller-manager [6b6176f7ebcce772df04ac9936e524988d01f46c826a569ca6ab05daf111e19f] <==
	I0328 00:48:38.004849       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0328 00:48:38.006777       1 shared_informer.go:318] Caches are synced for TTL
	I0328 00:48:38.016701       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0328 00:48:38.025027       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0328 00:48:38.027159       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0328 00:48:38.028394       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0328 00:48:38.030689       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0328 00:48:38.043029       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0328 00:48:38.050623       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 00:48:38.061087       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 00:48:38.067510       1 shared_informer.go:318] Caches are synced for taint
	I0328 00:48:38.067641       1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone=""
	I0328 00:48:38.067749       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-040046"
	I0328 00:48:38.067810       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0328 00:48:38.068360       1 event.go:376] "Event occurred" object="pause-040046" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-040046 event: Registered Node pause-040046 in Controller"
	I0328 00:48:38.078711       1 shared_informer.go:318] Caches are synced for GC
	I0328 00:48:38.087298       1 shared_informer.go:318] Caches are synced for daemon sets
	I0328 00:48:38.093267       1 shared_informer.go:318] Caches are synced for taint-eviction-controller
	I0328 00:48:38.094652       1 shared_informer.go:318] Caches are synced for persistent volume
	I0328 00:48:38.097080       1 shared_informer.go:318] Caches are synced for attach detach
	I0328 00:48:38.123750       1 shared_informer.go:318] Caches are synced for deployment
	I0328 00:48:38.146073       1 shared_informer.go:318] Caches are synced for disruption
	I0328 00:48:38.484559       1 shared_informer.go:318] Caches are synced for garbage collector
	I0328 00:48:38.484682       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0328 00:48:38.489709       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [8bd72f355a2ea559cef3868b146cd6fd26cba6952ec6955c36683ada0df0f439] <==
	I0328 00:48:03.506280       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0328 00:48:03.506297       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0328 00:48:03.506314       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0328 00:48:03.506350       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0328 00:48:03.506369       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0328 00:48:03.506407       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0328 00:48:03.506430       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0328 00:48:03.506449       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0328 00:48:03.506572       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0328 00:48:03.506621       1 controllermanager.go:735] "Started controller" controller="resourcequota-controller"
	I0328 00:48:03.506705       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0328 00:48:03.506822       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0328 00:48:03.507052       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0328 00:48:03.508879       1 controllermanager.go:735] "Started controller" controller="deployment-controller"
	I0328 00:48:03.509861       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0328 00:48:03.512843       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0328 00:48:03.517660       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0328 00:48:03.517881       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0328 00:48:03.517911       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0328 00:48:03.544697       1 shared_informer.go:318] Caches are synced for tokens
	W0328 00:48:13.522848       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.233:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.233:8443: connect: connection refused
	W0328 00:48:14.023629       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.233:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.233:8443: connect: connection refused
	W0328 00:48:15.024726       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.233:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.233:8443: connect: connection refused
	W0328 00:48:17.026605       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.233:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.233:8443: connect: connection refused
	E0328 00:48:17.026762       1 cidr_allocator.go:144] "Failed to list all nodes" err="Get \"https://192.168.39.233:8443/api/v1/nodes\": failed to get token for kube-system/node-controller: timed out waiting for the condition"
	
	
	==> kube-proxy [6d21716d3db96dc4becdcf1946eb6aeb2bad5d0e6a01d4704807ca7b1c717663] <==
	I0328 00:48:26.522474       1 server_others.go:72] "Using iptables proxy"
	I0328 00:48:26.550274       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.233"]
	I0328 00:48:26.606772       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 00:48:26.606838       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 00:48:26.606869       1 server_others.go:168] "Using iptables Proxier"
	I0328 00:48:26.610776       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 00:48:26.611263       1 server.go:865] "Version info" version="v1.29.3"
	I0328 00:48:26.611308       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 00:48:26.612784       1 config.go:188] "Starting service config controller"
	I0328 00:48:26.612843       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 00:48:26.612872       1 config.go:97] "Starting endpoint slice config controller"
	I0328 00:48:26.612878       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 00:48:26.613527       1 config.go:315] "Starting node config controller"
	I0328 00:48:26.613569       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 00:48:26.713579       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0328 00:48:26.713789       1 shared_informer.go:318] Caches are synced for service config
	I0328 00:48:26.714329       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [737bc1da2980fc5534519112255b2b2e4ced04b8f71571c699f8e1eedf84b5e9] <==
	I0328 00:47:59.740427       1 server_others.go:72] "Using iptables proxy"
	I0328 00:48:01.815311       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.233"]
	I0328 00:48:01.853144       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 00:48:01.853168       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 00:48:01.853181       1 server_others.go:168] "Using iptables Proxier"
	I0328 00:48:01.856022       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 00:48:01.856335       1 server.go:865] "Version info" version="v1.29.3"
	I0328 00:48:01.856375       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 00:48:01.857700       1 config.go:188] "Starting service config controller"
	I0328 00:48:01.857810       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 00:48:01.857931       1 config.go:97] "Starting endpoint slice config controller"
	I0328 00:48:01.858034       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 00:48:01.858574       1 config.go:315] "Starting node config controller"
	I0328 00:48:01.861192       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 00:48:01.958468       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0328 00:48:01.958551       1 shared_informer.go:318] Caches are synced for service config
	I0328 00:48:01.962052       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [a2924b060928092dbbd9bd04663a3b4db9488ebea45a3fcf8f014a444cd2902a] <==
	I0328 00:48:23.274943       1 serving.go:380] Generated self-signed cert in-memory
	W0328 00:48:24.971649       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0328 00:48:24.971867       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0328 00:48:24.972032       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0328 00:48:24.972160       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0328 00:48:25.035388       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0328 00:48:25.035715       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 00:48:25.038054       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0328 00:48:25.038170       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 00:48:25.040154       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0328 00:48:25.040289       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 00:48:25.139174       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [adae09481ab4f504b6cc144443edbfd307e7f230fd84bab6c985cdc13f9c0558] <==
	I0328 00:48:00.145375       1 serving.go:380] Generated self-signed cert in-memory
	I0328 00:48:01.478623       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0328 00:48:01.478664       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 00:48:02.112475       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0328 00:48:02.112596       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0328 00:48:02.112653       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0328 00:48:02.112672       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 00:48:02.114648       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0328 00:48:02.114727       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 00:48:02.114664       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0328 00:48:02.116851       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0328 00:48:02.213456       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0328 00:48:02.214891       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 00:48:02.219589       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0328 00:48:19.342569       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0328 00:48:19.342658       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0328 00:48:19.342779       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0328 00:48:19.342801       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0328 00:48:19.342842       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController
	E0328 00:48:19.348100       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Mar 28 00:48:21 pause-040046 kubelet[3018]: I0328 00:48:21.332513    3018 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42f3ca6be942820b85cc87c91c1ac4b8-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-040046\" (UID: \"42f3ca6be942820b85cc87c91c1ac4b8\") " pod="kube-system/kube-controller-manager-pause-040046"
	Mar 28 00:48:21 pause-040046 kubelet[3018]: I0328 00:48:21.332546    3018 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dbcc4c94bbad251b426f15a339077c36-kubeconfig\") pod \"kube-scheduler-pause-040046\" (UID: \"dbcc4c94bbad251b426f15a339077c36\") " pod="kube-system/kube-scheduler-pause-040046"
	Mar 28 00:48:21 pause-040046 kubelet[3018]: E0328 00:48:21.530949    3018 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-040046?timeout=10s\": dial tcp 192.168.39.233:8443: connect: connection refused" interval="800ms"
	Mar 28 00:48:21 pause-040046 kubelet[3018]: I0328 00:48:21.550225    3018 scope.go:117] "RemoveContainer" containerID="94ad7b59f3f0b5484e2c6ba034025fedf7553ccecaad58ee34b0b4a996190bb0"
	Mar 28 00:48:21 pause-040046 kubelet[3018]: I0328 00:48:21.550532    3018 scope.go:117] "RemoveContainer" containerID="abd187f6614af02b4d750c2d77c5a355583e1d586c760fd769441be096ef6314"
	Mar 28 00:48:21 pause-040046 kubelet[3018]: I0328 00:48:21.551397    3018 scope.go:117] "RemoveContainer" containerID="8bd72f355a2ea559cef3868b146cd6fd26cba6952ec6955c36683ada0df0f439"
	Mar 28 00:48:21 pause-040046 kubelet[3018]: I0328 00:48:21.552880    3018 scope.go:117] "RemoveContainer" containerID="adae09481ab4f504b6cc144443edbfd307e7f230fd84bab6c985cdc13f9c0558"
	Mar 28 00:48:21 pause-040046 kubelet[3018]: I0328 00:48:21.627175    3018 kubelet_node_status.go:73] "Attempting to register node" node="pause-040046"
	Mar 28 00:48:21 pause-040046 kubelet[3018]: E0328 00:48:21.642529    3018 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.233:8443: connect: connection refused" node="pause-040046"
	Mar 28 00:48:21 pause-040046 kubelet[3018]: W0328 00:48:21.938245    3018 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.233:8443: connect: connection refused
	Mar 28 00:48:21 pause-040046 kubelet[3018]: E0328 00:48:21.938328    3018 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.233:8443: connect: connection refused
	Mar 28 00:48:22 pause-040046 kubelet[3018]: I0328 00:48:22.444663    3018 kubelet_node_status.go:73] "Attempting to register node" node="pause-040046"
	Mar 28 00:48:25 pause-040046 kubelet[3018]: I0328 00:48:25.063917    3018 kubelet_node_status.go:112] "Node was previously registered" node="pause-040046"
	Mar 28 00:48:25 pause-040046 kubelet[3018]: I0328 00:48:25.064104    3018 kubelet_node_status.go:76] "Successfully registered node" node="pause-040046"
	Mar 28 00:48:25 pause-040046 kubelet[3018]: I0328 00:48:25.066373    3018 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 28 00:48:25 pause-040046 kubelet[3018]: I0328 00:48:25.068766    3018 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 28 00:48:25 pause-040046 kubelet[3018]: I0328 00:48:25.913704    3018 apiserver.go:52] "Watching apiserver"
	Mar 28 00:48:25 pause-040046 kubelet[3018]: I0328 00:48:25.917197    3018 topology_manager.go:215] "Topology Admit Handler" podUID="dbcb1807-c16a-428c-9292-e7f4a8ff9d00" podNamespace="kube-system" podName="coredns-76f75df574-d9zx2"
	Mar 28 00:48:25 pause-040046 kubelet[3018]: I0328 00:48:25.917320    3018 topology_manager.go:215] "Topology Admit Handler" podUID="249cdd5d-91ae-4248-9a00-f4959c78b3b2" podNamespace="kube-system" podName="kube-proxy-5tlrp"
	Mar 28 00:48:25 pause-040046 kubelet[3018]: I0328 00:48:25.925138    3018 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 28 00:48:25 pause-040046 kubelet[3018]: I0328 00:48:25.929403    3018 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/249cdd5d-91ae-4248-9a00-f4959c78b3b2-xtables-lock\") pod \"kube-proxy-5tlrp\" (UID: \"249cdd5d-91ae-4248-9a00-f4959c78b3b2\") " pod="kube-system/kube-proxy-5tlrp"
	Mar 28 00:48:25 pause-040046 kubelet[3018]: I0328 00:48:25.929914    3018 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/249cdd5d-91ae-4248-9a00-f4959c78b3b2-lib-modules\") pod \"kube-proxy-5tlrp\" (UID: \"249cdd5d-91ae-4248-9a00-f4959c78b3b2\") " pod="kube-system/kube-proxy-5tlrp"
	Mar 28 00:48:26 pause-040046 kubelet[3018]: I0328 00:48:26.218084    3018 scope.go:117] "RemoveContainer" containerID="737bc1da2980fc5534519112255b2b2e4ced04b8f71571c699f8e1eedf84b5e9"
	Mar 28 00:48:26 pause-040046 kubelet[3018]: I0328 00:48:26.220522    3018 scope.go:117] "RemoveContainer" containerID="2264e41f77cc4f28ea256a12349256853bb9b3a2421f649b7e75348e9561392b"
	Mar 28 00:48:34 pause-040046 kubelet[3018]: I0328 00:48:34.291585    3018 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0328 00:48:47.614930 1115076 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18485-1069254/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-040046 -n pause-040046
helpers_test.go:261: (dbg) Run:  kubectl --context pause-040046 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (100.49s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (284.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-986088 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-986088 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m43.790679053s)

                                                
                                                
-- stdout --
	* [old-k8s-version-986088] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-986088" primary control-plane node in "old-k8s-version-986088" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 00:52:42.097217 1124641 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:52:42.097371 1124641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:52:42.097384 1124641 out.go:304] Setting ErrFile to fd 2...
	I0328 00:52:42.097390 1124641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:52:42.097610 1124641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 00:52:42.098253 1124641 out.go:298] Setting JSON to false
	I0328 00:52:42.099533 1124641 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":30859,"bootTime":1711556303,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0328 00:52:42.099606 1124641 start.go:139] virtualization: kvm guest
	I0328 00:52:42.101940 1124641 out.go:177] * [old-k8s-version-986088] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0328 00:52:42.103395 1124641 notify.go:220] Checking for updates...
	I0328 00:52:42.103416 1124641 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 00:52:42.104648 1124641 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 00:52:42.105963 1124641 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 00:52:42.107429 1124641 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	I0328 00:52:42.108960 1124641 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0328 00:52:42.110447 1124641 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 00:52:42.112657 1124641 config.go:182] Loaded profile config "bridge-443419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:52:42.112833 1124641 config.go:182] Loaded profile config "flannel-443419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:52:42.112949 1124641 config.go:182] Loaded profile config "kubernetes-upgrade-615158": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0328 00:52:42.113110 1124641 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 00:52:42.157704 1124641 out.go:177] * Using the kvm2 driver based on user configuration
	I0328 00:52:42.159082 1124641 start.go:297] selected driver: kvm2
	I0328 00:52:42.159106 1124641 start.go:901] validating driver "kvm2" against <nil>
	I0328 00:52:42.159123 1124641 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 00:52:42.160207 1124641 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 00:52:42.160338 1124641 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18485-1069254/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0328 00:52:42.176975 1124641 install.go:137] /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0328 00:52:42.177037 1124641 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 00:52:42.177283 1124641 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 00:52:42.177375 1124641 cni.go:84] Creating CNI manager for ""
	I0328 00:52:42.177395 1124641 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 00:52:42.177411 1124641 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0328 00:52:42.177488 1124641 start.go:340] cluster config:
	{Name:old-k8s-version-986088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:52:42.177608 1124641 iso.go:125] acquiring lock: {Name:mk3da1fa7d63a581c817f327d86a827697458cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 00:52:42.180532 1124641 out.go:177] * Starting "old-k8s-version-986088" primary control-plane node in "old-k8s-version-986088" cluster
	I0328 00:52:42.181838 1124641 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0328 00:52:42.181893 1124641 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0328 00:52:42.181901 1124641 cache.go:56] Caching tarball of preloaded images
	I0328 00:52:42.182019 1124641 preload.go:173] Found /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0328 00:52:42.182036 1124641 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0328 00:52:42.182169 1124641 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/config.json ...
	I0328 00:52:42.182193 1124641 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/config.json: {Name:mk2a2b625e7c87b41aece60741847db61cc5c6d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:52:42.182370 1124641 start.go:360] acquireMachinesLock for old-k8s-version-986088: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 00:52:50.307573 1124641 start.go:364] duration metric: took 8.125152123s to acquireMachinesLock for "old-k8s-version-986088"
	I0328 00:52:50.307660 1124641 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-986088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 00:52:50.307798 1124641 start.go:125] createHost starting for "" (driver="kvm2")
	I0328 00:52:50.310866 1124641 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0328 00:52:50.311066 1124641 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 00:52:50.311116 1124641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:52:50.328919 1124641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38445
	I0328 00:52:50.329482 1124641 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:52:50.330053 1124641 main.go:141] libmachine: Using API Version  1
	I0328 00:52:50.330080 1124641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:52:50.330484 1124641 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:52:50.330741 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 00:52:50.330894 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 00:52:50.331093 1124641 start.go:159] libmachine.API.Create for "old-k8s-version-986088" (driver="kvm2")
	I0328 00:52:50.331146 1124641 client.go:168] LocalClient.Create starting
	I0328 00:52:50.331188 1124641 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem
	I0328 00:52:50.331231 1124641 main.go:141] libmachine: Decoding PEM data...
	I0328 00:52:50.331255 1124641 main.go:141] libmachine: Parsing certificate...
	I0328 00:52:50.331327 1124641 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem
	I0328 00:52:50.331353 1124641 main.go:141] libmachine: Decoding PEM data...
	I0328 00:52:50.331367 1124641 main.go:141] libmachine: Parsing certificate...
	I0328 00:52:50.331393 1124641 main.go:141] libmachine: Running pre-create checks...
	I0328 00:52:50.331406 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .PreCreateCheck
	I0328 00:52:50.331828 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetConfigRaw
	I0328 00:52:50.332301 1124641 main.go:141] libmachine: Creating machine...
	I0328 00:52:50.332320 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .Create
	I0328 00:52:50.332471 1124641 main.go:141] libmachine: (old-k8s-version-986088) Creating KVM machine...
	I0328 00:52:50.333718 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | found existing default KVM network
	I0328 00:52:50.335024 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 00:52:50.334874 1124865 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:7c:ba:23} reservation:<nil>}
	I0328 00:52:50.336007 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 00:52:50.335919 1124865 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000282960}
	I0328 00:52:50.336076 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | created network xml: 
	I0328 00:52:50.336105 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | <network>
	I0328 00:52:50.336136 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG |   <name>mk-old-k8s-version-986088</name>
	I0328 00:52:50.336143 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG |   <dns enable='no'/>
	I0328 00:52:50.336151 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG |   
	I0328 00:52:50.336168 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0328 00:52:50.336199 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG |     <dhcp>
	I0328 00:52:50.336236 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0328 00:52:50.336250 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG |     </dhcp>
	I0328 00:52:50.336259 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG |   </ip>
	I0328 00:52:50.336270 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG |   
	I0328 00:52:50.336282 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | </network>
	I0328 00:52:50.336295 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | 
	I0328 00:52:50.341837 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | trying to create private KVM network mk-old-k8s-version-986088 192.168.50.0/24...
	I0328 00:52:50.413785 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | private KVM network mk-old-k8s-version-986088 192.168.50.0/24 created
	I0328 00:52:50.413815 1124641 main.go:141] libmachine: (old-k8s-version-986088) Setting up store path in /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088 ...
	I0328 00:52:50.413831 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 00:52:50.413745 1124865 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18485-1069254/.minikube
	I0328 00:52:50.413850 1124641 main.go:141] libmachine: (old-k8s-version-986088) Building disk image from file:///home/jenkins/minikube-integration/18485-1069254/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso
	I0328 00:52:50.413917 1124641 main.go:141] libmachine: (old-k8s-version-986088) Downloading /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18485-1069254/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0328 00:52:50.671525 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 00:52:50.671386 1124865 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa...
	I0328 00:52:50.772019 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 00:52:50.771866 1124865 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/old-k8s-version-986088.rawdisk...
	I0328 00:52:50.772065 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | Writing magic tar header
	I0328 00:52:50.772117 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | Writing SSH key tar header
	I0328 00:52:50.772149 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 00:52:50.772022 1124865 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088 ...
	I0328 00:52:50.772179 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088
	I0328 00:52:50.772196 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines
	I0328 00:52:50.772214 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254/.minikube
	I0328 00:52:50.772234 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18485-1069254
	I0328 00:52:50.772246 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0328 00:52:50.772261 1124641 main.go:141] libmachine: (old-k8s-version-986088) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088 (perms=drwx------)
	I0328 00:52:50.772266 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | Checking permissions on dir: /home/jenkins
	I0328 00:52:50.772286 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | Checking permissions on dir: /home
	I0328 00:52:50.772294 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | Skipping /home - not owner
	I0328 00:52:50.772311 1124641 main.go:141] libmachine: (old-k8s-version-986088) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254/.minikube/machines (perms=drwxr-xr-x)
	I0328 00:52:50.772331 1124641 main.go:141] libmachine: (old-k8s-version-986088) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254/.minikube (perms=drwxr-xr-x)
	I0328 00:52:50.772349 1124641 main.go:141] libmachine: (old-k8s-version-986088) Setting executable bit set on /home/jenkins/minikube-integration/18485-1069254 (perms=drwxrwxr-x)
	I0328 00:52:50.772363 1124641 main.go:141] libmachine: (old-k8s-version-986088) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0328 00:52:50.772381 1124641 main.go:141] libmachine: (old-k8s-version-986088) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0328 00:52:50.772405 1124641 main.go:141] libmachine: (old-k8s-version-986088) Creating domain...
	I0328 00:52:50.773602 1124641 main.go:141] libmachine: (old-k8s-version-986088) define libvirt domain using xml: 
	I0328 00:52:50.773629 1124641 main.go:141] libmachine: (old-k8s-version-986088) <domain type='kvm'>
	I0328 00:52:50.773640 1124641 main.go:141] libmachine: (old-k8s-version-986088)   <name>old-k8s-version-986088</name>
	I0328 00:52:50.773647 1124641 main.go:141] libmachine: (old-k8s-version-986088)   <memory unit='MiB'>2200</memory>
	I0328 00:52:50.773656 1124641 main.go:141] libmachine: (old-k8s-version-986088)   <vcpu>2</vcpu>
	I0328 00:52:50.773666 1124641 main.go:141] libmachine: (old-k8s-version-986088)   <features>
	I0328 00:52:50.773675 1124641 main.go:141] libmachine: (old-k8s-version-986088)     <acpi/>
	I0328 00:52:50.773695 1124641 main.go:141] libmachine: (old-k8s-version-986088)     <apic/>
	I0328 00:52:50.773743 1124641 main.go:141] libmachine: (old-k8s-version-986088)     <pae/>
	I0328 00:52:50.773770 1124641 main.go:141] libmachine: (old-k8s-version-986088)     
	I0328 00:52:50.773782 1124641 main.go:141] libmachine: (old-k8s-version-986088)   </features>
	I0328 00:52:50.773885 1124641 main.go:141] libmachine: (old-k8s-version-986088)   <cpu mode='host-passthrough'>
	I0328 00:52:50.773914 1124641 main.go:141] libmachine: (old-k8s-version-986088)   
	I0328 00:52:50.773927 1124641 main.go:141] libmachine: (old-k8s-version-986088)   </cpu>
	I0328 00:52:50.773935 1124641 main.go:141] libmachine: (old-k8s-version-986088)   <os>
	I0328 00:52:50.773948 1124641 main.go:141] libmachine: (old-k8s-version-986088)     <type>hvm</type>
	I0328 00:52:50.773958 1124641 main.go:141] libmachine: (old-k8s-version-986088)     <boot dev='cdrom'/>
	I0328 00:52:50.773968 1124641 main.go:141] libmachine: (old-k8s-version-986088)     <boot dev='hd'/>
	I0328 00:52:50.773979 1124641 main.go:141] libmachine: (old-k8s-version-986088)     <bootmenu enable='no'/>
	I0328 00:52:50.773992 1124641 main.go:141] libmachine: (old-k8s-version-986088)   </os>
	I0328 00:52:50.774002 1124641 main.go:141] libmachine: (old-k8s-version-986088)   <devices>
	I0328 00:52:50.774010 1124641 main.go:141] libmachine: (old-k8s-version-986088)     <disk type='file' device='cdrom'>
	I0328 00:52:50.774025 1124641 main.go:141] libmachine: (old-k8s-version-986088)       <source file='/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/boot2docker.iso'/>
	I0328 00:52:50.774042 1124641 main.go:141] libmachine: (old-k8s-version-986088)       <target dev='hdc' bus='scsi'/>
	I0328 00:52:50.774055 1124641 main.go:141] libmachine: (old-k8s-version-986088)       <readonly/>
	I0328 00:52:50.774065 1124641 main.go:141] libmachine: (old-k8s-version-986088)     </disk>
	I0328 00:52:50.774084 1124641 main.go:141] libmachine: (old-k8s-version-986088)     <disk type='file' device='disk'>
	I0328 00:52:50.774106 1124641 main.go:141] libmachine: (old-k8s-version-986088)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0328 00:52:50.774140 1124641 main.go:141] libmachine: (old-k8s-version-986088)       <source file='/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/old-k8s-version-986088.rawdisk'/>
	I0328 00:52:50.774154 1124641 main.go:141] libmachine: (old-k8s-version-986088)       <target dev='hda' bus='virtio'/>
	I0328 00:52:50.774169 1124641 main.go:141] libmachine: (old-k8s-version-986088)     </disk>
	I0328 00:52:50.774181 1124641 main.go:141] libmachine: (old-k8s-version-986088)     <interface type='network'>
	I0328 00:52:50.774193 1124641 main.go:141] libmachine: (old-k8s-version-986088)       <source network='mk-old-k8s-version-986088'/>
	I0328 00:52:50.774205 1124641 main.go:141] libmachine: (old-k8s-version-986088)       <model type='virtio'/>
	I0328 00:52:50.774213 1124641 main.go:141] libmachine: (old-k8s-version-986088)     </interface>
	I0328 00:52:50.774225 1124641 main.go:141] libmachine: (old-k8s-version-986088)     <interface type='network'>
	I0328 00:52:50.774249 1124641 main.go:141] libmachine: (old-k8s-version-986088)       <source network='default'/>
	I0328 00:52:50.774264 1124641 main.go:141] libmachine: (old-k8s-version-986088)       <model type='virtio'/>
	I0328 00:52:50.774272 1124641 main.go:141] libmachine: (old-k8s-version-986088)     </interface>
	I0328 00:52:50.774284 1124641 main.go:141] libmachine: (old-k8s-version-986088)     <serial type='pty'>
	I0328 00:52:50.774292 1124641 main.go:141] libmachine: (old-k8s-version-986088)       <target port='0'/>
	I0328 00:52:50.774302 1124641 main.go:141] libmachine: (old-k8s-version-986088)     </serial>
	I0328 00:52:50.774314 1124641 main.go:141] libmachine: (old-k8s-version-986088)     <console type='pty'>
	I0328 00:52:50.774358 1124641 main.go:141] libmachine: (old-k8s-version-986088)       <target type='serial' port='0'/>
	I0328 00:52:50.774386 1124641 main.go:141] libmachine: (old-k8s-version-986088)     </console>
	I0328 00:52:50.774398 1124641 main.go:141] libmachine: (old-k8s-version-986088)     <rng model='virtio'>
	I0328 00:52:50.774409 1124641 main.go:141] libmachine: (old-k8s-version-986088)       <backend model='random'>/dev/random</backend>
	I0328 00:52:50.774419 1124641 main.go:141] libmachine: (old-k8s-version-986088)     </rng>
	I0328 00:52:50.774431 1124641 main.go:141] libmachine: (old-k8s-version-986088)     
	I0328 00:52:50.774443 1124641 main.go:141] libmachine: (old-k8s-version-986088)     
	I0328 00:52:50.774454 1124641 main.go:141] libmachine: (old-k8s-version-986088)   </devices>
	I0328 00:52:50.774465 1124641 main.go:141] libmachine: (old-k8s-version-986088) </domain>
	I0328 00:52:50.774476 1124641 main.go:141] libmachine: (old-k8s-version-986088) 
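
The block above is the complete libvirt domain XML the kvm2 driver hands to libvirt before booting the test VM. As a rough illustration of that step outside minikube, the same XML could be registered and started by hand with the virsh CLI; the Go helper below is only a sketch of that idea (the helper name and flow are assumptions, not the driver's actual code), reusing the domain name from the log.

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

// defineAndStart writes a libvirt domain definition to a temporary file,
// registers it with `virsh define`, and boots it with `virsh start`.
// This mirrors the "define libvirt domain using xml" / "Creating domain..."
// steps in the log, but via the CLI instead of the driver's libvirt bindings.
func defineAndStart(name, domainXML string) error {
	f, err := os.CreateTemp("", name+"-*.xml")
	if err != nil {
		return err
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(domainXML); err != nil {
		return err
	}
	if err := f.Close(); err != nil {
		return err
	}
	for _, args := range [][]string{{"define", f.Name()}, {"start", name}} {
		cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("virsh %v failed: %v\n%s", args, err, out)
		}
	}
	return nil
}

func main() {
	// The XML shown in the log above, saved to a local file for illustration.
	xml, err := os.ReadFile("old-k8s-version-986088.xml")
	if err != nil {
		log.Fatal(err)
	}
	if err := defineAndStart("old-k8s-version-986088", string(xml)); err != nil {
		log.Fatal(err)
	}
}
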
	I0328 00:52:50.778726 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:2f:c5:ad in network default
	I0328 00:52:50.779420 1124641 main.go:141] libmachine: (old-k8s-version-986088) Ensuring networks are active...
	I0328 00:52:50.779449 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:52:50.780294 1124641 main.go:141] libmachine: (old-k8s-version-986088) Ensuring network default is active
	I0328 00:52:50.780633 1124641 main.go:141] libmachine: (old-k8s-version-986088) Ensuring network mk-old-k8s-version-986088 is active
	I0328 00:52:50.781222 1124641 main.go:141] libmachine: (old-k8s-version-986088) Getting domain xml...
	I0328 00:52:50.781949 1124641 main.go:141] libmachine: (old-k8s-version-986088) Creating domain...
	I0328 00:52:52.059242 1124641 main.go:141] libmachine: (old-k8s-version-986088) Waiting to get IP...
	I0328 00:52:52.060289 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:52:52.060816 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 00:52:52.060912 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 00:52:52.060811 1124865 retry.go:31] will retry after 266.370059ms: waiting for machine to come up
	I0328 00:52:52.329334 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:52:52.330053 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 00:52:52.330082 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 00:52:52.329995 1124865 retry.go:31] will retry after 350.193758ms: waiting for machine to come up
	I0328 00:52:52.681577 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:52:52.682253 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 00:52:52.682286 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 00:52:52.682175 1124865 retry.go:31] will retry after 480.060306ms: waiting for machine to come up
	I0328 00:52:53.163584 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:52:53.164246 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 00:52:53.164283 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 00:52:53.164185 1124865 retry.go:31] will retry after 374.158905ms: waiting for machine to come up
	I0328 00:52:53.539746 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:52:53.540392 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 00:52:53.540418 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 00:52:53.540332 1124865 retry.go:31] will retry after 499.267861ms: waiting for machine to come up
	I0328 00:52:54.042965 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:52:54.043617 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 00:52:54.043650 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 00:52:54.043517 1124865 retry.go:31] will retry after 927.777381ms: waiting for machine to come up
	I0328 00:52:54.973519 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:52:54.974045 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 00:52:54.974076 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 00:52:54.973987 1124865 retry.go:31] will retry after 1.164717466s: waiting for machine to come up
	I0328 00:52:56.140756 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:52:56.141300 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 00:52:56.141345 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 00:52:56.141228 1124865 retry.go:31] will retry after 1.460098825s: waiting for machine to come up
	I0328 00:52:57.603286 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:52:57.603839 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 00:52:57.603864 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 00:52:57.603802 1124865 retry.go:31] will retry after 1.557195286s: waiting for machine to come up
	I0328 00:52:59.163427 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:52:59.163919 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 00:52:59.163946 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 00:52:59.163876 1124865 retry.go:31] will retry after 2.233336392s: waiting for machine to come up
	I0328 00:53:01.398539 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:01.399111 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 00:53:01.399145 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 00:53:01.399036 1124865 retry.go:31] will retry after 2.570219642s: waiting for machine to come up
	I0328 00:53:03.971722 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:03.972214 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 00:53:03.972245 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 00:53:03.972156 1124865 retry.go:31] will retry after 3.012723138s: waiting for machine to come up
	I0328 00:53:06.987500 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:06.988027 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 00:53:06.988057 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 00:53:06.987959 1124865 retry.go:31] will retry after 3.380584648s: waiting for machine to come up
	I0328 00:53:10.370781 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:10.371192 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 00:53:10.371222 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 00:53:10.371162 1124865 retry.go:31] will retry after 4.445548056s: waiting for machine to come up
	I0328 00:53:14.819490 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:14.820045 1124641 main.go:141] libmachine: (old-k8s-version-986088) Found IP for machine: 192.168.50.174
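
The run of "unable to find current IP address ... will retry after ..." lines above is a poll loop over the domain's DHCP lease, sleeping a growing, jittered interval between attempts until the lease appears. A minimal Go sketch of that pattern follows; the getIP callback, the backoff constants, and the stubbed lease lookup in main are illustrative, not minikube's retry package.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls getIP until it returns a non-empty address or the deadline
// passes, sleeping a growing, jittered delay between attempts - the shape of
// the "will retry after ..." messages in the log above.
func waitForIP(getIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		ip, err := getIP()
		if err == nil && ip != "" {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("attempt %d: no IP yet, will retry after %s\n", attempt, wait)
		time.Sleep(wait)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	start := time.Now()
	ip, err := waitForIP(func() (string, error) {
		if time.Since(start) > 3*time.Second {
			return "192.168.50.174", nil // stand-in for reading the DHCP lease
		}
		return "", errors.New("no lease yet")
	}, 30*time.Second)
	fmt.Println(ip, err)
}
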
	I0328 00:53:14.820068 1124641 main.go:141] libmachine: (old-k8s-version-986088) Reserving static IP address...
	I0328 00:53:14.820097 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has current primary IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:14.820406 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-986088", mac: "52:54:00:f6:94:40", ip: "192.168.50.174"} in network mk-old-k8s-version-986088
	I0328 00:53:14.960546 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | Getting to WaitForSSH function...
	I0328 00:53:14.960578 1124641 main.go:141] libmachine: (old-k8s-version-986088) Reserved static IP address: 192.168.50.174
	I0328 00:53:14.960591 1124641 main.go:141] libmachine: (old-k8s-version-986088) Waiting for SSH to be available...
	I0328 00:53:14.964222 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:14.964681 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088
	I0328 00:53:14.964714 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find defined IP address of network mk-old-k8s-version-986088 interface with MAC address 52:54:00:f6:94:40
	I0328 00:53:14.964814 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | Using SSH client type: external
	I0328 00:53:14.964850 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa (-rw-------)
	I0328 00:53:14.964889 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 00:53:14.964904 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | About to run SSH command:
	I0328 00:53:14.964919 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | exit 0
	I0328 00:53:14.968892 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | SSH cmd err, output: exit status 255: 
	I0328 00:53:14.968924 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0328 00:53:14.968932 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | command : exit 0
	I0328 00:53:14.968939 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | err     : exit status 255
	I0328 00:53:14.968950 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | output  : 
	I0328 00:53:17.971367 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | Getting to WaitForSSH function...
	I0328 00:53:17.973946 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:17.974492 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 01:53:07 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 00:53:17.974524 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:17.974661 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | Using SSH client type: external
	I0328 00:53:17.974697 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa (-rw-------)
	I0328 00:53:17.974732 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 00:53:17.974750 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | About to run SSH command:
	I0328 00:53:17.974764 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | exit 0
	I0328 00:53:18.102765 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | SSH cmd err, output: <nil>: 
	I0328 00:53:18.103047 1124641 main.go:141] libmachine: (old-k8s-version-986088) KVM machine creation complete!
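
The WaitForSSH step above amounts to running `exit 0` over SSH until it succeeds: the first attempt fails with exit status 255 while the guest is still booting, and the next one succeeds once sshd is up. Below is a small Go sketch of that probe, reusing a subset of the client options visible in the log; the loop, its 3-second pause (roughly the gap between the two logged attempts), and the overall timeout are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady keeps running `exit 0` on the guest over SSH until the command
// succeeds or the deadline passes - the same probe WaitForSSH performs above.
func sshReady(ip, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-F", "/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@"+ip,
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("ssh to %s not ready after %s", ip, timeout)
}

func main() {
	err := sshReady("192.168.50.174",
		"/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa",
		5*time.Minute)
	fmt.Println(err)
}
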
	I0328 00:53:18.103410 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetConfigRaw
	I0328 00:53:18.104013 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 00:53:18.104233 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 00:53:18.104442 1124641 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0328 00:53:18.104462 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetState
	I0328 00:53:18.106100 1124641 main.go:141] libmachine: Detecting operating system of created instance...
	I0328 00:53:18.106121 1124641 main.go:141] libmachine: Waiting for SSH to be available...
	I0328 00:53:18.106130 1124641 main.go:141] libmachine: Getting to WaitForSSH function...
	I0328 00:53:18.106139 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 00:53:18.109195 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:18.109639 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 01:53:07 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 00:53:18.109668 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:18.109844 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 00:53:18.110046 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 00:53:18.110206 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 00:53:18.110345 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 00:53:18.110569 1124641 main.go:141] libmachine: Using SSH client type: native
	I0328 00:53:18.110815 1124641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 00:53:18.110831 1124641 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0328 00:53:18.217817 1124641 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 00:53:18.217842 1124641 main.go:141] libmachine: Detecting the provisioner...
	I0328 00:53:18.217851 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 00:53:18.220643 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:18.220930 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 01:53:07 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 00:53:18.220967 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:18.221201 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 00:53:18.221420 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 00:53:18.221602 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 00:53:18.221752 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 00:53:18.221952 1124641 main.go:141] libmachine: Using SSH client type: native
	I0328 00:53:18.222157 1124641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 00:53:18.222171 1124641 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0328 00:53:18.327498 1124641 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0328 00:53:18.327605 1124641 main.go:141] libmachine: found compatible host: buildroot
	I0328 00:53:18.327615 1124641 main.go:141] libmachine: Provisioning with buildroot...
	I0328 00:53:18.327633 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 00:53:18.327935 1124641 buildroot.go:166] provisioning hostname "old-k8s-version-986088"
	I0328 00:53:18.327972 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 00:53:18.328180 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 00:53:18.331461 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:18.331850 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 01:53:07 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 00:53:18.331880 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:18.332089 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 00:53:18.332303 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 00:53:18.332478 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 00:53:18.332630 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 00:53:18.332818 1124641 main.go:141] libmachine: Using SSH client type: native
	I0328 00:53:18.333017 1124641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 00:53:18.333031 1124641 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-986088 && echo "old-k8s-version-986088" | sudo tee /etc/hostname
	I0328 00:53:18.454355 1124641 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-986088
	
	I0328 00:53:18.454391 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 00:53:18.457180 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:18.457549 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 01:53:07 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 00:53:18.457584 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:18.457758 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 00:53:18.457947 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 00:53:18.458113 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 00:53:18.458269 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 00:53:18.458485 1124641 main.go:141] libmachine: Using SSH client type: native
	I0328 00:53:18.458652 1124641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 00:53:18.458669 1124641 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-986088' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-986088/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-986088' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 00:53:18.578051 1124641 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 00:53:18.578090 1124641 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 00:53:18.578124 1124641 buildroot.go:174] setting up certificates
	I0328 00:53:18.578138 1124641 provision.go:84] configureAuth start
	I0328 00:53:18.578149 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 00:53:18.578509 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 00:53:18.581751 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:18.582195 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 01:53:07 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 00:53:18.582225 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:18.582432 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 00:53:18.584922 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:18.585266 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 01:53:07 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 00:53:18.585300 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:18.585440 1124641 provision.go:143] copyHostCerts
	I0328 00:53:18.585524 1124641 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 00:53:18.585539 1124641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 00:53:18.585613 1124641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 00:53:18.585731 1124641 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 00:53:18.585743 1124641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 00:53:18.585775 1124641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 00:53:18.585912 1124641 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 00:53:18.585926 1124641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 00:53:18.585958 1124641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 00:53:18.586026 1124641 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-986088 san=[127.0.0.1 192.168.50.174 localhost minikube old-k8s-version-986088]
	I0328 00:53:18.672045 1124641 provision.go:177] copyRemoteCerts
	I0328 00:53:18.672112 1124641 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 00:53:18.672139 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 00:53:18.675064 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:18.675408 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 01:53:07 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 00:53:18.675433 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:18.675618 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 00:53:18.675841 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 00:53:18.676034 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 00:53:18.676200 1124641 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 00:53:18.757147 1124641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 00:53:18.783750 1124641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0328 00:53:18.810869 1124641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0328 00:53:18.837121 1124641 provision.go:87] duration metric: took 258.970139ms to configureAuth
	I0328 00:53:18.837151 1124641 buildroot.go:189] setting minikube options for container-runtime
	I0328 00:53:18.837363 1124641 config.go:182] Loaded profile config "old-k8s-version-986088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0328 00:53:18.837457 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 00:53:18.840371 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:18.840751 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 01:53:07 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 00:53:18.840783 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:18.841033 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 00:53:18.841251 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 00:53:18.841389 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 00:53:18.841506 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 00:53:18.841647 1124641 main.go:141] libmachine: Using SSH client type: native
	I0328 00:53:18.841875 1124641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 00:53:18.841897 1124641 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 00:53:19.119256 1124641 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 00:53:19.119291 1124641 main.go:141] libmachine: Checking connection to Docker...
	I0328 00:53:19.119302 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetURL
	I0328 00:53:19.120644 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | Using libvirt version 6000000
	I0328 00:53:19.123208 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:19.123562 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 01:53:07 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 00:53:19.123592 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:19.123790 1124641 main.go:141] libmachine: Docker is up and running!
	I0328 00:53:19.123807 1124641 main.go:141] libmachine: Reticulating splines...
	I0328 00:53:19.123816 1124641 client.go:171] duration metric: took 28.792657152s to LocalClient.Create
	I0328 00:53:19.123850 1124641 start.go:167] duration metric: took 28.792757608s to libmachine.API.Create "old-k8s-version-986088"
	I0328 00:53:19.123864 1124641 start.go:293] postStartSetup for "old-k8s-version-986088" (driver="kvm2")
	I0328 00:53:19.123880 1124641 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 00:53:19.123911 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 00:53:19.124226 1124641 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 00:53:19.124262 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 00:53:19.126673 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:19.127034 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 01:53:07 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 00:53:19.127060 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:19.127259 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 00:53:19.127444 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 00:53:19.127601 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 00:53:19.127763 1124641 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 00:53:19.211077 1124641 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 00:53:19.215630 1124641 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 00:53:19.215654 1124641 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 00:53:19.215721 1124641 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 00:53:19.215810 1124641 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 00:53:19.215903 1124641 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 00:53:19.226790 1124641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 00:53:19.253165 1124641 start.go:296] duration metric: took 129.284021ms for postStartSetup
	I0328 00:53:19.253232 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetConfigRaw
	I0328 00:53:19.253866 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 00:53:19.256690 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:19.257020 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 01:53:07 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 00:53:19.257052 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:19.257270 1124641 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/config.json ...
	I0328 00:53:19.257510 1124641 start.go:128] duration metric: took 28.949699418s to createHost
	I0328 00:53:19.257550 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 00:53:19.259943 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:19.260319 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 01:53:07 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 00:53:19.260345 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:19.260531 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 00:53:19.260778 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 00:53:19.260917 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 00:53:19.261046 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 00:53:19.261172 1124641 main.go:141] libmachine: Using SSH client type: native
	I0328 00:53:19.261358 1124641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 00:53:19.261371 1124641 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0328 00:53:19.367406 1124641 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587199.354010792
	
	I0328 00:53:19.367435 1124641 fix.go:216] guest clock: 1711587199.354010792
	I0328 00:53:19.367444 1124641 fix.go:229] Guest: 2024-03-28 00:53:19.354010792 +0000 UTC Remote: 2024-03-28 00:53:19.257524913 +0000 UTC m=+37.211927852 (delta=96.485879ms)
	I0328 00:53:19.367471 1124641 fix.go:200] guest clock delta is within tolerance: 96.485879ms
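
The clock check above runs `date +%s.%N` in the guest and compares the result with the host's wall clock; the ~96ms delta is accepted as within tolerance. A small Go sketch of that comparison follows; the ssh invocation and the tolerance value are illustrative, and parsing via float rounds away some nanosecond precision.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta asks the guest for its clock via `date +%s.%N` and returns
// guest-minus-host, the quantity reported as "delta" in the log above.
func guestClockDelta(sshTarget, keyPath string) (time.Duration, error) {
	out, err := exec.Command("ssh", "-i", keyPath, sshTarget, "date +%s.%N").Output()
	if err != nil {
		return 0, err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(time.Now()), nil
}

func main() {
	delta, err := guestClockDelta("docker@192.168.50.174",
		"/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	const tolerance = time.Second // assumed; the log only shows the delta passing
	fmt.Printf("guest clock delta: %s (within tolerance: %v)\n", delta, delta < tolerance && delta > -tolerance)
}
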
	I0328 00:53:19.367483 1124641 start.go:83] releasing machines lock for "old-k8s-version-986088", held for 29.059861742s
	I0328 00:53:19.367518 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 00:53:19.367827 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 00:53:19.370638 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:19.370975 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 01:53:07 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 00:53:19.371011 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:19.371239 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 00:53:19.371832 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 00:53:19.372032 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 00:53:19.372130 1124641 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 00:53:19.372187 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 00:53:19.372299 1124641 ssh_runner.go:195] Run: cat /version.json
	I0328 00:53:19.372328 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 00:53:19.374894 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:19.375245 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 01:53:07 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 00:53:19.375282 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:19.375303 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:19.375417 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 00:53:19.375603 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 00:53:19.375758 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 00:53:19.375760 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 01:53:07 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 00:53:19.375804 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:19.375941 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 00:53:19.376072 1124641 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 00:53:19.376137 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 00:53:19.376290 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 00:53:19.376466 1124641 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 00:53:19.452085 1124641 ssh_runner.go:195] Run: systemctl --version
	I0328 00:53:19.489905 1124641 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 00:53:19.659124 1124641 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 00:53:19.666301 1124641 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 00:53:19.666367 1124641 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 00:53:19.683591 1124641 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 00:53:19.683625 1124641 start.go:494] detecting cgroup driver to use...
	I0328 00:53:19.683712 1124641 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 00:53:19.702361 1124641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 00:53:19.717428 1124641 docker.go:217] disabling cri-docker service (if available) ...
	I0328 00:53:19.717515 1124641 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 00:53:19.734328 1124641 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 00:53:19.752378 1124641 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 00:53:19.886973 1124641 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 00:53:20.052331 1124641 docker.go:233] disabling docker service ...
	I0328 00:53:20.052405 1124641 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 00:53:20.070362 1124641 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 00:53:20.085054 1124641 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 00:53:20.231327 1124641 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 00:53:20.353342 1124641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 00:53:20.368767 1124641 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 00:53:20.391111 1124641 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0328 00:53:20.391185 1124641 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:53:20.403437 1124641 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 00:53:20.403526 1124641 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:53:20.416877 1124641 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:53:20.428550 1124641 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 00:53:20.439998 1124641 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 00:53:20.451361 1124641 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 00:53:20.461989 1124641 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 00:53:20.462061 1124641 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 00:53:20.476497 1124641 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 00:53:20.486201 1124641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:53:20.607359 1124641 ssh_runner.go:195] Run: sudo systemctl restart crio
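
Taken together, the crictl.yaml write and the sed edits above leave the runtime configured roughly as follows before crio is restarted. This is reconstructed from the commands in the log rather than copied from the guest, and the section headers of the crio drop-in are omitted.

# /etc/crictl.yaml
runtime-endpoint: unix:///var/run/crio/crio.sock

# /etc/crio/crio.conf.d/02-crio.conf (keys touched by the sed commands)
pause_image = "registry.k8s.io/pause:3.2"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
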
	I0328 00:53:20.757311 1124641 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 00:53:20.757402 1124641 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 00:53:20.763002 1124641 start.go:562] Will wait 60s for crictl version
	I0328 00:53:20.763089 1124641 ssh_runner.go:195] Run: which crictl
	I0328 00:53:20.767296 1124641 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 00:53:20.805442 1124641 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 00:53:20.805530 1124641 ssh_runner.go:195] Run: crio --version
	I0328 00:53:20.837325 1124641 ssh_runner.go:195] Run: crio --version
	I0328 00:53:20.878999 1124641 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0328 00:53:20.880558 1124641 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 00:53:20.884024 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:20.884515 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 01:53:07 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 00:53:20.884547 1124641 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 00:53:20.884776 1124641 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0328 00:53:20.890410 1124641 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 00:53:20.905292 1124641 kubeadm.go:877] updating cluster {Name:old-k8s-version-986088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 00:53:20.905411 1124641 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0328 00:53:20.905461 1124641 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 00:53:20.945202 1124641 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0328 00:53:20.945292 1124641 ssh_runner.go:195] Run: which lz4
	I0328 00:53:20.950541 1124641 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0328 00:53:20.955787 1124641 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 00:53:20.955825 1124641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0328 00:53:23.176266 1124641 crio.go:462] duration metric: took 2.22577854s to copy over tarball
	I0328 00:53:23.176359 1124641 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 00:53:26.189942 1124641 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.013548607s)
	I0328 00:53:26.189979 1124641 crio.go:469] duration metric: took 3.01367539s to extract the tarball
	I0328 00:53:26.189991 1124641 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0328 00:53:26.240599 1124641 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 00:53:26.293594 1124641 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0328 00:53:26.293637 1124641 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0328 00:53:26.293692 1124641 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 00:53:26.293716 1124641 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 00:53:26.293807 1124641 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 00:53:26.293825 1124641 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 00:53:26.293971 1124641 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0328 00:53:26.293993 1124641 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 00:53:26.294009 1124641 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0328 00:53:26.294077 1124641 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0328 00:53:26.295901 1124641 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 00:53:26.295935 1124641 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 00:53:26.295932 1124641 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0328 00:53:26.295938 1124641 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 00:53:26.295956 1124641 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0328 00:53:26.295979 1124641 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 00:53:26.295900 1124641 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0328 00:53:26.295906 1124641 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 00:53:26.482866 1124641 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0328 00:53:26.514936 1124641 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 00:53:26.518806 1124641 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0328 00:53:26.519786 1124641 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0328 00:53:26.536745 1124641 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0328 00:53:26.537197 1124641 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0328 00:53:26.537244 1124641 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 00:53:26.537287 1124641 ssh_runner.go:195] Run: which crictl
	I0328 00:53:26.551945 1124641 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0328 00:53:26.555802 1124641 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0328 00:53:26.637065 1124641 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0328 00:53:26.637126 1124641 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 00:53:26.637187 1124641 ssh_runner.go:195] Run: which crictl
	I0328 00:53:26.660649 1124641 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0328 00:53:26.660712 1124641 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 00:53:26.660770 1124641 ssh_runner.go:195] Run: which crictl
	I0328 00:53:26.677564 1124641 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0328 00:53:26.677609 1124641 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0328 00:53:26.677662 1124641 ssh_runner.go:195] Run: which crictl
	I0328 00:53:26.702153 1124641 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0328 00:53:26.702202 1124641 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 00:53:26.702219 1124641 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0328 00:53:26.702268 1124641 ssh_runner.go:195] Run: which crictl
	I0328 00:53:26.702344 1124641 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0328 00:53:26.702393 1124641 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0328 00:53:26.702430 1124641 ssh_runner.go:195] Run: which crictl
	I0328 00:53:26.707238 1124641 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0328 00:53:26.707292 1124641 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0328 00:53:26.707322 1124641 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0328 00:53:26.707331 1124641 ssh_runner.go:195] Run: which crictl
	I0328 00:53:26.707340 1124641 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 00:53:26.707349 1124641 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0328 00:53:26.717118 1124641 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0328 00:53:26.717193 1124641 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0328 00:53:26.790132 1124641 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0328 00:53:26.860423 1124641 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0328 00:53:26.860517 1124641 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0328 00:53:26.860550 1124641 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0328 00:53:26.860641 1124641 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0328 00:53:26.860710 1124641 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0328 00:53:26.864195 1124641 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0328 00:53:26.898469 1124641 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0328 00:53:27.176096 1124641 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 00:53:27.365610 1124641 cache_images.go:92] duration metric: took 1.071949881s to LoadCachedImages
	W0328 00:53:27.365720 1124641 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0328 00:53:27.365738 1124641 kubeadm.go:928] updating node { 192.168.50.174 8443 v1.20.0 crio true true} ...
	I0328 00:53:27.365933 1124641 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-986088 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 00:53:27.366025 1124641 ssh_runner.go:195] Run: crio config
	I0328 00:53:27.418740 1124641 cni.go:84] Creating CNI manager for ""
	I0328 00:53:27.418764 1124641 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 00:53:27.418773 1124641 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 00:53:27.418793 1124641 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.174 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-986088 NodeName:old-k8s-version-986088 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0328 00:53:27.418951 1124641 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-986088"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 00:53:27.419018 1124641 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0328 00:53:27.430170 1124641 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 00:53:27.430279 1124641 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 00:53:27.441641 1124641 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0328 00:53:27.465265 1124641 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 00:53:27.484812 1124641 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0328 00:53:27.506907 1124641 ssh_runner.go:195] Run: grep 192.168.50.174	control-plane.minikube.internal$ /etc/hosts
	I0328 00:53:27.512398 1124641 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 00:53:27.526372 1124641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:53:27.654673 1124641 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 00:53:27.673162 1124641 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088 for IP: 192.168.50.174
	I0328 00:53:27.673191 1124641 certs.go:194] generating shared ca certs ...
	I0328 00:53:27.673214 1124641 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:53:27.673403 1124641 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 00:53:27.673457 1124641 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 00:53:27.673469 1124641 certs.go:256] generating profile certs ...
	I0328 00:53:27.673549 1124641 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/client.key
	I0328 00:53:27.673591 1124641 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/client.crt with IP's: []
	I0328 00:53:27.780901 1124641 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/client.crt ...
	I0328 00:53:27.780943 1124641 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/client.crt: {Name:mk23aef8836c87e1b89d094f95ca1a5396294bba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:53:27.781174 1124641 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/client.key ...
	I0328 00:53:27.781195 1124641 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/client.key: {Name:mkdafb5c59414df473b61728e39314ba035e5a28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:53:27.781326 1124641 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.key.b88fbc7e
	I0328 00:53:27.781346 1124641 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.crt.b88fbc7e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.174]
	I0328 00:53:27.985113 1124641 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.crt.b88fbc7e ...
	I0328 00:53:27.985147 1124641 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.crt.b88fbc7e: {Name:mk27005d9ef1375492fa98ecc68e2cc8ffd572d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:53:27.985336 1124641 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.key.b88fbc7e ...
	I0328 00:53:27.985354 1124641 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.key.b88fbc7e: {Name:mk09f92ef171af55c74a510d613a909e85a3e609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:53:27.985469 1124641 certs.go:381] copying /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.crt.b88fbc7e -> /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.crt
	I0328 00:53:27.985579 1124641 certs.go:385] copying /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.key.b88fbc7e -> /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.key
	I0328 00:53:27.985650 1124641 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.key
	I0328 00:53:27.985670 1124641 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.crt with IP's: []
	I0328 00:53:28.107873 1124641 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.crt ...
	I0328 00:53:28.107912 1124641 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.crt: {Name:mk0142c92145e3f5a0474994f829f182c527278c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:53:28.108089 1124641 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.key ...
	I0328 00:53:28.108103 1124641 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.key: {Name:mkb316b567044b46087cf46fc69115e61af8813e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:53:28.108283 1124641 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 00:53:28.108321 1124641 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 00:53:28.108332 1124641 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 00:53:28.108353 1124641 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 00:53:28.108375 1124641 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 00:53:28.108397 1124641 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 00:53:28.108432 1124641 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 00:53:28.109165 1124641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 00:53:28.137925 1124641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 00:53:28.167914 1124641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 00:53:28.196566 1124641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 00:53:28.225295 1124641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0328 00:53:28.253854 1124641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 00:53:28.279219 1124641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 00:53:28.307921 1124641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 00:53:28.352262 1124641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 00:53:28.378773 1124641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 00:53:28.405258 1124641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 00:53:28.431779 1124641 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 00:53:28.451798 1124641 ssh_runner.go:195] Run: openssl version
	I0328 00:53:28.458572 1124641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 00:53:28.471035 1124641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 00:53:28.476199 1124641 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 00:53:28.476288 1124641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 00:53:28.483067 1124641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 00:53:28.495812 1124641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 00:53:28.511696 1124641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 00:53:28.522353 1124641 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 00:53:28.522434 1124641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 00:53:28.531737 1124641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 00:53:28.549095 1124641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 00:53:28.576030 1124641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:53:28.594892 1124641 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:53:28.594973 1124641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:53:28.602737 1124641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 00:53:28.616182 1124641 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 00:53:28.622016 1124641 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0328 00:53:28.622085 1124641 kubeadm.go:391] StartCluster: {Name:old-k8s-version-986088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:53:28.622189 1124641 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 00:53:28.622269 1124641 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 00:53:28.670277 1124641 cri.go:89] found id: ""
	I0328 00:53:28.670363 1124641 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0328 00:53:28.682489 1124641 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 00:53:28.694029 1124641 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 00:53:28.705080 1124641 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 00:53:28.705109 1124641 kubeadm.go:156] found existing configuration files:
	
	I0328 00:53:28.705173 1124641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 00:53:28.715562 1124641 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 00:53:28.715644 1124641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 00:53:28.726381 1124641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 00:53:28.737489 1124641 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 00:53:28.737557 1124641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 00:53:28.748639 1124641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 00:53:28.759045 1124641 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 00:53:28.759121 1124641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 00:53:28.771358 1124641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 00:53:28.782604 1124641 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 00:53:28.782687 1124641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 00:53:28.793067 1124641 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 00:53:29.086610 1124641 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 00:55:26.950870 1124641 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0328 00:55:26.951082 1124641 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0328 00:55:26.952464 1124641 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0328 00:55:26.952560 1124641 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 00:55:26.952706 1124641 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 00:55:26.952903 1124641 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 00:55:26.953104 1124641 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 00:55:26.953241 1124641 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 00:55:26.957650 1124641 out.go:204]   - Generating certificates and keys ...
	I0328 00:55:26.957765 1124641 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 00:55:26.957868 1124641 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 00:55:26.957967 1124641 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0328 00:55:26.958066 1124641 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0328 00:55:26.958155 1124641 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0328 00:55:26.958268 1124641 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0328 00:55:26.958344 1124641 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0328 00:55:26.958481 1124641 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-986088] and IPs [192.168.50.174 127.0.0.1 ::1]
	I0328 00:55:26.958532 1124641 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0328 00:55:26.958656 1124641 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-986088] and IPs [192.168.50.174 127.0.0.1 ::1]
	I0328 00:55:26.958743 1124641 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0328 00:55:26.958828 1124641 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0328 00:55:26.958895 1124641 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0328 00:55:26.958972 1124641 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 00:55:26.959044 1124641 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 00:55:26.959124 1124641 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 00:55:26.959228 1124641 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 00:55:26.959308 1124641 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 00:55:26.959464 1124641 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 00:55:26.959583 1124641 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 00:55:26.959637 1124641 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 00:55:26.959697 1124641 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 00:55:26.961427 1124641 out.go:204]   - Booting up control plane ...
	I0328 00:55:26.961543 1124641 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 00:55:26.961642 1124641 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 00:55:26.961723 1124641 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 00:55:26.961836 1124641 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 00:55:26.962028 1124641 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 00:55:26.962116 1124641 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0328 00:55:26.962252 1124641 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 00:55:26.962525 1124641 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 00:55:26.962642 1124641 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 00:55:26.962861 1124641 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 00:55:26.962960 1124641 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 00:55:26.963232 1124641 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 00:55:26.963305 1124641 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 00:55:26.963461 1124641 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 00:55:26.963529 1124641 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 00:55:26.963686 1124641 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 00:55:26.963698 1124641 kubeadm.go:309] 
	I0328 00:55:26.963767 1124641 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0328 00:55:26.963824 1124641 kubeadm.go:309] 		timed out waiting for the condition
	I0328 00:55:26.963837 1124641 kubeadm.go:309] 
	I0328 00:55:26.963883 1124641 kubeadm.go:309] 	This error is likely caused by:
	I0328 00:55:26.963925 1124641 kubeadm.go:309] 		- The kubelet is not running
	I0328 00:55:26.964048 1124641 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0328 00:55:26.964061 1124641 kubeadm.go:309] 
	I0328 00:55:26.964188 1124641 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0328 00:55:26.964239 1124641 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0328 00:55:26.964281 1124641 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0328 00:55:26.964290 1124641 kubeadm.go:309] 
	I0328 00:55:26.964448 1124641 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0328 00:55:26.964577 1124641 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0328 00:55:26.964587 1124641 kubeadm.go:309] 
	I0328 00:55:26.964721 1124641 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0328 00:55:26.964847 1124641 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0328 00:55:26.964959 1124641 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0328 00:55:26.965068 1124641 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0328 00:55:26.965104 1124641 kubeadm.go:309] 
	W0328 00:55:26.965242 1124641 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-986088] and IPs [192.168.50.174 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-986088] and IPs [192.168.50.174 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-986088] and IPs [192.168.50.174 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-986088] and IPs [192.168.50.174 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0328 00:55:26.965297 1124641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 00:55:28.545593 1124641 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.580268305s)
	I0328 00:55:28.545677 1124641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:55:28.563400 1124641 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 00:55:28.576173 1124641 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 00:55:28.576203 1124641 kubeadm.go:156] found existing configuration files:
	
	I0328 00:55:28.576271 1124641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 00:55:28.588411 1124641 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 00:55:28.588490 1124641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 00:55:28.601004 1124641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 00:55:28.613071 1124641 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 00:55:28.613143 1124641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 00:55:28.624733 1124641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 00:55:28.637281 1124641 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 00:55:28.637348 1124641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 00:55:28.652505 1124641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 00:55:28.668012 1124641 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 00:55:28.668082 1124641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 00:55:28.681019 1124641 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 00:55:28.921104 1124641 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 00:57:25.174807 1124641 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0328 00:57:25.174935 1124641 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0328 00:57:25.176648 1124641 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0328 00:57:25.176761 1124641 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 00:57:25.176899 1124641 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 00:57:25.177033 1124641 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 00:57:25.177159 1124641 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 00:57:25.177237 1124641 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 00:57:25.178691 1124641 out.go:204]   - Generating certificates and keys ...
	I0328 00:57:25.178782 1124641 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 00:57:25.178838 1124641 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 00:57:25.178904 1124641 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 00:57:25.179002 1124641 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 00:57:25.179104 1124641 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 00:57:25.179150 1124641 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 00:57:25.179202 1124641 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 00:57:25.179254 1124641 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 00:57:25.179315 1124641 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 00:57:25.179439 1124641 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 00:57:25.179506 1124641 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 00:57:25.179589 1124641 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 00:57:25.179668 1124641 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 00:57:25.179756 1124641 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 00:57:25.179850 1124641 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 00:57:25.179936 1124641 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 00:57:25.180072 1124641 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 00:57:25.180207 1124641 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 00:57:25.180264 1124641 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 00:57:25.180351 1124641 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 00:57:25.181881 1124641 out.go:204]   - Booting up control plane ...
	I0328 00:57:25.181986 1124641 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 00:57:25.182082 1124641 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 00:57:25.182210 1124641 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 00:57:25.182347 1124641 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 00:57:25.182587 1124641 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 00:57:25.182664 1124641 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0328 00:57:25.182770 1124641 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 00:57:25.183041 1124641 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 00:57:25.183139 1124641 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 00:57:25.183389 1124641 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 00:57:25.183506 1124641 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 00:57:25.183765 1124641 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 00:57:25.183858 1124641 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 00:57:25.184114 1124641 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 00:57:25.184190 1124641 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 00:57:25.184416 1124641 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 00:57:25.184429 1124641 kubeadm.go:309] 
	I0328 00:57:25.184474 1124641 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0328 00:57:25.184534 1124641 kubeadm.go:309] 		timed out waiting for the condition
	I0328 00:57:25.184543 1124641 kubeadm.go:309] 
	I0328 00:57:25.184593 1124641 kubeadm.go:309] 	This error is likely caused by:
	I0328 00:57:25.184639 1124641 kubeadm.go:309] 		- The kubelet is not running
	I0328 00:57:25.184772 1124641 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0328 00:57:25.184781 1124641 kubeadm.go:309] 
	I0328 00:57:25.184908 1124641 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0328 00:57:25.184956 1124641 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0328 00:57:25.185004 1124641 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0328 00:57:25.185017 1124641 kubeadm.go:309] 
	I0328 00:57:25.185147 1124641 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0328 00:57:25.185227 1124641 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0328 00:57:25.185235 1124641 kubeadm.go:309] 
	I0328 00:57:25.185338 1124641 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0328 00:57:25.185432 1124641 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0328 00:57:25.185517 1124641 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0328 00:57:25.185620 1124641 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0328 00:57:25.185662 1124641 kubeadm.go:309] 
	I0328 00:57:25.185702 1124641 kubeadm.go:393] duration metric: took 3m56.563621212s to StartCluster
	I0328 00:57:25.185750 1124641 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 00:57:25.185806 1124641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 00:57:25.233019 1124641 cri.go:89] found id: ""
	I0328 00:57:25.233059 1124641 logs.go:276] 0 containers: []
	W0328 00:57:25.233074 1124641 logs.go:278] No container was found matching "kube-apiserver"
	I0328 00:57:25.233084 1124641 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 00:57:25.233155 1124641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 00:57:25.272283 1124641 cri.go:89] found id: ""
	I0328 00:57:25.272315 1124641 logs.go:276] 0 containers: []
	W0328 00:57:25.272327 1124641 logs.go:278] No container was found matching "etcd"
	I0328 00:57:25.272337 1124641 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 00:57:25.272398 1124641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 00:57:25.309054 1124641 cri.go:89] found id: ""
	I0328 00:57:25.309088 1124641 logs.go:276] 0 containers: []
	W0328 00:57:25.309097 1124641 logs.go:278] No container was found matching "coredns"
	I0328 00:57:25.309104 1124641 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 00:57:25.309158 1124641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 00:57:25.353057 1124641 cri.go:89] found id: ""
	I0328 00:57:25.353094 1124641 logs.go:276] 0 containers: []
	W0328 00:57:25.353106 1124641 logs.go:278] No container was found matching "kube-scheduler"
	I0328 00:57:25.353114 1124641 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 00:57:25.353192 1124641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 00:57:25.391719 1124641 cri.go:89] found id: ""
	I0328 00:57:25.391761 1124641 logs.go:276] 0 containers: []
	W0328 00:57:25.391774 1124641 logs.go:278] No container was found matching "kube-proxy"
	I0328 00:57:25.391783 1124641 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 00:57:25.391844 1124641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 00:57:25.431184 1124641 cri.go:89] found id: ""
	I0328 00:57:25.431220 1124641 logs.go:276] 0 containers: []
	W0328 00:57:25.431232 1124641 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 00:57:25.431241 1124641 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 00:57:25.431312 1124641 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 00:57:25.470041 1124641 cri.go:89] found id: ""
	I0328 00:57:25.470078 1124641 logs.go:276] 0 containers: []
	W0328 00:57:25.470090 1124641 logs.go:278] No container was found matching "kindnet"
	I0328 00:57:25.470105 1124641 logs.go:123] Gathering logs for describe nodes ...
	I0328 00:57:25.470123 1124641 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 00:57:25.599003 1124641 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 00:57:25.599037 1124641 logs.go:123] Gathering logs for CRI-O ...
	I0328 00:57:25.599058 1124641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 00:57:25.696067 1124641 logs.go:123] Gathering logs for container status ...
	I0328 00:57:25.696119 1124641 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 00:57:25.745831 1124641 logs.go:123] Gathering logs for kubelet ...
	I0328 00:57:25.745873 1124641 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 00:57:25.798645 1124641 logs.go:123] Gathering logs for dmesg ...
	I0328 00:57:25.798684 1124641 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0328 00:57:25.817016 1124641 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0328 00:57:25.817083 1124641 out.go:239] * 
	* 
	W0328 00:57:25.817163 1124641 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0328 00:57:25.817199 1124641 out.go:239] * 
	* 
	W0328 00:57:25.818454 1124641 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 00:57:25.822102 1124641 out.go:177] 
	W0328 00:57:25.823380 1124641 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0328 00:57:25.823443 1124641 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0328 00:57:25.823482 1124641 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0328 00:57:25.825011 1124641 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-986088 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-986088 -n old-k8s-version-986088
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-986088 -n old-k8s-version-986088: exit status 6 (265.974047ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0328 00:57:26.134895 1130531 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-986088" does not appear in /home/jenkins/minikube-integration/18485-1069254/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-986088" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (284.12s)
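
Editor's note: the failure above is kubeadm timing out in its wait-control-plane phase because the kubelet never answered on http://localhost:10248/healthz. Below is a minimal troubleshooting sketch assembled only from the commands the log itself suggests; the profile name, driver, runtime, Kubernetes version, and the cgroup-driver hint are taken verbatim from the failing invocation and the minikube suggestion above, while the "minikube ssh" step and the log tail length are illustrative assumptions, not part of the original run.

	# on the host: open a shell inside the affected minikube guest (illustrative step)
	minikube ssh -p old-k8s-version-986088
	
	# inside the guest: check whether the kubelet unit is running and why it exited
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 100
	
	# inside the guest: list any control-plane containers CRI-O started, then inspect one
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	
	# back on the host: retry the start with the cgroup-driver hint minikube printed above
	minikube start -p old-k8s-version-986088 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

If the journal shows the kubelet exiting over a cgroup-driver mismatch, the final retry is the remedy minikube itself recommends; otherwise the crictl output usually identifies which static pod (apiserver, etcd, controller-manager, scheduler) failed to come up.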

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-248059 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-248059 --alsologtostderr -v=3: exit status 82 (2m0.552434336s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-248059"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 00:55:04.583633 1128932 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:55:04.583886 1128932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:55:04.583896 1128932 out.go:304] Setting ErrFile to fd 2...
	I0328 00:55:04.583900 1128932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:55:04.584135 1128932 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 00:55:04.584475 1128932 out.go:298] Setting JSON to false
	I0328 00:55:04.584559 1128932 mustload.go:65] Loading cluster: no-preload-248059
	I0328 00:55:04.584928 1128932 config.go:182] Loaded profile config "no-preload-248059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0328 00:55:04.584998 1128932 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/config.json ...
	I0328 00:55:04.585178 1128932 mustload.go:65] Loading cluster: no-preload-248059
	I0328 00:55:04.585281 1128932 config.go:182] Loaded profile config "no-preload-248059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0328 00:55:04.585318 1128932 stop.go:39] StopHost: no-preload-248059
	I0328 00:55:04.585761 1128932 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 00:55:04.585808 1128932 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:55:04.601252 1128932 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38649
	I0328 00:55:04.601832 1128932 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:55:04.602565 1128932 main.go:141] libmachine: Using API Version  1
	I0328 00:55:04.602613 1128932 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:55:04.602995 1128932 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:55:04.605593 1128932 out.go:177] * Stopping node "no-preload-248059"  ...
	I0328 00:55:04.606854 1128932 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0328 00:55:04.606883 1128932 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 00:55:04.607168 1128932 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0328 00:55:04.607204 1128932 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 00:55:04.610397 1128932 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 00:55:04.610833 1128932 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 01:53:36 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 00:55:04.610863 1128932 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 00:55:04.611116 1128932 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 00:55:04.611346 1128932 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 00:55:04.611533 1128932 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 00:55:04.611677 1128932 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 00:55:04.714917 1128932 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0328 00:55:04.787461 1128932 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0328 00:55:04.843721 1128932 main.go:141] libmachine: Stopping "no-preload-248059"...
	I0328 00:55:04.843748 1128932 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 00:55:04.845518 1128932 main.go:141] libmachine: (no-preload-248059) Calling .Stop
	I0328 00:55:04.849352 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 0/120
	I0328 00:55:05.850844 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 1/120
	I0328 00:55:06.853074 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 2/120
	I0328 00:55:07.854605 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 3/120
	I0328 00:55:08.856821 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 4/120
	I0328 00:55:09.859211 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 5/120
	I0328 00:55:10.860595 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 6/120
	I0328 00:55:11.862182 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 7/120
	I0328 00:55:12.863845 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 8/120
	I0328 00:55:13.866456 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 9/120
	I0328 00:55:14.869033 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 10/120
	I0328 00:55:15.871384 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 11/120
	I0328 00:55:16.873275 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 12/120
	I0328 00:55:17.874729 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 13/120
	I0328 00:55:18.876412 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 14/120
	I0328 00:55:19.878508 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 15/120
	I0328 00:55:20.880125 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 16/120
	I0328 00:55:21.881412 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 17/120
	I0328 00:55:22.882785 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 18/120
	I0328 00:55:23.885054 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 19/120
	I0328 00:55:24.887420 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 20/120
	I0328 00:55:25.888739 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 21/120
	I0328 00:55:26.890971 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 22/120
	I0328 00:55:27.893074 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 23/120
	I0328 00:55:28.894943 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 24/120
	I0328 00:55:29.897369 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 25/120
	I0328 00:55:30.898837 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 26/120
	I0328 00:55:31.900414 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 27/120
	I0328 00:55:32.901936 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 28/120
	I0328 00:55:33.903520 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 29/120
	I0328 00:55:34.905033 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 30/120
	I0328 00:55:35.906425 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 31/120
	I0328 00:55:36.907681 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 32/120
	I0328 00:55:37.909076 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 33/120
	I0328 00:55:38.910474 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 34/120
	I0328 00:55:39.912436 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 35/120
	I0328 00:55:40.913869 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 36/120
	I0328 00:55:41.915509 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 37/120
	I0328 00:55:42.916979 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 38/120
	I0328 00:55:43.918667 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 39/120
	I0328 00:55:44.920882 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 40/120
	I0328 00:55:45.922467 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 41/120
	I0328 00:55:46.924276 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 42/120
	I0328 00:55:47.925805 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 43/120
	I0328 00:55:48.927267 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 44/120
	I0328 00:55:49.929529 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 45/120
	I0328 00:55:50.931048 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 46/120
	I0328 00:55:51.932745 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 47/120
	I0328 00:55:52.934481 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 48/120
	I0328 00:55:53.935976 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 49/120
	I0328 00:55:54.937693 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 50/120
	I0328 00:55:55.939074 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 51/120
	I0328 00:55:56.940681 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 52/120
	I0328 00:55:57.942039 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 53/120
	I0328 00:55:58.943226 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 54/120
	I0328 00:55:59.945190 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 55/120
	I0328 00:56:00.946721 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 56/120
	I0328 00:56:01.948953 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 57/120
	I0328 00:56:02.950604 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 58/120
	I0328 00:56:03.952167 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 59/120
	I0328 00:56:04.954633 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 60/120
	I0328 00:56:05.956619 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 61/120
	I0328 00:56:06.958373 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 62/120
	I0328 00:56:07.960952 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 63/120
	I0328 00:56:08.962775 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 64/120
	I0328 00:56:09.964873 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 65/120
	I0328 00:56:10.966300 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 66/120
	I0328 00:56:11.967882 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 67/120
	I0328 00:56:12.969397 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 68/120
	I0328 00:56:13.970698 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 69/120
	I0328 00:56:14.972895 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 70/120
	I0328 00:56:15.974493 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 71/120
	I0328 00:56:16.977186 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 72/120
	I0328 00:56:17.978622 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 73/120
	I0328 00:56:18.980586 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 74/120
	I0328 00:56:19.982658 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 75/120
	I0328 00:56:20.985018 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 76/120
	I0328 00:56:21.987182 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 77/120
	I0328 00:56:22.990637 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 78/120
	I0328 00:56:23.992899 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 79/120
	I0328 00:56:24.994989 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 80/120
	I0328 00:56:25.996876 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 81/120
	I0328 00:56:26.998410 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 82/120
	I0328 00:56:28.000053 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 83/120
	I0328 00:56:29.001666 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 84/120
	I0328 00:56:30.003889 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 85/120
	I0328 00:56:31.005529 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 86/120
	I0328 00:56:32.007051 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 87/120
	I0328 00:56:33.008996 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 88/120
	I0328 00:56:34.010996 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 89/120
	I0328 00:56:35.013371 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 90/120
	I0328 00:56:36.014933 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 91/120
	I0328 00:56:37.016297 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 92/120
	I0328 00:56:38.017803 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 93/120
	I0328 00:56:39.019408 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 94/120
	I0328 00:56:40.021628 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 95/120
	I0328 00:56:41.023256 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 96/120
	I0328 00:56:42.024827 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 97/120
	I0328 00:56:43.026645 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 98/120
	I0328 00:56:44.028256 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 99/120
	I0328 00:56:45.030884 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 100/120
	I0328 00:56:46.032952 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 101/120
	I0328 00:56:47.034438 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 102/120
	I0328 00:56:48.037159 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 103/120
	I0328 00:56:49.038557 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 104/120
	I0328 00:56:50.040602 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 105/120
	I0328 00:56:51.042294 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 106/120
	I0328 00:56:52.043830 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 107/120
	I0328 00:56:53.045466 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 108/120
	I0328 00:56:54.047547 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 109/120
	I0328 00:56:55.049799 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 110/120
	I0328 00:56:56.051508 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 111/120
	I0328 00:56:57.052918 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 112/120
	I0328 00:56:58.054344 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 113/120
	I0328 00:56:59.055959 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 114/120
	I0328 00:57:00.058206 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 115/120
	I0328 00:57:01.059708 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 116/120
	I0328 00:57:02.061465 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 117/120
	I0328 00:57:03.063007 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 118/120
	I0328 00:57:04.064466 1128932 main.go:141] libmachine: (no-preload-248059) Waiting for machine to stop 119/120
	I0328 00:57:05.065805 1128932 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0328 00:57:05.065861 1128932 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0328 00:57:05.068036 1128932 out.go:177] 
	W0328 00:57:05.069573 1128932 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0328 00:57:05.069592 1128932 out.go:239] * 
	* 
	W0328 00:57:05.074811 1128932 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 00:57:05.076190 1128932 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p no-preload-248059 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-248059 -n no-preload-248059
E0328 00:57:08.418660 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/custom-flannel-443419/client.crt: no such file or directory
E0328 00:57:10.821366 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/enable-default-cni-443419/client.crt: no such file or directory
E0328 00:57:10.826666 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/enable-default-cni-443419/client.crt: no such file or directory
E0328 00:57:10.836960 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/enable-default-cni-443419/client.crt: no such file or directory
E0328 00:57:10.857349 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/enable-default-cni-443419/client.crt: no such file or directory
E0328 00:57:10.897743 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/enable-default-cni-443419/client.crt: no such file or directory
E0328 00:57:10.978205 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/enable-default-cni-443419/client.crt: no such file or directory
E0328 00:57:11.138616 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/enable-default-cni-443419/client.crt: no such file or directory
E0328 00:57:11.459534 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/enable-default-cni-443419/client.crt: no such file or directory
E0328 00:57:12.100293 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/enable-default-cni-443419/client.crt: no such file or directory
E0328 00:57:13.381287 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/enable-default-cni-443419/client.crt: no such file or directory
E0328 00:57:15.941932 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/enable-default-cni-443419/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-248059 -n no-preload-248059: exit status 3 (18.614029355s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0328 00:57:23.690589 1130332 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.107:22: connect: no route to host
	E0328 00:57:23.690614 1130332 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.107:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-248059" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.17s)
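The stop failures in this group all follow the same shape: libmachine issues the stop request once, then polls the domain state every second for 120 attempts (the "Waiting for machine to stop N/120" lines above) and finally gives up with GUEST_STOP_TIMEOUT because the VM never leaves "Running". The Go sketch below only illustrates that polling pattern under those assumptions; waitForStop and vmState are hypothetical names, not minikube's actual driver code.

package main

import (
	"errors"
	"fmt"
	"time"
)

// vmState stands in for the driver's state query; here it never changes,
// mirroring the log above where the domain stays "Running" for all 120 polls.
func vmState() string { return "Running" }

func waitForStop(attempts int) error {
	for i := 0; i < attempts; i++ {
		if vmState() == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// 120 one-second polls, matching the N/120 counter in the captured log.
	if err := waitForStop(120); err != nil {
		fmt.Println("stop err:", err)
	}
}

With a healthy guest the loop would exit on the first poll that reports "Stopped"; here every poll sees "Running", which is exactly the 0/120 through 119/120 sequence captured above before the GUEST_STOP_TIMEOUT exit.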

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-808809 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-808809 --alsologtostderr -v=3: exit status 82 (2m0.549831225s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-808809"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 00:55:16.494777 1129056 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:55:16.494906 1129056 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:55:16.494916 1129056 out.go:304] Setting ErrFile to fd 2...
	I0328 00:55:16.494921 1129056 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:55:16.495482 1129056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 00:55:16.495985 1129056 out.go:298] Setting JSON to false
	I0328 00:55:16.496146 1129056 mustload.go:65] Loading cluster: embed-certs-808809
	I0328 00:55:16.497174 1129056 config.go:182] Loaded profile config "embed-certs-808809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:55:16.497338 1129056 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/config.json ...
	I0328 00:55:16.497557 1129056 mustload.go:65] Loading cluster: embed-certs-808809
	I0328 00:55:16.497669 1129056 config.go:182] Loaded profile config "embed-certs-808809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:55:16.497698 1129056 stop.go:39] StopHost: embed-certs-808809
	I0328 00:55:16.498102 1129056 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 00:55:16.498162 1129056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:55:16.513681 1129056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40039
	I0328 00:55:16.514343 1129056 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:55:16.514974 1129056 main.go:141] libmachine: Using API Version  1
	I0328 00:55:16.515000 1129056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:55:16.515405 1129056 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:55:16.518213 1129056 out.go:177] * Stopping node "embed-certs-808809"  ...
	I0328 00:55:16.519822 1129056 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0328 00:55:16.519856 1129056 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 00:55:16.520147 1129056 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0328 00:55:16.520179 1129056 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 00:55:16.523397 1129056 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 00:55:16.523826 1129056 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 01:54:15 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 00:55:16.523851 1129056 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 00:55:16.524050 1129056 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 00:55:16.524238 1129056 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 00:55:16.524409 1129056 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 00:55:16.524543 1129056 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 00:55:16.631331 1129056 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0328 00:55:16.697580 1129056 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0328 00:55:16.755181 1129056 main.go:141] libmachine: Stopping "embed-certs-808809"...
	I0328 00:55:16.755227 1129056 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 00:55:16.757355 1129056 main.go:141] libmachine: (embed-certs-808809) Calling .Stop
	I0328 00:55:16.761381 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 0/120
	I0328 00:55:17.762887 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 1/120
	I0328 00:55:18.764746 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 2/120
	I0328 00:55:19.766192 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 3/120
	I0328 00:55:20.767772 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 4/120
	I0328 00:55:21.770175 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 5/120
	I0328 00:55:22.772549 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 6/120
	I0328 00:55:23.774449 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 7/120
	I0328 00:55:24.776829 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 8/120
	I0328 00:55:25.778501 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 9/120
	I0328 00:55:26.781064 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 10/120
	I0328 00:55:27.782624 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 11/120
	I0328 00:55:28.785274 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 12/120
	I0328 00:55:29.786795 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 13/120
	I0328 00:55:30.788945 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 14/120
	I0328 00:55:31.790631 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 15/120
	I0328 00:55:32.792246 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 16/120
	I0328 00:55:33.793577 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 17/120
	I0328 00:55:34.795112 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 18/120
	I0328 00:55:35.796422 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 19/120
	I0328 00:55:36.798608 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 20/120
	I0328 00:55:37.800076 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 21/120
	I0328 00:55:38.801524 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 22/120
	I0328 00:55:39.802928 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 23/120
	I0328 00:55:40.804605 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 24/120
	I0328 00:55:41.806729 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 25/120
	I0328 00:55:42.808699 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 26/120
	I0328 00:55:43.810111 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 27/120
	I0328 00:55:44.811514 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 28/120
	I0328 00:55:45.813298 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 29/120
	I0328 00:55:46.815084 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 30/120
	I0328 00:55:47.816726 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 31/120
	I0328 00:55:48.818473 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 32/120
	I0328 00:55:49.820967 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 33/120
	I0328 00:55:50.822733 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 34/120
	I0328 00:55:51.824879 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 35/120
	I0328 00:55:52.826625 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 36/120
	I0328 00:55:53.828155 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 37/120
	I0328 00:55:54.829531 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 38/120
	I0328 00:55:55.831235 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 39/120
	I0328 00:55:56.833662 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 40/120
	I0328 00:55:57.835064 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 41/120
	I0328 00:55:58.836736 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 42/120
	I0328 00:55:59.838050 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 43/120
	I0328 00:56:00.840330 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 44/120
	I0328 00:56:01.842387 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 45/120
	I0328 00:56:02.845214 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 46/120
	I0328 00:56:03.846548 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 47/120
	I0328 00:56:04.848941 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 48/120
	I0328 00:56:05.850424 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 49/120
	I0328 00:56:06.852731 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 50/120
	I0328 00:56:07.854831 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 51/120
	I0328 00:56:08.856834 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 52/120
	I0328 00:56:09.858407 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 53/120
	I0328 00:56:10.860151 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 54/120
	I0328 00:56:11.862316 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 55/120
	I0328 00:56:12.863832 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 56/120
	I0328 00:56:13.865390 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 57/120
	I0328 00:56:14.866864 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 58/120
	I0328 00:56:15.868376 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 59/120
	I0328 00:56:16.870873 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 60/120
	I0328 00:56:17.872839 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 61/120
	I0328 00:56:18.874670 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 62/120
	I0328 00:56:19.876656 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 63/120
	I0328 00:56:20.878024 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 64/120
	I0328 00:56:21.880086 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 65/120
	I0328 00:56:22.881732 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 66/120
	I0328 00:56:23.883431 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 67/120
	I0328 00:56:24.885017 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 68/120
	I0328 00:56:25.886494 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 69/120
	I0328 00:56:26.888937 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 70/120
	I0328 00:56:27.890446 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 71/120
	I0328 00:56:28.891982 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 72/120
	I0328 00:56:29.893649 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 73/120
	I0328 00:56:30.895320 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 74/120
	I0328 00:56:31.897748 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 75/120
	I0328 00:56:32.899538 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 76/120
	I0328 00:56:33.901187 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 77/120
	I0328 00:56:34.902647 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 78/120
	I0328 00:56:35.905054 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 79/120
	I0328 00:56:36.907407 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 80/120
	I0328 00:56:37.909179 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 81/120
	I0328 00:56:38.910876 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 82/120
	I0328 00:56:39.912513 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 83/120
	I0328 00:56:40.913878 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 84/120
	I0328 00:56:41.916049 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 85/120
	I0328 00:56:42.917512 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 86/120
	I0328 00:56:43.919067 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 87/120
	I0328 00:56:44.920636 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 88/120
	I0328 00:56:45.923571 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 89/120
	I0328 00:56:46.926086 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 90/120
	I0328 00:56:47.927864 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 91/120
	I0328 00:56:48.929220 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 92/120
	I0328 00:56:49.931107 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 93/120
	I0328 00:56:50.932662 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 94/120
	I0328 00:56:51.934267 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 95/120
	I0328 00:56:52.936752 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 96/120
	I0328 00:56:53.938376 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 97/120
	I0328 00:56:54.939873 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 98/120
	I0328 00:56:55.941592 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 99/120
	I0328 00:56:56.942863 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 100/120
	I0328 00:56:57.944680 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 101/120
	I0328 00:56:58.946220 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 102/120
	I0328 00:56:59.947702 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 103/120
	I0328 00:57:00.949204 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 104/120
	I0328 00:57:01.951288 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 105/120
	I0328 00:57:02.953008 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 106/120
	I0328 00:57:03.954614 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 107/120
	I0328 00:57:04.956194 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 108/120
	I0328 00:57:05.957850 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 109/120
	I0328 00:57:06.960272 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 110/120
	I0328 00:57:07.962029 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 111/120
	I0328 00:57:08.963804 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 112/120
	I0328 00:57:09.965261 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 113/120
	I0328 00:57:10.966872 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 114/120
	I0328 00:57:11.968885 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 115/120
	I0328 00:57:12.970557 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 116/120
	I0328 00:57:13.972028 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 117/120
	I0328 00:57:14.973506 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 118/120
	I0328 00:57:15.975187 1129056 main.go:141] libmachine: (embed-certs-808809) Waiting for machine to stop 119/120
	I0328 00:57:16.975776 1129056 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0328 00:57:16.975833 1129056 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0328 00:57:16.977858 1129056 out.go:177] 
	W0328 00:57:16.979279 1129056 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0328 00:57:16.979306 1129056 out.go:239] * 
	* 
	W0328 00:57:16.984248 1129056 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 00:57:16.985741 1129056 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p embed-certs-808809 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-808809 -n embed-certs-808809
E0328 00:57:21.062868 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/enable-default-cni-443419/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-808809 -n embed-certs-808809: exit status 3 (18.479834883s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0328 00:57:35.466594 1130394 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.210:22: connect: no route to host
	E0328 00:57:35.466619 1130394 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.210:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-808809" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.03s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-248059 -n no-preload-248059
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-248059 -n no-preload-248059: exit status 3 (3.201556568s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0328 00:57:26.890637 1130499 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.107:22: connect: no route to host
	E0328 00:57:26.890662 1130499 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.107:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-248059 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0328 00:57:28.899223 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/custom-flannel-443419/client.crt: no such file or directory
E0328 00:57:31.303052 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/enable-default-cni-443419/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-248059 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153658279s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.107:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-248059 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-248059 -n no-preload-248059
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-248059 -n no-preload-248059: exit status 3 (3.061701689s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0328 00:57:36.106777 1130726 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.107:22: connect: no route to host
	E0328 00:57:36.106812 1130726 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.107:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-248059" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)
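Every command that runs after the failed stop dies the same way: minikube tries to open an SSH session to the half-stopped VM, the TCP dial to port 22 returns "no route to host", so status reports Error and the addon enable exits with MK_ADDON_ENABLE_PAUSED. The sketch below is an illustrative reachability probe against the address taken from the errors above; it is not part of the test harness.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// SSH endpoint of the no-preload VM, taken from the log above.
	addr := "192.168.61.107:22"
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// In the broken state this is the "connect: no route to host"
		// reported by both the status and addon-enable commands.
		fmt.Println("ssh endpoint unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("ssh endpoint reachable")
}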

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-986088 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-986088 create -f testdata/busybox.yaml: exit status 1 (55.487607ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-986088" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-986088 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-986088 -n old-k8s-version-986088
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-986088 -n old-k8s-version-986088: exit status 6 (249.6574ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0328 00:57:26.448038 1130570 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-986088" does not appear in /home/jenkins/minikube-integration/18485-1069254/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-986088" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-986088 -n old-k8s-version-986088
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-986088 -n old-k8s-version-986088: exit status 6 (242.373905ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0328 00:57:26.692057 1130600 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-986088" does not appear in /home/jenkins/minikube-integration/18485-1069254/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-986088" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.55s)
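Unlike the stop timeouts, this failure happens before anything reaches the VM: the "old-k8s-version-986088" context is missing from the shared kubeconfig, so kubectl refuses the create, and the follow-up status calls report that the endpoint "does not appear in .../kubeconfig". A small sketch of checking for the context up front is below; it uses k8s.io/client-go/tools/clientcmd and is only an illustration, not something the suite actually does.

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path taken from the log; normally this comes from $KUBECONFIG or the default location.
	kubeconfig := "/home/jenkins/minikube-integration/18485-1069254/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	if _, ok := cfg.Contexts["old-k8s-version-986088"]; !ok {
		fmt.Println(`context "old-k8s-version-986088" does not exist`)
		os.Exit(1)
	}
	fmt.Println("context present")
}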

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (96.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-986088 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-986088 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m36.608103909s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-986088 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-986088 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-986088 describe deploy/metrics-server -n kube-system: exit status 1 (48.075387ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-986088" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-986088 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-986088 -n old-k8s-version-986088
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-986088 -n old-k8s-version-986088: exit status 6 (249.75019ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0328 00:59:03.597757 1131209 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-986088" does not appear in /home/jenkins/minikube-integration/18485-1069254/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-986088" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (96.91s)
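Here the addon enable does reach the node, but the apiserver on localhost:8443 refuses the connection, so every manifest in the enable callback fails. A rough liveness probe equivalent to running curl -k https://localhost:8443/healthz on the node would show the same refusal; the Go sketch below is purely illustrative and skips TLS verification only because it checks reachability, not identity.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a cluster-local CA, so a bare reachability probe skips verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		// On this node the error is the "connection refused" seen in the log.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver /healthz:", resp.Status)
}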

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-283961 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-283961 --alsologtostderr -v=3: exit status 82 (2m0.528171983s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-283961"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 00:57:34.053005 1130778 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:57:34.053133 1130778 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:57:34.053142 1130778 out.go:304] Setting ErrFile to fd 2...
	I0328 00:57:34.053146 1130778 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:57:34.053333 1130778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 00:57:34.053566 1130778 out.go:298] Setting JSON to false
	I0328 00:57:34.053638 1130778 mustload.go:65] Loading cluster: default-k8s-diff-port-283961
	I0328 00:57:34.053959 1130778 config.go:182] Loaded profile config "default-k8s-diff-port-283961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:57:34.054028 1130778 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/config.json ...
	I0328 00:57:34.054206 1130778 mustload.go:65] Loading cluster: default-k8s-diff-port-283961
	I0328 00:57:34.054341 1130778 config.go:182] Loaded profile config "default-k8s-diff-port-283961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:57:34.054374 1130778 stop.go:39] StopHost: default-k8s-diff-port-283961
	I0328 00:57:34.054790 1130778 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 00:57:34.054830 1130778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:57:34.069838 1130778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37443
	I0328 00:57:34.070325 1130778 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:57:34.070938 1130778 main.go:141] libmachine: Using API Version  1
	I0328 00:57:34.070967 1130778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:57:34.071393 1130778 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:57:34.074282 1130778 out.go:177] * Stopping node "default-k8s-diff-port-283961"  ...
	I0328 00:57:34.075573 1130778 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0328 00:57:34.075619 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 00:57:34.075901 1130778 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0328 00:57:34.075951 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 00:57:34.078878 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 00:57:34.079274 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 01:56:37 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 00:57:34.079309 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 00:57:34.079481 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 00:57:34.079697 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 00:57:34.079898 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 00:57:34.080052 1130778 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 00:57:34.184164 1130778 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0328 00:57:34.246924 1130778 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0328 00:57:34.306473 1130778 main.go:141] libmachine: Stopping "default-k8s-diff-port-283961"...
	I0328 00:57:34.306502 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 00:57:34.308087 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Stop
	I0328 00:57:34.311709 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 0/120
	I0328 00:57:35.313168 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 1/120
	I0328 00:57:36.314115 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 2/120
	I0328 00:57:37.315581 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 3/120
	I0328 00:57:38.317158 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 4/120
	I0328 00:57:39.319661 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 5/120
	I0328 00:57:40.321185 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 6/120
	I0328 00:57:41.322742 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 7/120
	I0328 00:57:42.324160 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 8/120
	I0328 00:57:43.325554 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 9/120
	I0328 00:57:44.327098 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 10/120
	I0328 00:57:45.328682 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 11/120
	I0328 00:57:46.330176 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 12/120
	I0328 00:57:47.331745 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 13/120
	I0328 00:57:48.333071 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 14/120
	I0328 00:57:49.335275 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 15/120
	I0328 00:57:50.336672 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 16/120
	I0328 00:57:51.337834 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 17/120
	I0328 00:57:52.339279 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 18/120
	I0328 00:57:53.340535 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 19/120
	I0328 00:57:54.342828 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 20/120
	I0328 00:57:55.344158 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 21/120
	I0328 00:57:56.345499 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 22/120
	I0328 00:57:57.347049 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 23/120
	I0328 00:57:58.348419 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 24/120
	I0328 00:57:59.350849 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 25/120
	I0328 00:58:00.352759 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 26/120
	I0328 00:58:01.354282 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 27/120
	I0328 00:58:02.355938 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 28/120
	I0328 00:58:03.357366 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 29/120
	I0328 00:58:04.359800 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 30/120
	I0328 00:58:05.361356 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 31/120
	I0328 00:58:06.362622 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 32/120
	I0328 00:58:07.364470 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 33/120
	I0328 00:58:08.366008 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 34/120
	I0328 00:58:09.368318 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 35/120
	I0328 00:58:10.369936 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 36/120
	I0328 00:58:11.371445 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 37/120
	I0328 00:58:12.373044 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 38/120
	I0328 00:58:13.374792 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 39/120
	I0328 00:58:14.376075 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 40/120
	I0328 00:58:15.377656 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 41/120
	I0328 00:58:16.379060 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 42/120
	I0328 00:58:17.380588 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 43/120
	I0328 00:58:18.381862 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 44/120
	I0328 00:58:19.384259 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 45/120
	I0328 00:58:20.385842 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 46/120
	I0328 00:58:21.387283 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 47/120
	I0328 00:58:22.388918 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 48/120
	I0328 00:58:23.390605 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 49/120
	I0328 00:58:24.392941 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 50/120
	I0328 00:58:25.394500 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 51/120
	I0328 00:58:26.396024 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 52/120
	I0328 00:58:27.397616 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 53/120
	I0328 00:58:28.399336 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 54/120
	I0328 00:58:29.401415 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 55/120
	I0328 00:58:30.403025 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 56/120
	I0328 00:58:31.405049 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 57/120
	I0328 00:58:32.406643 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 58/120
	I0328 00:58:33.408068 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 59/120
	I0328 00:58:34.409410 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 60/120
	I0328 00:58:35.410875 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 61/120
	I0328 00:58:36.412417 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 62/120
	I0328 00:58:37.414183 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 63/120
	I0328 00:58:38.415616 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 64/120
	I0328 00:58:39.417863 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 65/120
	I0328 00:58:40.419367 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 66/120
	I0328 00:58:41.421224 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 67/120
	I0328 00:58:42.422905 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 68/120
	I0328 00:58:43.424608 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 69/120
	I0328 00:58:44.426183 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 70/120
	I0328 00:58:45.427992 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 71/120
	I0328 00:58:46.429749 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 72/120
	I0328 00:58:47.431322 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 73/120
	I0328 00:58:48.432896 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 74/120
	I0328 00:58:49.435147 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 75/120
	I0328 00:58:50.436792 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 76/120
	I0328 00:58:51.438481 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 77/120
	I0328 00:58:52.440371 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 78/120
	I0328 00:58:53.441884 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 79/120
	I0328 00:58:54.444410 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 80/120
	I0328 00:58:55.445976 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 81/120
	I0328 00:58:56.447365 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 82/120
	I0328 00:58:57.448902 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 83/120
	I0328 00:58:58.450319 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 84/120
	I0328 00:58:59.452955 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 85/120
	I0328 00:59:00.454785 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 86/120
	I0328 00:59:01.457172 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 87/120
	I0328 00:59:02.458866 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 88/120
	I0328 00:59:03.460752 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 89/120
	I0328 00:59:04.463438 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 90/120
	I0328 00:59:05.465148 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 91/120
	I0328 00:59:06.466623 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 92/120
	I0328 00:59:07.468290 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 93/120
	I0328 00:59:08.469747 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 94/120
	I0328 00:59:09.472248 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 95/120
	I0328 00:59:10.473943 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 96/120
	I0328 00:59:11.475455 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 97/120
	I0328 00:59:12.477094 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 98/120
	I0328 00:59:13.478386 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 99/120
	I0328 00:59:14.480054 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 100/120
	I0328 00:59:15.481386 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 101/120
	I0328 00:59:16.483014 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 102/120
	I0328 00:59:17.484632 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 103/120
	I0328 00:59:18.486015 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 104/120
	I0328 00:59:19.488439 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 105/120
	I0328 00:59:20.490022 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 106/120
	I0328 00:59:21.491518 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 107/120
	I0328 00:59:22.493028 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 108/120
	I0328 00:59:23.494522 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 109/120
	I0328 00:59:24.496063 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 110/120
	I0328 00:59:25.497583 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 111/120
	I0328 00:59:26.499131 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 112/120
	I0328 00:59:27.500821 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 113/120
	I0328 00:59:28.502360 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 114/120
	I0328 00:59:29.504682 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 115/120
	I0328 00:59:30.506054 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 116/120
	I0328 00:59:31.507648 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 117/120
	I0328 00:59:32.508917 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 118/120
	I0328 00:59:33.510699 1130778 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for machine to stop 119/120
	I0328 00:59:34.512257 1130778 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0328 00:59:34.512338 1130778 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0328 00:59:34.514328 1130778 out.go:177] 
	W0328 00:59:34.515639 1130778 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0328 00:59:34.515658 1130778 out.go:239] * 
	* 
	W0328 00:59:34.520535 1130778 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 00:59:34.522798 1130778 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-283961 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-283961 -n default-k8s-diff-port-283961
E0328 00:59:46.107405 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/client.crt: no such file or directory
E0328 00:59:52.854947 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/auto-443419/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-283961 -n default-k8s-diff-port-283961: exit status 3 (18.672595402s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0328 00:59:53.194615 1131427 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host
	E0328 00:59:53.194641 1131427 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-283961" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.20s)
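Note: the GUEST_STOP_TIMEOUT box above asks for two artifacts when filing an issue. A minimal collection step, sketched here under the assumption that the profile still exists on the host, would be:

	# capture cluster logs plus the stop-specific log file referenced in the box above
	out/minikube-linux-amd64 logs --file=logs.txt -p default-k8s-diff-port-283961
	cp /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log .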

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-808809 -n embed-certs-808809
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-808809 -n embed-certs-808809: exit status 3 (3.168502539s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0328 00:57:38.634654 1130796 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.210:22: connect: no route to host
	E0328 00:57:38.634679 1130796 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.210:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-808809 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0328 00:57:41.076467 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/calico-443419/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-808809 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154688058s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.210:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-808809 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-808809 -n embed-certs-808809
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-808809 -n embed-certs-808809: exit status 3 (3.066137357s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0328 00:57:47.854614 1130908 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.210:22: connect: no route to host
	E0328 00:57:47.854637 1130908 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.210:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-808809" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.39s)
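Note: the addon enable fails with MK_ADDON_ENABLE_PAUSED only because the node at 192.168.72.210 is unreachable over SSH, not because of the dashboard addon itself. A minimal pre-check before retrying, assuming the same profile, would be:

	# confirm the host reports a usable state before re-running the addon enable from this test
	out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-808809
	out/minikube-linux-amd64 addons enable dashboard -p embed-certs-808809 --images=MetricsScraper=registry.k8s.io/echoserver:1.4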

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (720.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-986088 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0328 00:59:17.448545 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/bridge-443419/client.crt: no such file or directory
E0328 00:59:31.780248 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/custom-flannel-443419/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-986088 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m56.862107968s)

                                                
                                                
-- stdout --
	* [old-k8s-version-986088] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-986088" primary control-plane node in "old-k8s-version-986088" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-986088" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 00:59:05.333380 1131323 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:59:05.333805 1131323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:59:05.333823 1131323 out.go:304] Setting ErrFile to fd 2...
	I0328 00:59:05.333831 1131323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:59:05.334320 1131323 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 00:59:05.335793 1131323 out.go:298] Setting JSON to false
	I0328 00:59:05.336874 1131323 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":31242,"bootTime":1711556303,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0328 00:59:05.336946 1131323 start.go:139] virtualization: kvm guest
	I0328 00:59:05.339023 1131323 out.go:177] * [old-k8s-version-986088] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0328 00:59:05.340460 1131323 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 00:59:05.340523 1131323 notify.go:220] Checking for updates...
	I0328 00:59:05.341884 1131323 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 00:59:05.343402 1131323 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 00:59:05.344881 1131323 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	I0328 00:59:05.346307 1131323 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0328 00:59:05.347760 1131323 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 00:59:05.349654 1131323 config.go:182] Loaded profile config "old-k8s-version-986088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0328 00:59:05.350269 1131323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 00:59:05.350334 1131323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:59:05.365998 1131323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46343
	I0328 00:59:05.366522 1131323 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:59:05.367087 1131323 main.go:141] libmachine: Using API Version  1
	I0328 00:59:05.367104 1131323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:59:05.367521 1131323 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:59:05.367719 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 00:59:05.369741 1131323 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0328 00:59:05.371098 1131323 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 00:59:05.371520 1131323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 00:59:05.371567 1131323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:59:05.386742 1131323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34973
	I0328 00:59:05.387182 1131323 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:59:05.387644 1131323 main.go:141] libmachine: Using API Version  1
	I0328 00:59:05.387687 1131323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:59:05.388021 1131323 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:59:05.388243 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 00:59:05.426015 1131323 out.go:177] * Using the kvm2 driver based on existing profile
	I0328 00:59:05.427298 1131323 start.go:297] selected driver: kvm2
	I0328 00:59:05.427316 1131323 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-986088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:59:05.427447 1131323 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 00:59:05.428237 1131323 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 00:59:05.428313 1131323 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18485-1069254/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0328 00:59:05.444800 1131323 install.go:137] /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0328 00:59:05.445167 1131323 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 00:59:05.445241 1131323 cni.go:84] Creating CNI manager for ""
	I0328 00:59:05.445255 1131323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 00:59:05.445291 1131323 start.go:340] cluster config:
	{Name:old-k8s-version-986088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:59:05.445391 1131323 iso.go:125] acquiring lock: {Name:mk3da1fa7d63a581c817f327d86a827697458cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 00:59:05.447397 1131323 out.go:177] * Starting "old-k8s-version-986088" primary control-plane node in "old-k8s-version-986088" cluster
	I0328 00:59:05.448787 1131323 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0328 00:59:05.448830 1131323 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0328 00:59:05.448846 1131323 cache.go:56] Caching tarball of preloaded images
	I0328 00:59:05.448944 1131323 preload.go:173] Found /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0328 00:59:05.448955 1131323 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0328 00:59:05.449056 1131323 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/config.json ...
	I0328 00:59:05.449266 1131323 start.go:360] acquireMachinesLock for old-k8s-version-986088: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 01:02:32.751719 1131323 start.go:364] duration metric: took 3m27.302408957s to acquireMachinesLock for "old-k8s-version-986088"
	I0328 01:02:32.751823 1131323 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:02:32.751833 1131323 fix.go:54] fixHost starting: 
	I0328 01:02:32.752289 1131323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:02:32.752326 1131323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:02:32.770119 1131323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45807
	I0328 01:02:32.770723 1131323 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:02:32.771352 1131323 main.go:141] libmachine: Using API Version  1
	I0328 01:02:32.771380 1131323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:02:32.771790 1131323 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:02:32.772020 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:32.772206 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetState
	I0328 01:02:32.773947 1131323 fix.go:112] recreateIfNeeded on old-k8s-version-986088: state=Stopped err=<nil>
	I0328 01:02:32.773980 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	W0328 01:02:32.774166 1131323 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:02:32.776416 1131323 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-986088" ...
	I0328 01:02:32.778280 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .Start
	I0328 01:02:32.778470 1131323 main.go:141] libmachine: (old-k8s-version-986088) Ensuring networks are active...
	I0328 01:02:32.779179 1131323 main.go:141] libmachine: (old-k8s-version-986088) Ensuring network default is active
	I0328 01:02:32.779577 1131323 main.go:141] libmachine: (old-k8s-version-986088) Ensuring network mk-old-k8s-version-986088 is active
	I0328 01:02:32.779982 1131323 main.go:141] libmachine: (old-k8s-version-986088) Getting domain xml...
	I0328 01:02:32.780732 1131323 main.go:141] libmachine: (old-k8s-version-986088) Creating domain...
	I0328 01:02:34.066287 1131323 main.go:141] libmachine: (old-k8s-version-986088) Waiting to get IP...
	I0328 01:02:34.067193 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.067618 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.067684 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.067586 1132067 retry.go:31] will retry after 291.270379ms: waiting for machine to come up
	I0328 01:02:34.360203 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.360690 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.360721 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.360638 1132067 retry.go:31] will retry after 234.968456ms: waiting for machine to come up
	I0328 01:02:34.597291 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.597818 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.597849 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.597750 1132067 retry.go:31] will retry after 382.522593ms: waiting for machine to come up
	I0328 01:02:34.982502 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.983176 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.983205 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.983133 1132067 retry.go:31] will retry after 436.332635ms: waiting for machine to come up
	I0328 01:02:35.421623 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:35.422164 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:35.422198 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:35.422135 1132067 retry.go:31] will retry after 700.861268ms: waiting for machine to come up
	I0328 01:02:36.124589 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:36.125001 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:36.125031 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:36.124948 1132067 retry.go:31] will retry after 932.342478ms: waiting for machine to come up
	I0328 01:02:37.058954 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:37.059390 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:37.059424 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:37.059332 1132067 retry.go:31] will retry after 1.163248691s: waiting for machine to come up
	I0328 01:02:38.224574 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:38.225019 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:38.225053 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:38.224959 1132067 retry.go:31] will retry after 1.13372539s: waiting for machine to come up
	I0328 01:02:39.360393 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:39.360953 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:39.360984 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:39.360906 1132067 retry.go:31] will retry after 1.793272671s: waiting for machine to come up
	I0328 01:02:41.156898 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:41.157309 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:41.157336 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:41.157263 1132067 retry.go:31] will retry after 1.863775673s: waiting for machine to come up
	I0328 01:02:43.023074 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:43.023470 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:43.023507 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:43.023419 1132067 retry.go:31] will retry after 2.73600503s: waiting for machine to come up
	I0328 01:02:45.762440 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:45.762891 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:45.762915 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:45.762845 1132067 retry.go:31] will retry after 2.201941476s: waiting for machine to come up
	I0328 01:02:47.966601 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:47.967196 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:47.967237 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:47.967144 1132067 retry.go:31] will retry after 4.122216816s: waiting for machine to come up
	I0328 01:02:52.091210 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.091759 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has current primary IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.091794 1131323 main.go:141] libmachine: (old-k8s-version-986088) Found IP for machine: 192.168.50.174
	I0328 01:02:52.091841 1131323 main.go:141] libmachine: (old-k8s-version-986088) Reserving static IP address...
	I0328 01:02:52.092295 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "old-k8s-version-986088", mac: "52:54:00:f6:94:40", ip: "192.168.50.174"} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.092321 1131323 main.go:141] libmachine: (old-k8s-version-986088) Reserved static IP address: 192.168.50.174
	I0328 01:02:52.092343 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | skip adding static IP to network mk-old-k8s-version-986088 - found existing host DHCP lease matching {name: "old-k8s-version-986088", mac: "52:54:00:f6:94:40", ip: "192.168.50.174"}
	I0328 01:02:52.092356 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | Getting to WaitForSSH function...
	I0328 01:02:52.092373 1131323 main.go:141] libmachine: (old-k8s-version-986088) Waiting for SSH to be available...
	I0328 01:02:52.094682 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.095012 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.095033 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.095158 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | Using SSH client type: external
	I0328 01:02:52.095180 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa (-rw-------)
	I0328 01:02:52.095208 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:02:52.095218 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | About to run SSH command:
	I0328 01:02:52.095232 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | exit 0
	I0328 01:02:52.218494 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | SSH cmd err, output: <nil>: 
	I0328 01:02:52.218983 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetConfigRaw
	I0328 01:02:52.219663 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:52.222349 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.222791 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.222823 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.223191 1131323 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/config.json ...
	I0328 01:02:52.223388 1131323 machine.go:94] provisionDockerMachine start ...
	I0328 01:02:52.223409 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:52.223605 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.225686 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.225999 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.226038 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.226131 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.226341 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.226507 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.226633 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.226802 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.227078 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.227095 1131323 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:02:52.327218 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:02:52.327249 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 01:02:52.327515 1131323 buildroot.go:166] provisioning hostname "old-k8s-version-986088"
	I0328 01:02:52.327542 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 01:02:52.327754 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.330253 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.330661 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.330691 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.330827 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.331048 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.331258 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.331406 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.331593 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.331772 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.331783 1131323 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-986088 && echo "old-k8s-version-986088" | sudo tee /etc/hostname
	I0328 01:02:52.445910 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-986088
	
	I0328 01:02:52.445943 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.449023 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.449320 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.449358 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.449595 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.449810 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.449970 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.450116 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.450310 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.450572 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.450640 1131323 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-986088' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-986088/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-986088' | sudo tee -a /etc/hosts; 
				fi
			fi
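
The shell snippet above is how the provisioner guarantees the machine's hostname resolves locally: if a 127.0.1.1 entry already exists in /etc/hosts it is rewritten to point at the new name, otherwise one is appended. As a rough illustration of the same idea, here is a minimal standalone Go sketch; this is not minikube's own code, and the example file path and 0644 mode are assumptions made for the sake of a runnable example.

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry rewrites an existing "127.0.1.1 ..." line to point at
// hostname, or appends one if none exists, mirroring the shell snippet in
// the log above but operating on the file directly.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Already present on some line? Then nothing to do.
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
		return nil
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	var out string
	if re.Match(data) {
		out = re.ReplaceAllString(string(data), "127.0.1.1 "+hostname)
	} else {
		out = strings.TrimRight(string(data), "\n") + "\n127.0.1.1 " + hostname + "\n"
	}
	return os.WriteFile(path, []byte(out), 0644)
}

func main() {
	// Example path is hypothetical; the real provisioner edits /etc/hosts on the guest.
	if err := ensureHostsEntry("/tmp/hosts-example", "old-k8s-version-986088"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
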
	I0328 01:02:52.567493 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:02:52.567529 1131323 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:02:52.567559 1131323 buildroot.go:174] setting up certificates
	I0328 01:02:52.567573 1131323 provision.go:84] configureAuth start
	I0328 01:02:52.567587 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 01:02:52.567944 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:52.570860 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.571363 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.571400 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.571547 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.574052 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.574483 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.574517 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.574619 1131323 provision.go:143] copyHostCerts
	I0328 01:02:52.574698 1131323 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:02:52.574710 1131323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:02:52.574778 1131323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:02:52.574894 1131323 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:02:52.574908 1131323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:02:52.574985 1131323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:02:52.575086 1131323 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:02:52.575095 1131323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:02:52.575117 1131323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:02:52.575194 1131323 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-986088 san=[127.0.0.1 192.168.50.174 localhost minikube old-k8s-version-986088]
	I0328 01:02:52.688709 1131323 provision.go:177] copyRemoteCerts
	I0328 01:02:52.688776 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:02:52.688809 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.691529 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.691977 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.692024 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.692188 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.692425 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.692620 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.692774 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:52.777200 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0328 01:02:52.808740 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:02:52.836646 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:02:52.862627 1131323 provision.go:87] duration metric: took 295.032419ms to configureAuth
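
The configureAuth step above generates a fresh server certificate signed by the local minikube CA, covering the SANs listed in the log (127.0.0.1, 192.168.50.174, localhost, minikube, old-k8s-version-986088), and copies the resulting server.pem/server-key.pem plus ca.pem to /etc/docker on the guest. The following is a minimal, self-contained sketch of issuing such a SAN certificate with Go's crypto/x509. It is not minikube's implementation: it creates a throwaway CA in-process instead of loading ca.pem/ca-key.pem, and the validity periods are assumptions.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

// issueServerCert signs a server certificate with the given CA, covering the
// DNS names and IP addresses from the log's "san=[...]" field.
func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, dns []string, ips []net.IP) (certPEM, keyPEM []byte) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-986088"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // validity period is an assumption
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dns,
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	must(err)
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM
}

func main() {
	// Throwaway CA standing in for the minikubeCA ca.pem / ca-key.pem pair.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	must(err)
	caCert, err := x509.ParseCertificate(caDER)
	must(err)

	cert, key := issueServerCert(caCert, caKey,
		[]string{"localhost", "minikube", "old-k8s-version-986088"},
		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.174")})
	must(os.WriteFile("server.pem", cert, 0644))
	must(os.WriteFile("server-key.pem", key, 0600))
}
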
	I0328 01:02:52.862668 1131323 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:02:52.862908 1131323 config.go:182] Loaded profile config "old-k8s-version-986088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0328 01:02:52.863019 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.865838 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.866585 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.866630 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.867271 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.867521 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.867687 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.867826 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.867961 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.868176 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.868194 1131323 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:02:53.154903 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:02:53.154936 1131323 machine.go:97] duration metric: took 931.534047ms to provisionDockerMachine
	I0328 01:02:53.154949 1131323 start.go:293] postStartSetup for "old-k8s-version-986088" (driver="kvm2")
	I0328 01:02:53.154961 1131323 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:02:53.154997 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.155353 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:02:53.155386 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.158072 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.158448 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.158482 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.158612 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.158825 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.158974 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.159102 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:53.243411 1131323 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:02:53.247745 1131323 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:02:53.247769 1131323 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:02:53.247830 1131323 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:02:53.247903 1131323 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:02:53.247990 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:02:53.258574 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:02:53.284249 1131323 start.go:296] duration metric: took 129.2844ms for postStartSetup
	I0328 01:02:53.284300 1131323 fix.go:56] duration metric: took 20.532468979s for fixHost
	I0328 01:02:53.284324 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.287097 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.287505 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.287534 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.287642 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.287874 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.288039 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.288225 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.288439 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:53.288601 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:53.288612 1131323 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0328 01:02:53.391262 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587773.373998758
	
	I0328 01:02:53.391292 1131323 fix.go:216] guest clock: 1711587773.373998758
	I0328 01:02:53.391299 1131323 fix.go:229] Guest: 2024-03-28 01:02:53.373998758 +0000 UTC Remote: 2024-03-28 01:02:53.284304642 +0000 UTC m=+227.998260980 (delta=89.694116ms)
	I0328 01:02:53.391341 1131323 fix.go:200] guest clock delta is within tolerance: 89.694116ms
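
The fix.go lines above read the guest clock over SSH with `date +%s.%N`, compare it with the host's view of the time, and check that the delta (89.694116ms here) stays within tolerance. A minimal sketch of that kind of comparison follows; the parsing of the `%s.%N` output and the timestamp value are taken from the log, while the one-second tolerance is an assumption, not minikube's actual setting.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the `date +%s.%N` output shown in the log into a
// time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad/truncate the fraction to nanoseconds
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1711587773.373998758") // value copied from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // illustrative threshold only
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance %v\n", delta, tolerance)
	}
}
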
	I0328 01:02:53.391346 1131323 start.go:83] releasing machines lock for "old-k8s-version-986088", held for 20.639550927s
	I0328 01:02:53.391377 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.391728 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:53.394421 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.394780 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.394811 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.394932 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.395449 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.395729 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.395828 1131323 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:02:53.395883 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.395985 1131323 ssh_runner.go:195] Run: cat /version.json
	I0328 01:02:53.396014 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.398819 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399010 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399281 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.399320 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399451 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.399550 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.399620 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399640 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.399880 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.399902 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.400065 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.400081 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:53.400245 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.400445 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:53.514453 1131323 ssh_runner.go:195] Run: systemctl --version
	I0328 01:02:53.521123 1131323 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:02:53.678366 1131323 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:02:53.685402 1131323 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:02:53.685473 1131323 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:02:53.702781 1131323 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:02:53.702816 1131323 start.go:494] detecting cgroup driver to use...
	I0328 01:02:53.702900 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:02:53.720343 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:02:53.736749 1131323 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:02:53.736824 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:02:53.761087 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:02:53.779008 1131323 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:02:53.895064 1131323 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:02:54.060741 1131323 docker.go:233] disabling docker service ...
	I0328 01:02:54.060825 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:02:54.079139 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:02:54.093523 1131323 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:02:54.247544 1131323 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:02:54.396392 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:02:54.422612 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:02:54.443759 1131323 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0328 01:02:54.443817 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.459794 1131323 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:02:54.459875 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.472784 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.484963 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.496654 1131323 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:02:54.508382 1131323 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:02:54.518607 1131323 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:02:54.518687 1131323 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:02:54.532356 1131323 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:02:54.544424 1131323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:02:54.685782 1131323 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 01:02:54.847233 1131323 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:02:54.847314 1131323 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:02:54.853148 1131323 start.go:562] Will wait 60s for crictl version
	I0328 01:02:54.853248 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:02:54.857536 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:02:54.901937 1131323 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
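
After restarting CRI-O, the start path above waits up to 60s for the socket at /var/run/crio/crio.sock before probing crictl. Below is a minimal sketch of that kind of bounded wait; the 500ms poll interval is an assumption, and minikube's actual retry cadence may differ.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a path until it exists or the timeout expires, in
// the spirit of the log's "Will wait 60s for socket path" step.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // poll interval chosen for illustration
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}
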
	I0328 01:02:54.902082 1131323 ssh_runner.go:195] Run: crio --version
	I0328 01:02:54.935571 1131323 ssh_runner.go:195] Run: crio --version
	I0328 01:02:54.971452 1131323 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0328 01:02:54.972964 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:54.976523 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:54.976985 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:54.977017 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:54.977369 1131323 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0328 01:02:54.982326 1131323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:02:54.996239 1131323 kubeadm.go:877] updating cluster {Name:old-k8s-version-986088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:02:54.996371 1131323 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0328 01:02:54.996433 1131323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:02:55.045404 1131323 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0328 01:02:55.045483 1131323 ssh_runner.go:195] Run: which lz4
	I0328 01:02:55.050226 1131323 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0328 01:02:55.055182 1131323 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 01:02:55.055221 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0328 01:02:56.994163 1131323 crio.go:462] duration metric: took 1.943992561s to copy over tarball
	I0328 01:02:56.994252 1131323 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 01:03:00.215115 1131323 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.220825311s)
	I0328 01:03:00.215159 1131323 crio.go:469] duration metric: took 3.22095583s to extract the tarball
	I0328 01:03:00.215171 1131323 ssh_runner.go:146] rm: /preloaded.tar.lz4
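
The preload step above copies the ~473MB image tarball to the guest and unpacks it with lz4-compressed tar, logging a "duration metric" for the copy and the extraction. Here is a small sketch of timing that same command with os/exec; running it locally rather than through ssh_runner is an assumption made so the example is standalone.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Command and paths taken from the log's extraction step.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("extract failed: %v\n%s\n", err, out)
		return
	}
	fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
}
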
	I0328 01:03:00.259151 1131323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:00.298446 1131323 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0328 01:03:00.298492 1131323 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0328 01:03:00.298585 1131323 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:00.298601 1131323 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.298613 1131323 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.298644 1131323 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.298662 1131323 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.298698 1131323 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0328 01:03:00.298593 1131323 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.298585 1131323 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.300347 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.300424 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.300470 1131323 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.300474 1131323 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.300637 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.300652 1131323 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0328 01:03:00.300723 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.300793 1131323 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:00.499788 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0328 01:03:00.539135 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.541462 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.544184 1131323 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0328 01:03:00.544227 1131323 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0328 01:03:00.544261 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.555720 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.560189 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.562639 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.574105 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.681717 1131323 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0328 01:03:00.681742 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0328 01:03:00.681765 1131323 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.681803 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.682033 1131323 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0328 01:03:00.682076 1131323 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.682115 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.732868 1131323 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0328 01:03:00.732922 1131323 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.732988 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.742680 1131323 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0328 01:03:00.742730 1131323 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0328 01:03:00.742762 1131323 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.742777 1131323 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0328 01:03:00.742805 1131323 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.742770 1131323 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.742817 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.742851 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.742865 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.770435 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.770472 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0328 01:03:00.770567 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.770588 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.770727 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.770760 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.770728 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.882338 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0328 01:03:00.896602 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0328 01:03:00.918814 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0328 01:03:00.918869 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0328 01:03:00.918919 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0328 01:03:00.918968 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0328 01:03:01.186124 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:01.334547 1131323 cache_images.go:92] duration metric: took 1.036031169s to LoadCachedImages
	W0328 01:03:01.334676 1131323 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0328 01:03:01.334694 1131323 kubeadm.go:928] updating node { 192.168.50.174 8443 v1.20.0 crio true true} ...
	I0328 01:03:01.334827 1131323 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-986088 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:03:01.334926 1131323 ssh_runner.go:195] Run: crio config
	I0328 01:03:01.391004 1131323 cni.go:84] Creating CNI manager for ""
	I0328 01:03:01.391034 1131323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:01.391054 1131323 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:03:01.391081 1131323 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.174 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-986088 NodeName:old-k8s-version-986088 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0328 01:03:01.391265 1131323 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-986088"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 01:03:01.391347 1131323 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0328 01:03:01.403684 1131323 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:03:01.403779 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:03:01.415168 1131323 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0328 01:03:01.434329 1131323 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:03:01.456280 1131323 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0328 01:03:01.476625 1131323 ssh_runner.go:195] Run: grep 192.168.50.174	control-plane.minikube.internal$ /etc/hosts
	I0328 01:03:01.480867 1131323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:01.493833 1131323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:01.642273 1131323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:03:01.661857 1131323 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088 for IP: 192.168.50.174
	I0328 01:03:01.661887 1131323 certs.go:194] generating shared ca certs ...
	I0328 01:03:01.661909 1131323 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:01.662115 1131323 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:03:01.662174 1131323 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:03:01.662188 1131323 certs.go:256] generating profile certs ...
	I0328 01:03:01.662324 1131323 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/client.key
	I0328 01:03:01.662399 1131323 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.key.b88fbc7e
	I0328 01:03:01.662447 1131323 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.key
	I0328 01:03:01.662600 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:03:01.662656 1131323 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:03:01.662672 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:03:01.662703 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:03:01.662738 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:03:01.662774 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:03:01.662826 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:01.663831 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:03:01.697171 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:03:01.742118 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:03:01.783263 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:03:01.831682 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0328 01:03:01.878051 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:03:01.915626 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:03:01.942247 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:03:01.969054 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:03:01.998651 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:03:02.024881 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:03:02.051284 1131323 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:03:02.070414 1131323 ssh_runner.go:195] Run: openssl version
	I0328 01:03:02.076635 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:03:02.089288 1131323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:03:02.094260 1131323 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:03:02.094322 1131323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:03:02.100846 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:03:02.114474 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:03:02.126467 1131323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:02.131240 1131323 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:02.131293 1131323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:02.137496 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 01:03:02.150863 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:03:02.163536 1131323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:03:02.168767 1131323 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:03:02.168850 1131323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:03:02.175218 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
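
The sequence above installs each CA file under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients find trusted roots. A minimal local sketch of the same hash-and-symlink idea follows; it shells out to openssl exactly as the log does, assumes it runs with enough privileges to write /etc/ssl/certs, and is not the exec_runner/ssh_runner code from the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash reproduces the "openssl x509 -hash -noout" plus "ln -fs" pair
// from the log: the certificate becomes reachable via <subject-hash>.0.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f behaviour: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
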
	I0328 01:03:02.188272 1131323 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:03:02.193348 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:03:02.199969 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:03:02.206424 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:03:02.213530 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:03:02.220136 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:03:02.226502 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
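Each existing control-plane certificate is then checked with openssl's -checkend flag, which exits non-zero if the certificate expires within the given number of seconds (86400, i.e. 24 hours). A small sketch of the same check on one of the files from the log:

    # A non-zero exit means the certificate expires within the next 24h (or is unreadable).
    cert=/var/lib/minikube/certs/apiserver-kubelet-client.crt
    if openssl x509 -noout -in "$cert" -checkend 86400; then
        echo "certificate still valid for at least 24h"
    else
        echo "certificate expiring or invalid; needs regeneration"
    fi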
	I0328 01:03:02.232708 1131323 kubeadm.go:391] StartCluster: {Name:old-k8s-version-986088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:03:02.232831 1131323 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:03:02.232890 1131323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:02.280062 1131323 cri.go:89] found id: ""
	I0328 01:03:02.280160 1131323 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:03:02.291968 1131323 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:03:02.292003 1131323 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:03:02.292011 1131323 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:03:02.292072 1131323 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:03:02.304006 1131323 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:03:02.305105 1131323 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-986088" does not appear in /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:03:02.305785 1131323 kubeconfig.go:62] /home/jenkins/minikube-integration/18485-1069254/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-986088" cluster setting kubeconfig missing "old-k8s-version-986088" context setting]
	I0328 01:03:02.306728 1131323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:02.308610 1131323 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:03:02.320212 1131323 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.174
	I0328 01:03:02.320265 1131323 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:03:02.320283 1131323 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:03:02.320356 1131323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:02.366411 1131323 cri.go:89] found id: ""
	I0328 01:03:02.366500 1131323 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:03:02.388351 1131323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:03:02.402621 1131323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:03:02.402652 1131323 kubeadm.go:156] found existing configuration files:
	
	I0328 01:03:02.402718 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:03:02.415559 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:03:02.415633 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:03:02.426666 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:03:02.439497 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:03:02.439558 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:03:02.451040 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:03:02.461780 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:03:02.461876 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:03:02.473295 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:03:02.484762 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:03:02.484841 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:03:02.496304 1131323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:03:02.507634 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:02.641980 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:03.598106 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:03.840026 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:03.970336 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
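restartPrimaryControlPlane rebuilds the control plane piecewise with individual "kubeadm init phase" invocations rather than a full "kubeadm init". Condensed from the five commands above, with the same binary path and config file (a sketch of the sequence shown, not additional steps):

    cfg=/var/tmp/minikube/kubeadm.yaml
    kbin=/var/lib/minikube/binaries/v1.20.0
    sudo env PATH="$kbin:$PATH" kubeadm init phase certs all         --config "$cfg"  # serving/client certificates
    sudo env PATH="$kbin:$PATH" kubeadm init phase kubeconfig all    --config "$cfg"  # admin/kubelet/cm/scheduler kubeconfigs
    sudo env PATH="$kbin:$PATH" kubeadm init phase kubelet-start     --config "$cfg"  # kubelet config, then (re)start it
    sudo env PATH="$kbin:$PATH" kubeadm init phase control-plane all --config "$cfg"  # static pod manifests
    sudo env PATH="$kbin:$PATH" kubeadm init phase etcd local        --config "$cfg"  # local etcd manifest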
	I0328 01:03:04.067774 1131323 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:03:04.067911 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:04.568260 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:05.068794 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:05.568716 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:06.068362 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:06.568235 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:07.068696 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:07.567976 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:08.068032 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:08.568586 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:09.068046 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:09.568699 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:10.067967 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:10.568240 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:11.068028 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:11.568146 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:12.068467 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:12.568820 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:13.068031 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:13.568977 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:14.068050 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:14.567938 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:15.068711 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:15.568507 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:16.068210 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:16.568761 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:17.067929 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:17.568403 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:18.068454 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:18.568086 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:19.068049 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:19.569020 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:20.068068 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:20.568464 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:21.068983 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:21.568470 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:22.068772 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:22.568940 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:23.068907 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:23.568272 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.068055 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.568056 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:25.068006 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:25.568927 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:26.068371 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:26.568107 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:27.068037 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:27.567985 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:28.068036 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:28.568843 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:29.068483 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:29.568942 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:30.068849 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:30.568918 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:31.068097 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:31.568306 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:32.068345 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:32.568773 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:33.068072 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:33.568377 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:34.068141 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:34.568574 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:35.067986 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:35.568345 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:36.068227 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:36.568528 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:37.068834 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:37.568407 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:38.068142 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:38.568732 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:39.068094 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:39.568799 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:40.068973 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:40.568441 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:41.068790 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:41.568919 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:42.068166 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:42.568012 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:43.068027 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:43.568916 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:44.067940 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:44.568074 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:45.068786 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:45.568662 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:46.068299 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:46.568793 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:47.068929 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:47.568250 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:48.068910 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:48.568138 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:49.068128 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:49.568153 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:50.068075 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:50.568929 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:51.068812 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:51.568899 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:52.068890 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:52.568751 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:53.068406 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:53.568466 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.068039 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.568745 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:55.068690 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:55.568378 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:56.068253 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:56.568989 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:57.068709 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:57.569038 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:58.068236 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:58.568386 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:59.068971 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:59.568858 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:00.067964 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:00.568622 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:01.067943 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:01.567964 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:02.068537 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:02.568772 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:03.068458 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:03.568943 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:04.068085 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:04.068176 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:04.112601 1131323 cri.go:89] found id: ""
	I0328 01:04:04.112631 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.112642 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:04.112650 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:04.112726 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:04.151837 1131323 cri.go:89] found id: ""
	I0328 01:04:04.151873 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.151885 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:04.151894 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:04.151965 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:04.193411 1131323 cri.go:89] found id: ""
	I0328 01:04:04.193451 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.193463 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:04.193473 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:04.193545 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:04.239623 1131323 cri.go:89] found id: ""
	I0328 01:04:04.239652 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.239662 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:04.239673 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:04.239732 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:04.279561 1131323 cri.go:89] found id: ""
	I0328 01:04:04.279600 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.279615 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:04.279627 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:04.279708 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:04.318680 1131323 cri.go:89] found id: ""
	I0328 01:04:04.318710 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.318722 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:04.318731 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:04.318797 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:04.356486 1131323 cri.go:89] found id: ""
	I0328 01:04:04.356514 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.356523 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:04.356530 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:04.356586 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:04.394281 1131323 cri.go:89] found id: ""
	I0328 01:04:04.394319 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.394334 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:04.394348 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:04.394364 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:04.458688 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:04.458729 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:04.501399 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:04.501440 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:04.556183 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:04.556225 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:04.571392 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:04.571427 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:04.709967 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
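With no control-plane containers to inspect, each retry cycle gathers the same host-level diagnostics: the CRI-O and kubelet journals, dmesg, container status, and a "describe nodes" call that keeps failing while the apiserver on localhost:8443 is down. The commands, as they appear in the log:

    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig   # refused while the apiserver is down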
	I0328 01:04:07.210550 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:07.224274 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:07.224345 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:07.262604 1131323 cri.go:89] found id: ""
	I0328 01:04:07.262640 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.262665 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:07.262674 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:07.262763 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:07.296868 1131323 cri.go:89] found id: ""
	I0328 01:04:07.296907 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.296918 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:07.296926 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:07.296992 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:07.333110 1131323 cri.go:89] found id: ""
	I0328 01:04:07.333149 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.333162 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:07.333171 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:07.333240 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:07.371138 1131323 cri.go:89] found id: ""
	I0328 01:04:07.371168 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.371186 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:07.371195 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:07.371259 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:07.412197 1131323 cri.go:89] found id: ""
	I0328 01:04:07.412230 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.412242 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:07.412251 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:07.412331 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:07.457021 1131323 cri.go:89] found id: ""
	I0328 01:04:07.457052 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.457070 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:07.457080 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:07.457153 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:07.517996 1131323 cri.go:89] found id: ""
	I0328 01:04:07.518026 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.518034 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:07.518040 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:07.518111 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:07.556829 1131323 cri.go:89] found id: ""
	I0328 01:04:07.556856 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.556865 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:07.556875 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:07.556890 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:07.572234 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:07.572270 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:07.648615 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:07.648641 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:07.648658 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:07.719617 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:07.719665 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:07.764053 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:07.764097 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:10.319480 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:10.334347 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:10.335893 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:10.375231 1131323 cri.go:89] found id: ""
	I0328 01:04:10.375263 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.375274 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:10.375281 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:10.375353 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:10.413652 1131323 cri.go:89] found id: ""
	I0328 01:04:10.413706 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.413726 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:10.413736 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:10.413805 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:10.449546 1131323 cri.go:89] found id: ""
	I0328 01:04:10.449588 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.449597 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:10.449604 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:10.449658 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:10.487518 1131323 cri.go:89] found id: ""
	I0328 01:04:10.487556 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.487570 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:10.487579 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:10.487663 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:10.525088 1131323 cri.go:89] found id: ""
	I0328 01:04:10.525124 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.525137 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:10.525146 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:10.525213 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:10.567177 1131323 cri.go:89] found id: ""
	I0328 01:04:10.567209 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.567221 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:10.567231 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:10.567302 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:10.609440 1131323 cri.go:89] found id: ""
	I0328 01:04:10.609474 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.609485 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:10.609492 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:10.609549 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:10.652466 1131323 cri.go:89] found id: ""
	I0328 01:04:10.652502 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.652516 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:10.652529 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:10.652546 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:10.737406 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:10.737451 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:10.786955 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:10.786991 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:10.843072 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:10.843114 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:10.857209 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:10.857244 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:10.950885 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:13.451542 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:13.465833 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:13.465924 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:13.503353 1131323 cri.go:89] found id: ""
	I0328 01:04:13.503386 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.503398 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:13.503407 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:13.503474 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:13.543175 1131323 cri.go:89] found id: ""
	I0328 01:04:13.543208 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.543220 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:13.543229 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:13.543287 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:13.580796 1131323 cri.go:89] found id: ""
	I0328 01:04:13.580829 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.580840 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:13.580848 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:13.580900 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:13.619483 1131323 cri.go:89] found id: ""
	I0328 01:04:13.619516 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.619529 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:13.619539 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:13.619596 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:13.654651 1131323 cri.go:89] found id: ""
	I0328 01:04:13.654683 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.654697 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:13.654705 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:13.654774 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:13.691763 1131323 cri.go:89] found id: ""
	I0328 01:04:13.691794 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.691805 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:13.691813 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:13.691881 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:13.730580 1131323 cri.go:89] found id: ""
	I0328 01:04:13.730614 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.730627 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:13.730635 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:13.730694 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:13.767802 1131323 cri.go:89] found id: ""
	I0328 01:04:13.767834 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.767848 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:13.767860 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:13.767876 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:13.815612 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:13.815653 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:13.870945 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:13.870991 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:13.891456 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:13.891506 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:14.022124 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:14.022163 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:14.022187 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:16.604087 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:16.618872 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:16.618971 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:16.665628 1131323 cri.go:89] found id: ""
	I0328 01:04:16.665661 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.665675 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:16.665683 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:16.665780 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:16.703727 1131323 cri.go:89] found id: ""
	I0328 01:04:16.703758 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.703768 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:16.703775 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:16.703835 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:16.741425 1131323 cri.go:89] found id: ""
	I0328 01:04:16.741455 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.741464 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:16.741470 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:16.741524 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:16.782333 1131323 cri.go:89] found id: ""
	I0328 01:04:16.782373 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.782387 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:16.782398 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:16.782469 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:16.820321 1131323 cri.go:89] found id: ""
	I0328 01:04:16.820355 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.820364 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:16.820372 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:16.820429 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:16.861091 1131323 cri.go:89] found id: ""
	I0328 01:04:16.861130 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.861144 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:16.861154 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:16.861226 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:16.901347 1131323 cri.go:89] found id: ""
	I0328 01:04:16.901394 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.901408 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:16.901418 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:16.901491 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:16.944027 1131323 cri.go:89] found id: ""
	I0328 01:04:16.944067 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.944080 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:16.944093 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:16.944110 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:16.959104 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:16.959151 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:17.035432 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:17.035464 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:17.035480 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:17.116236 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:17.116276 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:17.159321 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:17.159370 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:19.711326 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:19.726016 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:19.726094 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:19.776639 1131323 cri.go:89] found id: ""
	I0328 01:04:19.776676 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.776690 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:19.776700 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:19.776782 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:19.817849 1131323 cri.go:89] found id: ""
	I0328 01:04:19.817887 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.817897 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:19.817904 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:19.817981 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:19.855055 1131323 cri.go:89] found id: ""
	I0328 01:04:19.855089 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.855102 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:19.855110 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:19.855177 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:19.895296 1131323 cri.go:89] found id: ""
	I0328 01:04:19.895332 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.895346 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:19.895354 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:19.895414 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:19.930936 1131323 cri.go:89] found id: ""
	I0328 01:04:19.930968 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.930980 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:19.930989 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:19.931067 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:19.968573 1131323 cri.go:89] found id: ""
	I0328 01:04:19.968610 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.968623 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:19.968632 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:19.968693 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:20.006130 1131323 cri.go:89] found id: ""
	I0328 01:04:20.006180 1131323 logs.go:276] 0 containers: []
	W0328 01:04:20.006195 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:20.006203 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:20.006304 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:20.043646 1131323 cri.go:89] found id: ""
	I0328 01:04:20.043678 1131323 logs.go:276] 0 containers: []
	W0328 01:04:20.043689 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:20.043701 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:20.043717 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:20.058728 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:20.058761 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:20.136392 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:20.136417 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:20.136431 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:20.214971 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:20.215015 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:20.255002 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:20.255047 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:22.810078 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:22.824083 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:22.824169 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:22.862037 1131323 cri.go:89] found id: ""
	I0328 01:04:22.862066 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.862074 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:22.862081 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:22.862141 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:22.901625 1131323 cri.go:89] found id: ""
	I0328 01:04:22.901658 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.901670 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:22.901679 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:22.901752 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:22.938858 1131323 cri.go:89] found id: ""
	I0328 01:04:22.938891 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.938903 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:22.938912 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:22.938983 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:22.978781 1131323 cri.go:89] found id: ""
	I0328 01:04:22.978818 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.978829 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:22.978837 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:22.978910 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:23.016844 1131323 cri.go:89] found id: ""
	I0328 01:04:23.016882 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.016895 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:23.016904 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:23.016975 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:23.058456 1131323 cri.go:89] found id: ""
	I0328 01:04:23.058508 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.058522 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:23.058531 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:23.058604 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:23.099368 1131323 cri.go:89] found id: ""
	I0328 01:04:23.099399 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.099408 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:23.099420 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:23.099492 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:23.135593 1131323 cri.go:89] found id: ""
	I0328 01:04:23.135634 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.135653 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:23.135665 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:23.135679 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:23.191215 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:23.191260 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:23.206849 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:23.206884 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:23.289566 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:23.289596 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:23.289618 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:23.365429 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:23.365480 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
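Each cycle above probes for a running kube-apiserver process and then enumerates the expected control-plane containers by name; every probe returns an empty id list, so the run falls through to log gathering. The sketch below is reconstructed only from the commands visible in this log; the loop structure and the roughly 3-second cadence are assumptions, not minikube's actual retry code.

    # Reconstructed probe loop (assumed structure; commands taken from the log above)
    components="kube-apiserver etcd coredns kube-scheduler kube-proxy \
    kube-controller-manager kindnet kubernetes-dashboard"

    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      for name in $components; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        # An empty result corresponds to the "No container was found matching" warnings
        [ -z "$ids" ] && echo "No container was found matching \"$name\""
      done
      sleep 3
    done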
	I0328 01:04:25.914883 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:25.929336 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:25.929415 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:25.969452 1131323 cri.go:89] found id: ""
	I0328 01:04:25.969485 1131323 logs.go:276] 0 containers: []
	W0328 01:04:25.969497 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:25.969506 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:25.969573 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:26.008978 1131323 cri.go:89] found id: ""
	I0328 01:04:26.009006 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.009015 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:26.009022 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:26.009075 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:26.051110 1131323 cri.go:89] found id: ""
	I0328 01:04:26.051138 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.051146 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:26.051153 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:26.051213 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:26.088231 1131323 cri.go:89] found id: ""
	I0328 01:04:26.088262 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.088271 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:26.088277 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:26.088342 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:26.125741 1131323 cri.go:89] found id: ""
	I0328 01:04:26.125782 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.125794 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:26.125800 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:26.125867 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:26.163367 1131323 cri.go:89] found id: ""
	I0328 01:04:26.163406 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.163417 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:26.163426 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:26.163503 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:26.202302 1131323 cri.go:89] found id: ""
	I0328 01:04:26.202340 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.202355 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:26.202364 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:26.202422 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:26.240880 1131323 cri.go:89] found id: ""
	I0328 01:04:26.240911 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.240921 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:26.240931 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:26.240943 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:26.283151 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:26.283180 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:26.341313 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:26.341350 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:26.356762 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:26.356791 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:26.428033 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:26.428054 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:26.428066 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:29.006332 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:29.020634 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:29.020745 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:29.060812 1131323 cri.go:89] found id: ""
	I0328 01:04:29.060843 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.060852 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:29.060859 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:29.060916 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:29.100110 1131323 cri.go:89] found id: ""
	I0328 01:04:29.100139 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.100149 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:29.100155 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:29.100212 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:29.140345 1131323 cri.go:89] found id: ""
	I0328 01:04:29.140384 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.140396 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:29.140404 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:29.140479 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:29.182415 1131323 cri.go:89] found id: ""
	I0328 01:04:29.182449 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.182459 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:29.182465 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:29.182533 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:29.225177 1131323 cri.go:89] found id: ""
	I0328 01:04:29.225214 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.225225 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:29.225233 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:29.225310 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:29.265437 1131323 cri.go:89] found id: ""
	I0328 01:04:29.265471 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.265485 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:29.265493 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:29.265556 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:29.301578 1131323 cri.go:89] found id: ""
	I0328 01:04:29.301617 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.301630 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:29.301639 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:29.301719 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:29.340816 1131323 cri.go:89] found id: ""
	I0328 01:04:29.340847 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.340856 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:29.340867 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:29.340880 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:29.384658 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:29.384687 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:29.439243 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:29.439285 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:29.456179 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:29.456211 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:29.534878 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:29.534906 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:29.534927 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
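The "Gathering logs for ..." steps run a fixed set of shell commands per diagnostic source: kubelet and CRI-O via journalctl, the kernel ring buffer via dmesg, node state via the bundled kubectl, and container status via crictl with a docker fallback. The commands below are copied from the log; wrapping them in a single helper function is only an illustration of that gathering step, not minikube's implementation.

    # Diagnostic gathering step (commands as logged; grouping is illustrative)
    gather_diagnostics() {
      sudo journalctl -u kubelet -n 400
      sudo journalctl -u crio -n 400
      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
      sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
      sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
    }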
	I0328 01:04:32.115798 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:32.130464 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:32.130560 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:32.168846 1131323 cri.go:89] found id: ""
	I0328 01:04:32.168877 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.168887 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:32.168894 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:32.168952 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:32.208590 1131323 cri.go:89] found id: ""
	I0328 01:04:32.208622 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.208632 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:32.208638 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:32.208694 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:32.247323 1131323 cri.go:89] found id: ""
	I0328 01:04:32.247362 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.247375 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:32.247384 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:32.247507 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:32.285260 1131323 cri.go:89] found id: ""
	I0328 01:04:32.285293 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.285312 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:32.285319 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:32.285395 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:32.326678 1131323 cri.go:89] found id: ""
	I0328 01:04:32.326712 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.326725 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:32.326740 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:32.326823 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:32.363375 1131323 cri.go:89] found id: ""
	I0328 01:04:32.363403 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.363412 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:32.363419 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:32.363473 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:32.401410 1131323 cri.go:89] found id: ""
	I0328 01:04:32.401449 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.401462 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:32.401470 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:32.401558 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:32.438645 1131323 cri.go:89] found id: ""
	I0328 01:04:32.438680 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.438691 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:32.438703 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:32.438718 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:32.488743 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:32.488786 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:32.503908 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:32.503944 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:32.577307 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:32.577333 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:32.577350 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:32.657787 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:32.657832 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:35.201151 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:35.215313 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:35.215383 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:35.253467 1131323 cri.go:89] found id: ""
	I0328 01:04:35.253504 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.253515 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:35.253522 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:35.253593 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:35.290218 1131323 cri.go:89] found id: ""
	I0328 01:04:35.290280 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.290292 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:35.290300 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:35.290378 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:35.330714 1131323 cri.go:89] found id: ""
	I0328 01:04:35.330749 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.330757 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:35.330764 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:35.330831 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:35.371524 1131323 cri.go:89] found id: ""
	I0328 01:04:35.371553 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.371570 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:35.371577 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:35.371630 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:35.411610 1131323 cri.go:89] found id: ""
	I0328 01:04:35.411638 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.411646 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:35.411652 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:35.411711 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:35.456709 1131323 cri.go:89] found id: ""
	I0328 01:04:35.456745 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.456758 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:35.456766 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:35.456836 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:35.492688 1131323 cri.go:89] found id: ""
	I0328 01:04:35.492719 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.492729 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:35.492755 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:35.492811 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:35.531205 1131323 cri.go:89] found id: ""
	I0328 01:04:35.531234 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.531243 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:35.531254 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:35.531266 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:35.611803 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:35.611845 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:35.653513 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:35.653551 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:35.708030 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:35.708075 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:35.724542 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:35.724576 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:35.798624 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
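Every "describe nodes" attempt fails the same way: the bundled kubectl targets the local API endpoint on localhost:8443 and gets connection refused, which is consistent with the earlier probes finding no kube-apiserver container at all. The follow-up checks below are a hypothetical way to confirm that on the node; they are not part of the test run and do not appear in this log.

    # Hypothetical manual checks (not executed by the test): confirm nothing
    # serves the API port and no apiserver container exists on the node.
    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    sudo crictl ps -a --name=kube-apiserver
    curl -sk https://localhost:8443/healthz || echo "apiserver unreachable"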
	I0328 01:04:38.299312 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:38.314128 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:38.314213 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:38.357728 1131323 cri.go:89] found id: ""
	I0328 01:04:38.357761 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.357779 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:38.357786 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:38.357848 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:38.394512 1131323 cri.go:89] found id: ""
	I0328 01:04:38.394541 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.394549 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:38.394558 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:38.394618 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:38.434353 1131323 cri.go:89] found id: ""
	I0328 01:04:38.434380 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.434391 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:38.434399 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:38.434466 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:38.477662 1131323 cri.go:89] found id: ""
	I0328 01:04:38.477693 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.477703 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:38.477710 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:38.477763 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:38.515014 1131323 cri.go:89] found id: ""
	I0328 01:04:38.515044 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.515053 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:38.515060 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:38.515117 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:38.558865 1131323 cri.go:89] found id: ""
	I0328 01:04:38.558899 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.558911 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:38.558920 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:38.558982 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:38.600261 1131323 cri.go:89] found id: ""
	I0328 01:04:38.600290 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.600299 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:38.600306 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:38.600366 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:38.637131 1131323 cri.go:89] found id: ""
	I0328 01:04:38.637167 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.637179 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:38.637194 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:38.637218 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:38.716032 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:38.716058 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:38.716079 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:38.804534 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:38.804578 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:38.851781 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:38.851820 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:38.910091 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:38.910125 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:41.425801 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:41.441072 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:41.441168 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:41.482934 1131323 cri.go:89] found id: ""
	I0328 01:04:41.482962 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.482974 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:41.482983 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:41.483063 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:41.521762 1131323 cri.go:89] found id: ""
	I0328 01:04:41.521796 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.521810 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:41.521819 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:41.521931 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:41.560814 1131323 cri.go:89] found id: ""
	I0328 01:04:41.560847 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.560857 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:41.560864 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:41.560928 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:41.601158 1131323 cri.go:89] found id: ""
	I0328 01:04:41.601189 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.601199 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:41.601206 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:41.601271 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:41.638760 1131323 cri.go:89] found id: ""
	I0328 01:04:41.638789 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.638799 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:41.638806 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:41.638861 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:41.675235 1131323 cri.go:89] found id: ""
	I0328 01:04:41.675268 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.675278 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:41.675285 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:41.675341 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:41.712918 1131323 cri.go:89] found id: ""
	I0328 01:04:41.712957 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.712972 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:41.712983 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:41.713078 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:41.750552 1131323 cri.go:89] found id: ""
	I0328 01:04:41.750582 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.750591 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:41.750601 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:41.750617 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:41.811163 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:41.811204 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:41.826502 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:41.826547 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:41.900727 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:41.900759 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:41.900777 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:41.981731 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:41.981783 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:44.525845 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:44.542301 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:44.542389 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:44.584907 1131323 cri.go:89] found id: ""
	I0328 01:04:44.584936 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.584945 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:44.584952 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:44.585007 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:44.630465 1131323 cri.go:89] found id: ""
	I0328 01:04:44.630499 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.630511 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:44.630520 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:44.630588 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:44.669095 1131323 cri.go:89] found id: ""
	I0328 01:04:44.669131 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.669143 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:44.669152 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:44.669235 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:44.708445 1131323 cri.go:89] found id: ""
	I0328 01:04:44.708484 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.708495 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:44.708502 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:44.708570 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:44.747706 1131323 cri.go:89] found id: ""
	I0328 01:04:44.747744 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.747755 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:44.747762 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:44.747822 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:44.787768 1131323 cri.go:89] found id: ""
	I0328 01:04:44.787807 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.787821 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:44.787830 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:44.787899 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:44.829018 1131323 cri.go:89] found id: ""
	I0328 01:04:44.829049 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.829059 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:44.829066 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:44.829123 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:44.874334 1131323 cri.go:89] found id: ""
	I0328 01:04:44.874374 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.874383 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:44.874393 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:44.874405 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:44.921577 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:44.921619 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:44.976660 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:44.976713 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:44.991365 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:44.991400 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:45.067595 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:45.067630 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:45.067651 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:47.647634 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:47.663581 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:47.663687 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:47.702889 1131323 cri.go:89] found id: ""
	I0328 01:04:47.702940 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.702954 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:47.702966 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:47.703043 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:47.744995 1131323 cri.go:89] found id: ""
	I0328 01:04:47.745027 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.745037 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:47.745044 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:47.745103 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:47.785518 1131323 cri.go:89] found id: ""
	I0328 01:04:47.785550 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.785562 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:47.785572 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:47.785645 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:47.831739 1131323 cri.go:89] found id: ""
	I0328 01:04:47.831771 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.831786 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:47.831794 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:47.831867 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:47.871864 1131323 cri.go:89] found id: ""
	I0328 01:04:47.871906 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.871918 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:47.871929 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:47.872008 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:47.907899 1131323 cri.go:89] found id: ""
	I0328 01:04:47.907934 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.907946 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:47.907955 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:47.908022 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:47.946073 1131323 cri.go:89] found id: ""
	I0328 01:04:47.946107 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.946118 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:47.946127 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:47.946223 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:47.986122 1131323 cri.go:89] found id: ""
	I0328 01:04:47.986154 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.986168 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:47.986182 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:47.986198 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:48.057234 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:48.057271 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:48.109881 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:48.109926 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:48.125154 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:48.125189 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:48.208295 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:48.208327 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:48.208345 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:50.785126 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:50.800000 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:50.800078 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:50.839883 1131323 cri.go:89] found id: ""
	I0328 01:04:50.839911 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.839920 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:50.839927 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:50.839983 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:50.879627 1131323 cri.go:89] found id: ""
	I0328 01:04:50.879654 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.879661 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:50.879668 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:50.879734 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:50.918392 1131323 cri.go:89] found id: ""
	I0328 01:04:50.918434 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.918446 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:50.918454 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:50.918517 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:50.957198 1131323 cri.go:89] found id: ""
	I0328 01:04:50.957234 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.957248 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:50.957257 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:50.957328 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:50.997389 1131323 cri.go:89] found id: ""
	I0328 01:04:50.997424 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.997438 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:50.997446 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:50.997513 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:51.040259 1131323 cri.go:89] found id: ""
	I0328 01:04:51.040296 1131323 logs.go:276] 0 containers: []
	W0328 01:04:51.040309 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:51.040318 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:51.040389 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:51.081824 1131323 cri.go:89] found id: ""
	I0328 01:04:51.081858 1131323 logs.go:276] 0 containers: []
	W0328 01:04:51.081868 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:51.081875 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:51.081942 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:51.119742 1131323 cri.go:89] found id: ""
	I0328 01:04:51.119783 1131323 logs.go:276] 0 containers: []
	W0328 01:04:51.119796 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:51.119810 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:51.119836 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:51.173486 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:51.173529 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:51.188532 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:51.188568 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:51.269181 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:51.269207 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:51.269226 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:51.349882 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:51.349936 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:53.893562 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:53.910104 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:53.910186 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:53.951333 1131323 cri.go:89] found id: ""
	I0328 01:04:53.951375 1131323 logs.go:276] 0 containers: []
	W0328 01:04:53.951388 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:53.951397 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:53.951472 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:53.992438 1131323 cri.go:89] found id: ""
	I0328 01:04:53.992474 1131323 logs.go:276] 0 containers: []
	W0328 01:04:53.992486 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:53.992493 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:53.992561 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:54.032934 1131323 cri.go:89] found id: ""
	I0328 01:04:54.032969 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.032982 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:54.032992 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:54.033061 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:54.074670 1131323 cri.go:89] found id: ""
	I0328 01:04:54.074707 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.074777 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:54.074801 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:54.074875 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:54.111527 1131323 cri.go:89] found id: ""
	I0328 01:04:54.111555 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.111566 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:54.111573 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:54.111658 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:54.151401 1131323 cri.go:89] found id: ""
	I0328 01:04:54.151428 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.151437 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:54.151443 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:54.151494 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:54.197997 1131323 cri.go:89] found id: ""
	I0328 01:04:54.198036 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.198048 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:54.198058 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:54.198135 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:54.234016 1131323 cri.go:89] found id: ""
	I0328 01:04:54.234049 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.234058 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:54.234068 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:54.234081 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:54.286118 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:54.286161 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:54.300489 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:54.300541 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:54.376949 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:54.376972 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:54.376988 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:54.463857 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:54.463901 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:57.026395 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:57.041270 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:57.041358 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:57.082380 1131323 cri.go:89] found id: ""
	I0328 01:04:57.082416 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.082428 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:57.082436 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:57.082503 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:57.121835 1131323 cri.go:89] found id: ""
	I0328 01:04:57.121870 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.121885 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:57.121894 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:57.121969 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:57.163688 1131323 cri.go:89] found id: ""
	I0328 01:04:57.163725 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.163737 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:57.163745 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:57.163819 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:57.212628 1131323 cri.go:89] found id: ""
	I0328 01:04:57.212666 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.212693 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:57.212703 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:57.212788 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:57.249196 1131323 cri.go:89] found id: ""
	I0328 01:04:57.249231 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.249244 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:57.249253 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:57.249318 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:57.286996 1131323 cri.go:89] found id: ""
	I0328 01:04:57.287031 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.287040 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:57.287047 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:57.287101 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:57.324523 1131323 cri.go:89] found id: ""
	I0328 01:04:57.324551 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.324560 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:57.324566 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:57.324627 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:57.363946 1131323 cri.go:89] found id: ""
	I0328 01:04:57.363984 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.363998 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:57.364012 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:57.364034 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:57.418300 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:57.418337 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:57.433214 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:57.433242 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:57.508623 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:57.508651 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:57.508665 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:57.586336 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:57.586377 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:00.129903 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:00.146829 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:00.146920 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:00.197823 1131323 cri.go:89] found id: ""
	I0328 01:05:00.197856 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.197865 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:00.197872 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:00.197930 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:00.257523 1131323 cri.go:89] found id: ""
	I0328 01:05:00.257561 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.257575 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:00.257584 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:00.257657 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:00.314511 1131323 cri.go:89] found id: ""
	I0328 01:05:00.314539 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.314549 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:00.314558 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:00.314610 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:00.351043 1131323 cri.go:89] found id: ""
	I0328 01:05:00.351076 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.351090 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:00.351098 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:00.351167 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:00.391477 1131323 cri.go:89] found id: ""
	I0328 01:05:00.391507 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.391519 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:00.391525 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:00.391595 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:00.436196 1131323 cri.go:89] found id: ""
	I0328 01:05:00.436230 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.436242 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:00.436249 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:00.436316 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:00.473389 1131323 cri.go:89] found id: ""
	I0328 01:05:00.473428 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.473441 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:00.473450 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:00.473523 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:00.508829 1131323 cri.go:89] found id: ""
	I0328 01:05:00.508866 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.508879 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:00.508901 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:00.508931 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:00.553709 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:00.553741 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:00.612679 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:00.612732 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:00.630908 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:00.630948 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:00.706984 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:00.707016 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:00.707034 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:03.287887 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:03.304679 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:03.304779 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:03.343579 1131323 cri.go:89] found id: ""
	I0328 01:05:03.343608 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.343618 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:03.343625 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:03.343677 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:03.387158 1131323 cri.go:89] found id: ""
	I0328 01:05:03.387192 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.387206 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:03.387224 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:03.387308 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:03.426622 1131323 cri.go:89] found id: ""
	I0328 01:05:03.426653 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.426663 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:03.426670 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:03.426724 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:03.468743 1131323 cri.go:89] found id: ""
	I0328 01:05:03.468780 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.468793 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:03.468801 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:03.468870 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:03.508518 1131323 cri.go:89] found id: ""
	I0328 01:05:03.508554 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.508566 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:03.508575 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:03.508653 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:03.548295 1131323 cri.go:89] found id: ""
	I0328 01:05:03.548331 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.548343 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:03.548353 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:03.548444 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:03.591561 1131323 cri.go:89] found id: ""
	I0328 01:05:03.591594 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.591607 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:03.591615 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:03.591670 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:03.635055 1131323 cri.go:89] found id: ""
	I0328 01:05:03.635086 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.635097 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:03.635109 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:03.635127 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:03.715639 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:03.715683 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:03.755888 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:03.755931 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:03.810128 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:03.810170 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:03.825197 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:03.825227 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:03.908589 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
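The recurring "describe nodes" failure is a side effect of the same condition: with no kube-apiserver container running, nothing is listening on localhost:8443, so the bundled kubectl is refused before it can do anything. A hedged way to confirm that on the node is sketched below; the kubectl path and kubeconfig location come from the log, while the use of `ss` assumes it is available in the minikube guest:

    # Check whether anything is listening on the apiserver port, then retry the same kubectl call.
    sudo ss -ltnp | grep -w 8443 || echo "nothing listening on :8443"
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl get nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig

If the port is closed, the kubeconfig itself is not the problem; the relevant evidence is in the kubelet and CRI-O journals gathered in the surrounding cycles.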
	I0328 01:05:06.409060 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:06.424034 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:06.424119 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:06.461827 1131323 cri.go:89] found id: ""
	I0328 01:05:06.461888 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.461902 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:06.461911 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:06.461985 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:06.505006 1131323 cri.go:89] found id: ""
	I0328 01:05:06.505061 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.505078 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:06.505085 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:06.505145 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:06.542000 1131323 cri.go:89] found id: ""
	I0328 01:05:06.542033 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.542044 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:06.542052 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:06.542121 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:06.583725 1131323 cri.go:89] found id: ""
	I0328 01:05:06.583778 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.583800 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:06.583810 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:06.583887 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:06.620457 1131323 cri.go:89] found id: ""
	I0328 01:05:06.620501 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.620516 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:06.620524 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:06.620595 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:06.664380 1131323 cri.go:89] found id: ""
	I0328 01:05:06.664412 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.664425 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:06.664432 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:06.664502 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:06.701799 1131323 cri.go:89] found id: ""
	I0328 01:05:06.701850 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.701862 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:06.701870 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:06.701935 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:06.739899 1131323 cri.go:89] found id: ""
	I0328 01:05:06.739936 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.739948 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:06.739958 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:06.739973 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:06.814373 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:06.814404 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:06.814421 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:06.894331 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:06.894371 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:06.952912 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:06.952979 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:07.011851 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:07.011900 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:09.528068 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:09.545082 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:09.545167 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:09.586944 1131323 cri.go:89] found id: ""
	I0328 01:05:09.586983 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.586996 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:09.587004 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:09.587077 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:09.624153 1131323 cri.go:89] found id: ""
	I0328 01:05:09.624184 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.624192 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:09.624198 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:09.624256 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:09.661125 1131323 cri.go:89] found id: ""
	I0328 01:05:09.661160 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.661172 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:09.661182 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:09.661256 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:09.699865 1131323 cri.go:89] found id: ""
	I0328 01:05:09.699903 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.699916 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:09.699925 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:09.699992 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:09.737925 1131323 cri.go:89] found id: ""
	I0328 01:05:09.737958 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.737967 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:09.737973 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:09.738029 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:09.776906 1131323 cri.go:89] found id: ""
	I0328 01:05:09.776941 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.776950 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:09.776957 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:09.777021 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:09.815767 1131323 cri.go:89] found id: ""
	I0328 01:05:09.815794 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.815803 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:09.815809 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:09.815876 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:09.855880 1131323 cri.go:89] found id: ""
	I0328 01:05:09.855915 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.855928 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:09.855941 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:09.855958 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:09.918339 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:09.918376 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:09.932775 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:09.932810 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:10.011566 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:10.011594 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:10.011610 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:10.096057 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:10.096100 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:12.641999 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:12.655761 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:12.655843 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:12.697335 1131323 cri.go:89] found id: ""
	I0328 01:05:12.697369 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.697381 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:12.697390 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:12.697453 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:12.736482 1131323 cri.go:89] found id: ""
	I0328 01:05:12.736520 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.736534 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:12.736544 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:12.736617 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:12.771992 1131323 cri.go:89] found id: ""
	I0328 01:05:12.772034 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.772046 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:12.772055 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:12.772125 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:12.810738 1131323 cri.go:89] found id: ""
	I0328 01:05:12.810770 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.810779 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:12.810786 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:12.810837 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:12.848172 1131323 cri.go:89] found id: ""
	I0328 01:05:12.848209 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.848222 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:12.848230 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:12.848310 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:12.884660 1131323 cri.go:89] found id: ""
	I0328 01:05:12.884698 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.884710 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:12.884719 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:12.884794 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:12.926180 1131323 cri.go:89] found id: ""
	I0328 01:05:12.926209 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.926218 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:12.926244 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:12.926303 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:12.966938 1131323 cri.go:89] found id: ""
	I0328 01:05:12.966969 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.966983 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:12.966996 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:12.967014 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:13.018501 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:13.018541 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:13.033140 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:13.033175 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:13.108806 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:13.108832 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:13.108858 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:13.189198 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:13.189241 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:15.737415 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:15.752534 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:15.752614 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:15.789941 1131323 cri.go:89] found id: ""
	I0328 01:05:15.789974 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.789986 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:15.789994 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:15.790107 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:15.827688 1131323 cri.go:89] found id: ""
	I0328 01:05:15.827731 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.827745 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:15.827766 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:15.827831 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:15.867005 1131323 cri.go:89] found id: ""
	I0328 01:05:15.867041 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.867054 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:15.867064 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:15.867149 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:15.909924 1131323 cri.go:89] found id: ""
	I0328 01:05:15.910035 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.910055 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:15.910066 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:15.910139 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:15.950571 1131323 cri.go:89] found id: ""
	I0328 01:05:15.950606 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.950619 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:15.950632 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:15.950707 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:15.992557 1131323 cri.go:89] found id: ""
	I0328 01:05:15.992594 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.992605 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:15.992615 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:15.992687 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:16.032417 1131323 cri.go:89] found id: ""
	I0328 01:05:16.032458 1131323 logs.go:276] 0 containers: []
	W0328 01:05:16.032473 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:16.032482 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:16.032559 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:16.071399 1131323 cri.go:89] found id: ""
	I0328 01:05:16.071433 1131323 logs.go:276] 0 containers: []
	W0328 01:05:16.071445 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:16.071459 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:16.071481 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:16.147078 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:16.147113 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:16.147131 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:16.223828 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:16.223870 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:16.269377 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:16.269409 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:16.318545 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:16.318584 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:18.836044 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:18.851138 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:18.851231 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:18.887223 1131323 cri.go:89] found id: ""
	I0328 01:05:18.887260 1131323 logs.go:276] 0 containers: []
	W0328 01:05:18.887273 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:18.887283 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:18.887354 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:18.928652 1131323 cri.go:89] found id: ""
	I0328 01:05:18.928682 1131323 logs.go:276] 0 containers: []
	W0328 01:05:18.928692 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:18.928698 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:18.928756 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:18.968519 1131323 cri.go:89] found id: ""
	I0328 01:05:18.968555 1131323 logs.go:276] 0 containers: []
	W0328 01:05:18.968567 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:18.968575 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:18.968646 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:19.010939 1131323 cri.go:89] found id: ""
	I0328 01:05:19.010977 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.010991 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:19.010999 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:19.011070 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:19.048723 1131323 cri.go:89] found id: ""
	I0328 01:05:19.048748 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.048758 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:19.048769 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:19.048820 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:19.091761 1131323 cri.go:89] found id: ""
	I0328 01:05:19.091794 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.091803 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:19.091810 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:19.091863 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:19.134017 1131323 cri.go:89] found id: ""
	I0328 01:05:19.134049 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.134059 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:19.134065 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:19.134119 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:19.176070 1131323 cri.go:89] found id: ""
	I0328 01:05:19.176106 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.176118 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:19.176131 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:19.176155 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:19.261546 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:19.261584 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:19.261605 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:19.340271 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:19.340314 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:19.383625 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:19.383676 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:19.441635 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:19.441679 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:21.958362 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:21.974427 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:21.974528 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:22.013099 1131323 cri.go:89] found id: ""
	I0328 01:05:22.013139 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.013152 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:22.013160 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:22.013229 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:22.055558 1131323 cri.go:89] found id: ""
	I0328 01:05:22.055594 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.055604 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:22.055611 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:22.055668 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:22.106836 1131323 cri.go:89] found id: ""
	I0328 01:05:22.106870 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.106879 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:22.106886 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:22.106961 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:22.145135 1131323 cri.go:89] found id: ""
	I0328 01:05:22.145175 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.145189 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:22.145197 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:22.145266 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:22.183879 1131323 cri.go:89] found id: ""
	I0328 01:05:22.183909 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.183919 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:22.183926 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:22.183981 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:22.223087 1131323 cri.go:89] found id: ""
	I0328 01:05:22.223115 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.223124 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:22.223131 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:22.223209 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:22.263232 1131323 cri.go:89] found id: ""
	I0328 01:05:22.263262 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.263272 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:22.263279 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:22.263331 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:22.302919 1131323 cri.go:89] found id: ""
	I0328 01:05:22.302954 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.302967 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:22.302980 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:22.302998 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:22.358550 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:22.358596 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:22.374688 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:22.374722 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:22.453584 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:22.453609 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:22.453624 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:22.540983 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:22.541048 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:25.091773 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:25.107412 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:25.107484 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:25.143917 1131323 cri.go:89] found id: ""
	I0328 01:05:25.143944 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.143953 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:25.143960 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:25.144013 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:25.183615 1131323 cri.go:89] found id: ""
	I0328 01:05:25.183650 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.183659 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:25.183666 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:25.183729 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:25.221125 1131323 cri.go:89] found id: ""
	I0328 01:05:25.221158 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.221167 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:25.221174 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:25.221229 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:25.262023 1131323 cri.go:89] found id: ""
	I0328 01:05:25.262056 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.262065 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:25.262072 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:25.262134 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:25.297919 1131323 cri.go:89] found id: ""
	I0328 01:05:25.297948 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.297957 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:25.297964 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:25.298035 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:25.336582 1131323 cri.go:89] found id: ""
	I0328 01:05:25.336610 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.336620 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:25.336627 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:25.336690 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:25.375554 1131323 cri.go:89] found id: ""
	I0328 01:05:25.375589 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.375600 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:25.375609 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:25.375683 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:25.415941 1131323 cri.go:89] found id: ""
	I0328 01:05:25.415973 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.415984 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:25.415996 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:25.416013 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:25.430168 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:25.430196 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:25.507782 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:25.507805 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:25.507862 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:25.588780 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:25.588824 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:25.634958 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:25.634997 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:28.190651 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:28.205714 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:28.205794 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:28.242015 1131323 cri.go:89] found id: ""
	I0328 01:05:28.242056 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.242067 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:28.242077 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:28.242169 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:28.289132 1131323 cri.go:89] found id: ""
	I0328 01:05:28.289169 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.289182 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:28.289189 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:28.289256 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:28.327001 1131323 cri.go:89] found id: ""
	I0328 01:05:28.327031 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.327040 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:28.327052 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:28.327105 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:28.365474 1131323 cri.go:89] found id: ""
	I0328 01:05:28.365507 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.365516 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:28.365523 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:28.365587 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:28.405494 1131323 cri.go:89] found id: ""
	I0328 01:05:28.405553 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.405567 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:28.405576 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:28.405652 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:28.448464 1131323 cri.go:89] found id: ""
	I0328 01:05:28.448502 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.448512 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:28.448521 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:28.448594 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:28.488143 1131323 cri.go:89] found id: ""
	I0328 01:05:28.488172 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.488182 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:28.488189 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:28.488258 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:28.545977 1131323 cri.go:89] found id: ""
	I0328 01:05:28.546012 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.546024 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:28.546036 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:28.546050 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:28.629955 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:28.630001 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:28.670504 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:28.670536 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:28.722021 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:28.722069 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:28.737274 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:28.737310 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:28.824025 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:31.324497 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:31.339715 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:31.339811 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:31.379017 1131323 cri.go:89] found id: ""
	I0328 01:05:31.379050 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.379062 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:31.379072 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:31.379138 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:31.420024 1131323 cri.go:89] found id: ""
	I0328 01:05:31.420055 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.420065 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:31.420071 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:31.420136 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:31.458732 1131323 cri.go:89] found id: ""
	I0328 01:05:31.458764 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.458773 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:31.458779 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:31.458835 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:31.504249 1131323 cri.go:89] found id: ""
	I0328 01:05:31.504280 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.504292 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:31.504300 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:31.504366 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:31.545284 1131323 cri.go:89] found id: ""
	I0328 01:05:31.545316 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.545324 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:31.545331 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:31.545385 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:31.583402 1131323 cri.go:89] found id: ""
	I0328 01:05:31.583434 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.583444 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:31.583453 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:31.583587 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:31.624411 1131323 cri.go:89] found id: ""
	I0328 01:05:31.624449 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.624462 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:31.624471 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:31.624528 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:31.666103 1131323 cri.go:89] found id: ""
	I0328 01:05:31.666144 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.666158 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:31.666173 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:31.666192 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:31.717595 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:31.717636 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:31.731606 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:31.731637 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:31.803302 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:31.803325 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:31.803339 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:31.885552 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:31.885590 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:34.432446 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:34.448002 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:34.448085 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:34.493207 1131323 cri.go:89] found id: ""
	I0328 01:05:34.493246 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.493259 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:34.493268 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:34.493337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:34.541838 1131323 cri.go:89] found id: ""
	I0328 01:05:34.541871 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.541883 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:34.541891 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:34.541956 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:34.582319 1131323 cri.go:89] found id: ""
	I0328 01:05:34.582357 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.582371 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:34.582380 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:34.582458 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:34.618753 1131323 cri.go:89] found id: ""
	I0328 01:05:34.618788 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.618801 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:34.618810 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:34.618882 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:34.656994 1131323 cri.go:89] found id: ""
	I0328 01:05:34.657027 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.657037 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:34.657043 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:34.657114 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:34.695214 1131323 cri.go:89] found id: ""
	I0328 01:05:34.695252 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.695264 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:34.695271 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:34.695337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:34.733688 1131323 cri.go:89] found id: ""
	I0328 01:05:34.733718 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.733731 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:34.733739 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:34.733808 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:34.771697 1131323 cri.go:89] found id: ""
	I0328 01:05:34.771729 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.771744 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:34.771758 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:34.771776 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:34.828190 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:34.828236 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:34.842741 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:34.842776 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:34.918494 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:34.918525 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:34.918541 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:35.012689 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:35.012747 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:37.574759 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:37.590014 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:37.590128 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:37.626883 1131323 cri.go:89] found id: ""
	I0328 01:05:37.626914 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.626926 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:37.626935 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:37.627005 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:37.665171 1131323 cri.go:89] found id: ""
	I0328 01:05:37.665202 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.665215 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:37.665225 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:37.665294 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:37.702923 1131323 cri.go:89] found id: ""
	I0328 01:05:37.702963 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.702976 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:37.702984 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:37.703064 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:37.741148 1131323 cri.go:89] found id: ""
	I0328 01:05:37.741182 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.741191 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:37.741199 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:37.741269 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:37.782298 1131323 cri.go:89] found id: ""
	I0328 01:05:37.782331 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.782341 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:37.782348 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:37.782407 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:37.819056 1131323 cri.go:89] found id: ""
	I0328 01:05:37.819110 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.819124 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:37.819134 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:37.819215 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:37.862372 1131323 cri.go:89] found id: ""
	I0328 01:05:37.862414 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.862427 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:37.862436 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:37.862507 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:37.899639 1131323 cri.go:89] found id: ""
	I0328 01:05:37.899675 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.899689 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:37.899703 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:37.899721 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:37.978962 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:37.978990 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:37.979007 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:38.058972 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:38.059015 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:38.102975 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:38.103016 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:38.157994 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:38.158035 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:40.673425 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:40.690969 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:40.691041 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:40.735552 1131323 cri.go:89] found id: ""
	I0328 01:05:40.735585 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.735594 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:40.735602 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:40.735669 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:40.816611 1131323 cri.go:89] found id: ""
	I0328 01:05:40.816648 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.816661 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:40.816669 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:40.816725 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:40.864093 1131323 cri.go:89] found id: ""
	I0328 01:05:40.864125 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.864138 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:40.864147 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:40.864218 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:40.908781 1131323 cri.go:89] found id: ""
	I0328 01:05:40.908817 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.908829 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:40.908846 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:40.908914 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:40.950330 1131323 cri.go:89] found id: ""
	I0328 01:05:40.950369 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.950382 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:40.950390 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:40.950481 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:40.989983 1131323 cri.go:89] found id: ""
	I0328 01:05:40.990041 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.990054 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:40.990063 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:40.990136 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:41.042428 1131323 cri.go:89] found id: ""
	I0328 01:05:41.042470 1131323 logs.go:276] 0 containers: []
	W0328 01:05:41.042481 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:41.042489 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:41.042560 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:41.089309 1131323 cri.go:89] found id: ""
	I0328 01:05:41.089342 1131323 logs.go:276] 0 containers: []
	W0328 01:05:41.089353 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:41.089363 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:41.089377 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:41.148502 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:41.148547 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:41.163889 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:41.163918 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:41.242825 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:41.242848 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:41.242861 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:41.322658 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:41.322702 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:43.865117 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:43.880642 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:43.880729 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:43.919519 1131323 cri.go:89] found id: ""
	I0328 01:05:43.919550 1131323 logs.go:276] 0 containers: []
	W0328 01:05:43.919559 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:43.919565 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:43.919622 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:43.957906 1131323 cri.go:89] found id: ""
	I0328 01:05:43.957936 1131323 logs.go:276] 0 containers: []
	W0328 01:05:43.957945 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:43.957951 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:43.958008 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:44.001448 1131323 cri.go:89] found id: ""
	I0328 01:05:44.001486 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.001497 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:44.001505 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:44.001573 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:44.039767 1131323 cri.go:89] found id: ""
	I0328 01:05:44.039801 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.039812 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:44.039818 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:44.039871 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:44.079441 1131323 cri.go:89] found id: ""
	I0328 01:05:44.079470 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.079480 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:44.079486 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:44.079541 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:44.116534 1131323 cri.go:89] found id: ""
	I0328 01:05:44.116584 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.116596 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:44.116604 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:44.116670 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:44.163335 1131323 cri.go:89] found id: ""
	I0328 01:05:44.163369 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.163381 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:44.163389 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:44.163457 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:44.201367 1131323 cri.go:89] found id: ""
	I0328 01:05:44.201403 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.201413 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:44.201424 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:44.201442 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:44.257485 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:44.257529 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:44.272489 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:44.272534 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:44.354442 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:44.354477 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:44.354498 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:44.436219 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:44.436262 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:46.982131 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:46.998022 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:46.998100 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:47.037167 1131323 cri.go:89] found id: ""
	I0328 01:05:47.037205 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.037217 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:47.037226 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:47.037295 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:47.076175 1131323 cri.go:89] found id: ""
	I0328 01:05:47.076213 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.076226 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:47.076235 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:47.076306 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:47.115193 1131323 cri.go:89] found id: ""
	I0328 01:05:47.115227 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.115237 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:47.115244 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:47.115297 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:47.154942 1131323 cri.go:89] found id: ""
	I0328 01:05:47.154976 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.154989 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:47.154998 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:47.155069 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:47.196571 1131323 cri.go:89] found id: ""
	I0328 01:05:47.196609 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.196622 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:47.196631 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:47.196707 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:47.237572 1131323 cri.go:89] found id: ""
	I0328 01:05:47.237616 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.237625 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:47.237633 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:47.237691 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:47.275208 1131323 cri.go:89] found id: ""
	I0328 01:05:47.275254 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.275265 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:47.275272 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:47.275329 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:47.313515 1131323 cri.go:89] found id: ""
	I0328 01:05:47.313555 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.313568 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:47.313582 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:47.313598 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:47.368993 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:47.369033 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:47.383063 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:47.383097 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:47.460239 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:47.460278 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:47.460298 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:47.538552 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:47.538594 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:50.084960 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:50.101764 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:50.101859 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:50.141457 1131323 cri.go:89] found id: ""
	I0328 01:05:50.141488 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.141497 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:50.141504 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:50.141557 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:50.178184 1131323 cri.go:89] found id: ""
	I0328 01:05:50.178220 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.178254 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:50.178263 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:50.178358 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:50.217908 1131323 cri.go:89] found id: ""
	I0328 01:05:50.217946 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.217959 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:50.217966 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:50.218027 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:50.256029 1131323 cri.go:89] found id: ""
	I0328 01:05:50.256058 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.256067 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:50.256074 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:50.256130 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:50.295054 1131323 cri.go:89] found id: ""
	I0328 01:05:50.295087 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.295100 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:50.295106 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:50.295165 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:50.334695 1131323 cri.go:89] found id: ""
	I0328 01:05:50.336588 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.336605 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:50.336614 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:50.336697 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:50.375968 1131323 cri.go:89] found id: ""
	I0328 01:05:50.376003 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.376013 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:50.376021 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:50.376091 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:50.417146 1131323 cri.go:89] found id: ""
	I0328 01:05:50.417175 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.417184 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:50.417194 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:50.417207 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:50.474090 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:50.474131 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:50.489006 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:50.489040 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:50.566220 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:50.566268 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:50.566286 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:50.645593 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:50.645653 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:53.190872 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:53.205223 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:53.205320 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:53.242396 1131323 cri.go:89] found id: ""
	I0328 01:05:53.242433 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.242445 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:53.242455 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:53.242524 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:53.281237 1131323 cri.go:89] found id: ""
	I0328 01:05:53.281275 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.281288 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:53.281297 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:53.281357 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:53.321239 1131323 cri.go:89] found id: ""
	I0328 01:05:53.321268 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.321287 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:53.321296 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:53.321358 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:53.359240 1131323 cri.go:89] found id: ""
	I0328 01:05:53.359269 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.359278 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:53.359284 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:53.359337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:53.396973 1131323 cri.go:89] found id: ""
	I0328 01:05:53.397008 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.397021 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:53.397030 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:53.397100 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:53.438368 1131323 cri.go:89] found id: ""
	I0328 01:05:53.438400 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.438408 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:53.438415 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:53.438477 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:53.474679 1131323 cri.go:89] found id: ""
	I0328 01:05:53.474708 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.474732 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:53.474742 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:53.474799 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:53.512509 1131323 cri.go:89] found id: ""
	I0328 01:05:53.512547 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.512560 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:53.512579 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:53.512599 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:53.569536 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:53.569580 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:53.584977 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:53.585016 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:53.657865 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:53.657895 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:53.657908 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:53.733158 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:53.733203 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:56.278693 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:56.291870 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:56.291949 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:56.332909 1131323 cri.go:89] found id: ""
	I0328 01:05:56.332943 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.332957 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:56.332965 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:56.333038 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:56.370608 1131323 cri.go:89] found id: ""
	I0328 01:05:56.370638 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.370649 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:56.370657 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:56.370721 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:56.408031 1131323 cri.go:89] found id: ""
	I0328 01:05:56.408068 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.408081 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:56.408100 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:56.408170 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:56.445057 1131323 cri.go:89] found id: ""
	I0328 01:05:56.445092 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.445105 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:56.445113 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:56.445177 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:56.486868 1131323 cri.go:89] found id: ""
	I0328 01:05:56.486898 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.486908 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:56.486914 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:56.486969 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:56.533594 1131323 cri.go:89] found id: ""
	I0328 01:05:56.533622 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.533632 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:56.533638 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:56.533702 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:56.569200 1131323 cri.go:89] found id: ""
	I0328 01:05:56.569237 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.569250 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:56.569258 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:56.569335 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:56.604919 1131323 cri.go:89] found id: ""
	I0328 01:05:56.604955 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.604968 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:56.604982 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:56.605011 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:56.654473 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:56.654513 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:56.671309 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:56.671339 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:56.739516 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:56.739543 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:56.739559 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:56.817445 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:56.817495 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:59.361711 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:59.375672 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:59.375750 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:59.414329 1131323 cri.go:89] found id: ""
	I0328 01:05:59.414360 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.414371 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:59.414379 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:59.414443 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:59.454813 1131323 cri.go:89] found id: ""
	I0328 01:05:59.454846 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.454855 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:59.454862 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:59.454917 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:59.492890 1131323 cri.go:89] found id: ""
	I0328 01:05:59.492924 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.492936 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:59.492946 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:59.493043 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:59.529412 1131323 cri.go:89] found id: ""
	I0328 01:05:59.529443 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.529454 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:59.529464 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:59.529521 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:59.568620 1131323 cri.go:89] found id: ""
	I0328 01:05:59.568655 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.568664 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:59.568671 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:59.568731 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:59.605826 1131323 cri.go:89] found id: ""
	I0328 01:05:59.605861 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.605874 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:59.605883 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:59.605955 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:59.645799 1131323 cri.go:89] found id: ""
	I0328 01:05:59.645833 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.645847 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:59.645856 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:59.645931 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:59.683866 1131323 cri.go:89] found id: ""
	I0328 01:05:59.683903 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.683916 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:59.683929 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:59.683953 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:59.726678 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:59.726711 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:59.779910 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:59.779954 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:59.795743 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:59.795774 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:59.875137 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:59.875162 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:59.875174 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:02.455212 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:02.468850 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:02.468945 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:02.506347 1131323 cri.go:89] found id: ""
	I0328 01:06:02.506385 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.506397 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:02.506406 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:02.506484 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:02.546056 1131323 cri.go:89] found id: ""
	I0328 01:06:02.546085 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.546096 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:02.546103 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:02.546173 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:02.585343 1131323 cri.go:89] found id: ""
	I0328 01:06:02.585385 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.585398 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:02.585407 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:02.585563 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:02.625380 1131323 cri.go:89] found id: ""
	I0328 01:06:02.625414 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.625423 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:02.625429 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:02.625486 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:02.664653 1131323 cri.go:89] found id: ""
	I0328 01:06:02.664687 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.664701 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:02.664708 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:02.664764 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:02.704468 1131323 cri.go:89] found id: ""
	I0328 01:06:02.704498 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.704511 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:02.704519 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:02.704595 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:02.740969 1131323 cri.go:89] found id: ""
	I0328 01:06:02.740997 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.741007 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:02.741014 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:02.741102 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:02.782113 1131323 cri.go:89] found id: ""
	I0328 01:06:02.782150 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.782163 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:02.782185 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:02.782203 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:02.836804 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:02.836848 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:02.852266 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:02.852299 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:02.929441 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:02.929467 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:02.929484 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:03.008114 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:03.008156 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:05.554291 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:05.570208 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:05.570304 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:05.610887 1131323 cri.go:89] found id: ""
	I0328 01:06:05.610916 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.610926 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:05.610932 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:05.610991 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:05.651561 1131323 cri.go:89] found id: ""
	I0328 01:06:05.651600 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.651610 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:05.651616 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:05.651681 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:05.690801 1131323 cri.go:89] found id: ""
	I0328 01:06:05.690830 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.690843 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:05.690851 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:05.690920 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:05.729098 1131323 cri.go:89] found id: ""
	I0328 01:06:05.729136 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.729146 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:05.729153 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:05.729225 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:05.774461 1131323 cri.go:89] found id: ""
	I0328 01:06:05.774499 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.774520 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:05.774530 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:05.774602 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:05.812135 1131323 cri.go:89] found id: ""
	I0328 01:06:05.812166 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.812180 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:05.812188 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:05.812255 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:05.847744 1131323 cri.go:89] found id: ""
	I0328 01:06:05.847775 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.847786 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:05.847796 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:05.847863 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:05.885600 1131323 cri.go:89] found id: ""
	I0328 01:06:05.885641 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.885656 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:05.885669 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:05.885684 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:05.963837 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:05.963879 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:06.007342 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:06.007381 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:06.062798 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:06.062843 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:06.077547 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:06.077599 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:06.148373 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:08.648791 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:08.664082 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:08.664154 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:08.701746 1131323 cri.go:89] found id: ""
	I0328 01:06:08.701776 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.701789 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:08.701797 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:08.701855 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:08.739035 1131323 cri.go:89] found id: ""
	I0328 01:06:08.739066 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.739076 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:08.739083 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:08.739136 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:08.776128 1131323 cri.go:89] found id: ""
	I0328 01:06:08.776166 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.776180 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:08.776189 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:08.776255 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:08.816136 1131323 cri.go:89] found id: ""
	I0328 01:06:08.816172 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.816187 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:08.816196 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:08.816271 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:08.855675 1131323 cri.go:89] found id: ""
	I0328 01:06:08.855709 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.855722 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:08.855730 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:08.855802 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:08.893161 1131323 cri.go:89] found id: ""
	I0328 01:06:08.893198 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.893212 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:08.893221 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:08.893297 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:08.935498 1131323 cri.go:89] found id: ""
	I0328 01:06:08.935527 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.935540 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:08.935548 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:08.935622 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:08.971622 1131323 cri.go:89] found id: ""
	I0328 01:06:08.971657 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.971668 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:08.971679 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:08.971696 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:09.039975 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:09.040036 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:09.057877 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:09.057920 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:09.130093 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:09.130119 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:09.130135 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:09.217177 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:09.217228 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:11.762393 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:11.776356 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:11.776424 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:11.811982 1131323 cri.go:89] found id: ""
	I0328 01:06:11.812017 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.812030 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:11.812038 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:11.812103 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:11.849789 1131323 cri.go:89] found id: ""
	I0328 01:06:11.849817 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.849826 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:11.849833 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:11.849884 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:11.890455 1131323 cri.go:89] found id: ""
	I0328 01:06:11.890488 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.890497 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:11.890503 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:11.890559 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:11.929047 1131323 cri.go:89] found id: ""
	I0328 01:06:11.929093 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.929102 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:11.929108 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:11.929164 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:11.969536 1131323 cri.go:89] found id: ""
	I0328 01:06:11.969566 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.969576 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:11.969583 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:11.969641 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:12.008779 1131323 cri.go:89] found id: ""
	I0328 01:06:12.008811 1131323 logs.go:276] 0 containers: []
	W0328 01:06:12.008821 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:12.008828 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:12.008890 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:12.044061 1131323 cri.go:89] found id: ""
	I0328 01:06:12.044091 1131323 logs.go:276] 0 containers: []
	W0328 01:06:12.044104 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:12.044112 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:12.044176 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:12.082307 1131323 cri.go:89] found id: ""
	I0328 01:06:12.082336 1131323 logs.go:276] 0 containers: []
	W0328 01:06:12.082346 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:12.082357 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:12.082369 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:12.133044 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:12.133091 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:12.148584 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:12.148624 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:12.218799 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:12.218834 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:12.218852 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:12.295580 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:12.295623 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:14.842815 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:14.856385 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:14.856456 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:14.895351 1131323 cri.go:89] found id: ""
	I0328 01:06:14.895409 1131323 logs.go:276] 0 containers: []
	W0328 01:06:14.895418 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:14.895424 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:14.895476 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:14.930333 1131323 cri.go:89] found id: ""
	I0328 01:06:14.930366 1131323 logs.go:276] 0 containers: []
	W0328 01:06:14.930380 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:14.930389 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:14.930461 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:14.968701 1131323 cri.go:89] found id: ""
	I0328 01:06:14.968742 1131323 logs.go:276] 0 containers: []
	W0328 01:06:14.968754 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:14.968767 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:14.968867 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:15.004580 1131323 cri.go:89] found id: ""
	I0328 01:06:15.004613 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.004626 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:15.004634 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:15.004700 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:15.046702 1131323 cri.go:89] found id: ""
	I0328 01:06:15.046726 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.046736 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:15.046742 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:15.046795 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:15.088693 1131323 cri.go:89] found id: ""
	I0328 01:06:15.088725 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.088734 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:15.088741 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:15.088797 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:15.130293 1131323 cri.go:89] found id: ""
	I0328 01:06:15.130324 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.130333 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:15.130339 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:15.130394 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:15.172381 1131323 cri.go:89] found id: ""
	I0328 01:06:15.172408 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.172417 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:15.172427 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:15.172440 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:15.225631 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:15.225674 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:15.241251 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:15.241294 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:15.319701 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:15.319731 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:15.319747 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:15.406813 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:15.406853 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:17.993893 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:18.007755 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:18.007843 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:18.047750 1131323 cri.go:89] found id: ""
	I0328 01:06:18.047777 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.047786 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:18.047797 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:18.047855 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:18.088264 1131323 cri.go:89] found id: ""
	I0328 01:06:18.088291 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.088303 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:18.088311 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:18.088369 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:18.127485 1131323 cri.go:89] found id: ""
	I0328 01:06:18.127514 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.127523 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:18.127530 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:18.127581 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:18.167462 1131323 cri.go:89] found id: ""
	I0328 01:06:18.167496 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.167510 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:18.167516 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:18.167571 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:18.209536 1131323 cri.go:89] found id: ""
	I0328 01:06:18.209571 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.209583 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:18.209591 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:18.209662 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:18.247565 1131323 cri.go:89] found id: ""
	I0328 01:06:18.247601 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.247614 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:18.247623 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:18.247701 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:18.288123 1131323 cri.go:89] found id: ""
	I0328 01:06:18.288162 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.288172 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:18.288179 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:18.288242 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:18.328132 1131323 cri.go:89] found id: ""
	I0328 01:06:18.328161 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.328170 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:18.328181 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:18.328193 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:18.403245 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:18.403287 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:18.403305 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:18.483446 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:18.483500 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:18.527357 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:18.527392 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:18.588402 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:18.588463 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:21.103566 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:21.117538 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:21.117616 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:21.174215 1131323 cri.go:89] found id: ""
	I0328 01:06:21.174270 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.174284 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:21.174293 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:21.174364 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:21.238666 1131323 cri.go:89] found id: ""
	I0328 01:06:21.238707 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.238722 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:21.238730 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:21.238803 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:21.303510 1131323 cri.go:89] found id: ""
	I0328 01:06:21.303543 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.303553 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:21.303559 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:21.303614 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:21.345823 1131323 cri.go:89] found id: ""
	I0328 01:06:21.345853 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.345862 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:21.345870 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:21.345940 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:21.386205 1131323 cri.go:89] found id: ""
	I0328 01:06:21.386248 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.386261 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:21.386269 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:21.386335 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:21.427424 1131323 cri.go:89] found id: ""
	I0328 01:06:21.427457 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.427470 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:21.427478 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:21.427546 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:21.465054 1131323 cri.go:89] found id: ""
	I0328 01:06:21.465087 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.465099 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:21.465107 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:21.465177 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:21.507197 1131323 cri.go:89] found id: ""
	I0328 01:06:21.507229 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.507238 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:21.507248 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:21.507263 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:21.586657 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:21.586709 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:21.633702 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:21.633739 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:21.688960 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:21.688999 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:21.704675 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:21.704714 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:21.781612 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:24.282521 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:24.297096 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:24.297185 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:24.338745 1131323 cri.go:89] found id: ""
	I0328 01:06:24.338780 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.338793 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:24.338802 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:24.338872 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:24.375499 1131323 cri.go:89] found id: ""
	I0328 01:06:24.375528 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.375540 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:24.375548 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:24.375616 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:24.410939 1131323 cri.go:89] found id: ""
	I0328 01:06:24.410966 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.410978 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:24.410986 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:24.411042 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:24.455316 1131323 cri.go:89] found id: ""
	I0328 01:06:24.455345 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.455354 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:24.455360 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:24.455427 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:24.493177 1131323 cri.go:89] found id: ""
	I0328 01:06:24.493206 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.493219 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:24.493228 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:24.493300 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:24.533612 1131323 cri.go:89] found id: ""
	I0328 01:06:24.533648 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.533659 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:24.533668 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:24.533743 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:24.573960 1131323 cri.go:89] found id: ""
	I0328 01:06:24.573998 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.574014 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:24.574020 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:24.574074 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:24.617282 1131323 cri.go:89] found id: ""
	I0328 01:06:24.617319 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.617333 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:24.617346 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:24.617364 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:24.691660 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:24.691688 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:24.691707 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:24.773138 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:24.773180 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:24.820408 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:24.820440 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:24.875901 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:24.875940 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:27.392663 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:27.407958 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:27.408046 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:27.446750 1131323 cri.go:89] found id: ""
	I0328 01:06:27.446782 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.446792 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:27.446799 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:27.446872 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:27.489199 1131323 cri.go:89] found id: ""
	I0328 01:06:27.489236 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.489249 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:27.489258 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:27.489316 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:27.525754 1131323 cri.go:89] found id: ""
	I0328 01:06:27.525787 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.525796 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:27.525803 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:27.525861 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:27.560817 1131323 cri.go:89] found id: ""
	I0328 01:06:27.560849 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.560858 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:27.560866 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:27.560930 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:27.597706 1131323 cri.go:89] found id: ""
	I0328 01:06:27.597736 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.597744 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:27.597750 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:27.597821 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:27.635170 1131323 cri.go:89] found id: ""
	I0328 01:06:27.635211 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.635223 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:27.635232 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:27.635299 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:27.672043 1131323 cri.go:89] found id: ""
	I0328 01:06:27.672079 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.672091 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:27.672099 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:27.672166 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:27.711401 1131323 cri.go:89] found id: ""
	I0328 01:06:27.711435 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.711448 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:27.711468 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:27.711488 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:27.755172 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:27.755211 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:27.807588 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:27.807632 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:27.823557 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:27.823589 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:27.905292 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:27.905316 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:27.905329 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:30.491565 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:30.505601 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:30.505667 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:30.541894 1131323 cri.go:89] found id: ""
	I0328 01:06:30.541929 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.541940 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:30.541949 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:30.542029 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:30.581484 1131323 cri.go:89] found id: ""
	I0328 01:06:30.581514 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.581532 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:30.581538 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:30.581613 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:30.624788 1131323 cri.go:89] found id: ""
	I0328 01:06:30.624830 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.624842 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:30.624850 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:30.624922 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:30.664373 1131323 cri.go:89] found id: ""
	I0328 01:06:30.664403 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.664413 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:30.664420 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:30.664489 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:30.702885 1131323 cri.go:89] found id: ""
	I0328 01:06:30.702917 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.702928 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:30.702934 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:30.703006 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:30.748170 1131323 cri.go:89] found id: ""
	I0328 01:06:30.748205 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.748217 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:30.748226 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:30.748316 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:30.785218 1131323 cri.go:89] found id: ""
	I0328 01:06:30.785255 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.785268 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:30.785276 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:30.785343 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:30.825529 1131323 cri.go:89] found id: ""
	I0328 01:06:30.825555 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.825565 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:30.825575 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:30.825589 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:30.881353 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:30.881391 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:30.896682 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:30.896718 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:30.973356 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:30.973386 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:30.973402 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:31.049014 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:31.049047 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:33.594365 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:33.609372 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:33.609460 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:33.648699 1131323 cri.go:89] found id: ""
	I0328 01:06:33.648728 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.648749 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:33.648757 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:33.648829 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:33.686707 1131323 cri.go:89] found id: ""
	I0328 01:06:33.686744 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.686758 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:33.686767 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:33.686832 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:33.723091 1131323 cri.go:89] found id: ""
	I0328 01:06:33.723121 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.723130 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:33.723136 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:33.723187 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:33.763439 1131323 cri.go:89] found id: ""
	I0328 01:06:33.763471 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.763481 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:33.763488 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:33.763544 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:33.812236 1131323 cri.go:89] found id: ""
	I0328 01:06:33.812271 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.812285 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:33.812294 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:33.812365 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:33.849421 1131323 cri.go:89] found id: ""
	I0328 01:06:33.849454 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.849465 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:33.849473 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:33.849528 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:33.888020 1131323 cri.go:89] found id: ""
	I0328 01:06:33.888051 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.888065 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:33.888078 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:33.888145 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:33.925952 1131323 cri.go:89] found id: ""
	I0328 01:06:33.925990 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.926003 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:33.926016 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:33.926034 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:33.976695 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:33.976734 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:33.991708 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:33.991752 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:34.068244 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:34.068276 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:34.068293 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:34.155843 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:34.155885 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:36.697480 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:36.712322 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:36.712420 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:36.749541 1131323 cri.go:89] found id: ""
	I0328 01:06:36.749570 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.749579 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:36.749587 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:36.749655 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:36.788226 1131323 cri.go:89] found id: ""
	I0328 01:06:36.788255 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.788264 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:36.788270 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:36.788323 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:36.823824 1131323 cri.go:89] found id: ""
	I0328 01:06:36.823856 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.823866 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:36.823872 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:36.823927 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:36.869331 1131323 cri.go:89] found id: ""
	I0328 01:06:36.869362 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.869371 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:36.869378 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:36.869473 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:36.907918 1131323 cri.go:89] found id: ""
	I0328 01:06:36.907950 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.907960 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:36.907966 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:36.908028 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:36.947708 1131323 cri.go:89] found id: ""
	I0328 01:06:36.947738 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.947749 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:36.947757 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:36.947824 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:36.986200 1131323 cri.go:89] found id: ""
	I0328 01:06:36.986251 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.986266 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:36.986275 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:36.986350 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:37.026670 1131323 cri.go:89] found id: ""
	I0328 01:06:37.026698 1131323 logs.go:276] 0 containers: []
	W0328 01:06:37.026708 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:37.026718 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:37.026732 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:37.079891 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:37.079933 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:37.094347 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:37.094378 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:37.168653 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:37.168681 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:37.168695 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:37.247909 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:37.247949 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:39.791285 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:39.807921 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:39.808000 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:39.851460 1131323 cri.go:89] found id: ""
	I0328 01:06:39.851499 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.851512 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:39.851520 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:39.851593 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:39.889506 1131323 cri.go:89] found id: ""
	I0328 01:06:39.889541 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.889554 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:39.889564 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:39.889632 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:39.930291 1131323 cri.go:89] found id: ""
	I0328 01:06:39.930321 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.930331 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:39.930337 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:39.930400 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:39.965121 1131323 cri.go:89] found id: ""
	I0328 01:06:39.965160 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.965174 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:39.965183 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:39.965252 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:40.003217 1131323 cri.go:89] found id: ""
	I0328 01:06:40.003248 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.003258 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:40.003264 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:40.003319 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:40.042702 1131323 cri.go:89] found id: ""
	I0328 01:06:40.042737 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.042749 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:40.042759 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:40.042826 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:40.079733 1131323 cri.go:89] found id: ""
	I0328 01:06:40.079769 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.079780 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:40.079788 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:40.079852 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:40.117066 1131323 cri.go:89] found id: ""
	I0328 01:06:40.117098 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.117107 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:40.117117 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:40.117130 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:40.158589 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:40.158623 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:40.210997 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:40.211049 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:40.225419 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:40.225453 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:40.305356 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:40.305385 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:40.305401 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
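	(The cycle above, which repeats below at later timestamps, probes each control-plane component with `sudo crictl ps -a --quiet --name=<component>`, treats empty output as "No container was found matching", and then falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A minimal sketch of that probe, assuming direct shell access to the node rather than minikube's ssh_runner; the helper name and structure are illustrative only, not minikube's actual code.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers mirrors the probe shown in the log:
	//   sudo crictl ps -a --quiet --name=<name>
	// Empty output means no container (running or exited) matches that name.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, c := range components {
			ids, err := listContainers(c)
			if err != nil {
				fmt.Printf("probe failed for %q: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", c)
			} else {
				fmt.Printf("%q containers: %v\n", c, ids)
			}
		}
	}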
	I0328 01:06:42.896394 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:42.912285 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:42.912355 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:42.949381 1131323 cri.go:89] found id: ""
	I0328 01:06:42.949411 1131323 logs.go:276] 0 containers: []
	W0328 01:06:42.949420 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:42.949427 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:42.949496 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:42.985325 1131323 cri.go:89] found id: ""
	I0328 01:06:42.985358 1131323 logs.go:276] 0 containers: []
	W0328 01:06:42.985371 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:42.985388 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:42.985456 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:43.023570 1131323 cri.go:89] found id: ""
	I0328 01:06:43.023616 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.023630 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:43.023638 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:43.023714 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:43.062995 1131323 cri.go:89] found id: ""
	I0328 01:06:43.063025 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.063036 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:43.063042 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:43.063111 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:43.101666 1131323 cri.go:89] found id: ""
	I0328 01:06:43.101704 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.101713 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:43.101720 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:43.101789 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:43.150713 1131323 cri.go:89] found id: ""
	I0328 01:06:43.150745 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.150757 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:43.150765 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:43.150830 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:43.193449 1131323 cri.go:89] found id: ""
	I0328 01:06:43.193479 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.193487 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:43.193495 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:43.193559 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:43.237641 1131323 cri.go:89] found id: ""
	I0328 01:06:43.237673 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.237682 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:43.237698 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:43.237714 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:43.287282 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:43.287320 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:43.303307 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:43.303343 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:43.383597 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:43.383619 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:43.383632 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:43.467874 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:43.467914 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:46.011081 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:46.025731 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:46.025824 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:46.064336 1131323 cri.go:89] found id: ""
	I0328 01:06:46.064371 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.064385 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:46.064394 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:46.064451 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:46.104493 1131323 cri.go:89] found id: ""
	I0328 01:06:46.104530 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.104550 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:46.104559 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:46.104636 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:46.147546 1131323 cri.go:89] found id: ""
	I0328 01:06:46.147582 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.147594 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:46.147602 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:46.147656 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:46.186162 1131323 cri.go:89] found id: ""
	I0328 01:06:46.186197 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.186207 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:46.186213 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:46.186296 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:46.230412 1131323 cri.go:89] found id: ""
	I0328 01:06:46.230450 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.230464 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:46.230473 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:46.230552 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:46.266000 1131323 cri.go:89] found id: ""
	I0328 01:06:46.266037 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.266050 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:46.266059 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:46.266126 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:46.301031 1131323 cri.go:89] found id: ""
	I0328 01:06:46.301065 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.301077 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:46.301084 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:46.301155 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:46.339222 1131323 cri.go:89] found id: ""
	I0328 01:06:46.339248 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.339258 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:46.339271 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:46.339290 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:46.352558 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:46.352595 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:46.427283 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:46.427308 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:46.427325 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:46.512134 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:46.512178 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:46.558276 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:46.558307 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:49.113455 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:49.127554 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:49.127645 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:49.169380 1131323 cri.go:89] found id: ""
	I0328 01:06:49.169421 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.169435 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:49.169444 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:49.169511 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:49.204540 1131323 cri.go:89] found id: ""
	I0328 01:06:49.204568 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.204579 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:49.204596 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:49.204664 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:49.243074 1131323 cri.go:89] found id: ""
	I0328 01:06:49.243102 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.243112 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:49.243119 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:49.243170 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:49.281264 1131323 cri.go:89] found id: ""
	I0328 01:06:49.281301 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.281314 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:49.281322 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:49.281391 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:49.320473 1131323 cri.go:89] found id: ""
	I0328 01:06:49.320505 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.320514 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:49.320521 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:49.320592 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:49.357715 1131323 cri.go:89] found id: ""
	I0328 01:06:49.357749 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.357759 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:49.357766 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:49.357823 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:49.398427 1131323 cri.go:89] found id: ""
	I0328 01:06:49.398464 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.398477 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:49.398498 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:49.398576 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:49.439921 1131323 cri.go:89] found id: ""
	I0328 01:06:49.439956 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.439969 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:49.439982 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:49.440003 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:49.557260 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:49.557289 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:49.557312 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:49.640105 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:49.640169 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:49.683153 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:49.683185 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:49.737420 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:49.737463 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:52.253208 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:52.268572 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:52.268649 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:52.305136 1131323 cri.go:89] found id: ""
	I0328 01:06:52.305180 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.305193 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:52.305202 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:52.305273 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:52.344774 1131323 cri.go:89] found id: ""
	I0328 01:06:52.344806 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.344816 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:52.344823 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:52.344885 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:52.382127 1131323 cri.go:89] found id: ""
	I0328 01:06:52.382174 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.382185 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:52.382200 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:52.382280 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:52.421340 1131323 cri.go:89] found id: ""
	I0328 01:06:52.421368 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.421377 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:52.421383 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:52.421433 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:52.460046 1131323 cri.go:89] found id: ""
	I0328 01:06:52.460084 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.460100 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:52.460107 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:52.460164 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:52.500067 1131323 cri.go:89] found id: ""
	I0328 01:06:52.500094 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.500102 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:52.500109 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:52.500171 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:52.537614 1131323 cri.go:89] found id: ""
	I0328 01:06:52.537646 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.537671 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:52.537680 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:52.537745 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:52.577362 1131323 cri.go:89] found id: ""
	I0328 01:06:52.577392 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.577402 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:52.577417 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:52.577434 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:52.633638 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:52.633689 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:52.650762 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:52.650796 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:52.729436 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:52.729470 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:52.729484 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:52.818193 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:52.818248 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:55.362950 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:55.378461 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:55.378577 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:55.419968 1131323 cri.go:89] found id: ""
	I0328 01:06:55.419995 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.420005 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:55.420010 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:55.420072 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:55.464308 1131323 cri.go:89] found id: ""
	I0328 01:06:55.464341 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.464350 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:55.464357 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:55.464421 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:55.523059 1131323 cri.go:89] found id: ""
	I0328 01:06:55.523092 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.523106 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:55.523114 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:55.523186 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:55.570957 1131323 cri.go:89] found id: ""
	I0328 01:06:55.570990 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.571004 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:55.571013 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:55.571077 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:55.606712 1131323 cri.go:89] found id: ""
	I0328 01:06:55.606739 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.606749 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:55.606755 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:55.606817 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:55.646445 1131323 cri.go:89] found id: ""
	I0328 01:06:55.646477 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.646486 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:55.646493 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:55.646548 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:55.685176 1131323 cri.go:89] found id: ""
	I0328 01:06:55.685208 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.685217 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:55.685225 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:55.685289 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:55.722948 1131323 cri.go:89] found id: ""
	I0328 01:06:55.722984 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.722995 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:55.723006 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:55.723022 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:55.797332 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:55.797368 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:55.797385 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:55.877648 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:55.877688 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:55.918966 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:55.918997 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:55.971226 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:55.971272 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:58.488464 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:58.504999 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:58.505088 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:58.549290 1131323 cri.go:89] found id: ""
	I0328 01:06:58.549325 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.549338 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:58.549347 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:58.549414 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:58.589222 1131323 cri.go:89] found id: ""
	I0328 01:06:58.589252 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.589261 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:58.589271 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:58.589337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:58.626470 1131323 cri.go:89] found id: ""
	I0328 01:06:58.626499 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.626508 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:58.626514 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:58.626578 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:58.671634 1131323 cri.go:89] found id: ""
	I0328 01:06:58.671663 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.671674 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:58.671683 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:58.671744 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:58.707335 1131323 cri.go:89] found id: ""
	I0328 01:06:58.707370 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.707381 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:58.707390 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:58.707459 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:58.745635 1131323 cri.go:89] found id: ""
	I0328 01:06:58.745666 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.745679 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:58.745687 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:58.745752 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:58.792172 1131323 cri.go:89] found id: ""
	I0328 01:06:58.792205 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.792216 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:58.792225 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:58.792287 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:58.840027 1131323 cri.go:89] found id: ""
	I0328 01:06:58.840063 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.840075 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:58.840089 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:58.840108 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:58.921964 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:58.921988 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:58.922003 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:59.016935 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:59.016980 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:59.065747 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:59.065788 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:59.119189 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:59.119231 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:07:01.637081 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:07:01.652557 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:07:01.652634 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:07:01.691795 1131323 cri.go:89] found id: ""
	I0328 01:07:01.691832 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.691846 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:07:01.691854 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:07:01.691927 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:07:01.732815 1131323 cri.go:89] found id: ""
	I0328 01:07:01.732850 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.732861 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:07:01.732868 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:07:01.732938 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:07:01.776370 1131323 cri.go:89] found id: ""
	I0328 01:07:01.776408 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.776422 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:07:01.776431 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:07:01.776501 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:07:01.821260 1131323 cri.go:89] found id: ""
	I0328 01:07:01.821290 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.821301 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:07:01.821308 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:07:01.821377 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:07:01.860666 1131323 cri.go:89] found id: ""
	I0328 01:07:01.860696 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.860708 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:07:01.860719 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:07:01.860787 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:07:01.898255 1131323 cri.go:89] found id: ""
	I0328 01:07:01.898291 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.898304 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:07:01.898314 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:07:01.898383 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:07:01.937770 1131323 cri.go:89] found id: ""
	I0328 01:07:01.937809 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.937822 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:07:01.937830 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:07:01.937901 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:07:01.976946 1131323 cri.go:89] found id: ""
	I0328 01:07:01.976981 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.976994 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:07:01.977008 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:07:01.977027 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:07:02.062804 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:07:02.062845 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:07:02.110750 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:07:02.110783 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:07:02.179633 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:07:02.179677 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:07:02.203131 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:07:02.203181 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:07:02.303281 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:07:04.804238 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:07:04.819654 1131323 kubeadm.go:591] duration metric: took 4m2.527630194s to restartPrimaryControlPlane
	W0328 01:07:04.819747 1131323 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:07:04.819787 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:07:07.322821 1131323 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.50300166s)
	I0328 01:07:07.322918 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:07:07.338692 1131323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:07:07.349812 1131323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:07:07.361566 1131323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:07:07.361597 1131323 kubeadm.go:156] found existing configuration files:
	
	I0328 01:07:07.361667 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:07:07.372926 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:07:07.373008 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:07:07.383770 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:07:07.394260 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:07:07.394332 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:07:07.405874 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:07:07.417177 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:07:07.417254 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:07:07.428589 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:07:07.438788 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:07:07.438845 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
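	(The stale-config check above greps each kubeconfig under /etc/kubernetes for the control-plane endpoint and removes any file where the grep fails; in this run the exit status 2 simply reflects that the files do not exist. A small sketch of the same cleanup under the assumption of direct shell access; the paths and endpoint are taken from the log, the function name is illustrative.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	// cleanupStaleKubeconfig keeps the file only if it references the expected
	// control-plane endpoint; otherwise it is removed, matching the log above.
	// Note that grep exits non-zero both when the pattern is absent and when the
	// file is missing, so a missing file is also treated as "will remove".
	func cleanupStaleKubeconfig(path, endpoint string) error {
		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
			return exec.Command("sudo", "rm", "-f", path).Run()
		}
		return nil
	}

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			if err := cleanupStaleKubeconfig(f, endpoint); err != nil {
				fmt.Printf("cleanup of %s failed: %v\n", f, err)
			}
		}
	}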
	I0328 01:07:07.449649 1131323 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:07:07.533886 1131323 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0328 01:07:07.533989 1131323 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:07:07.693599 1131323 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:07:07.693736 1131323 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:07:07.693852 1131323 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:07:07.910557 1131323 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:07:07.912634 1131323 out.go:204]   - Generating certificates and keys ...
	I0328 01:07:07.912743 1131323 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:07:07.912855 1131323 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:07:07.912984 1131323 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:07:07.913098 1131323 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:07:07.913212 1131323 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:07:07.913298 1131323 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:07:07.913384 1131323 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:07:07.913569 1131323 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:07:07.913947 1131323 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:07:07.914429 1131323 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:07:07.914649 1131323 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:07:07.914728 1131323 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:07:08.225778 1131323 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:07:08.353927 1131323 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:07:08.631240 1131323 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:07:08.824445 1131323 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:07:08.840240 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:07:08.841200 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:07:08.841315 1131323 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:07:08.997129 1131323 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:07:08.999073 1131323 out.go:204]   - Booting up control plane ...
	I0328 01:07:08.999224 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:07:09.014811 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:07:09.015898 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:07:09.016727 1131323 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:07:09.019426 1131323 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:07:49.021732 1131323 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0328 01:07:49.021890 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:07:49.022195 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:07:54.022449 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:07:54.022704 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:08:04.023273 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:08:04.023535 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:08:24.024091 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:08:24.024388 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:09:04.025791 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:09:04.026055 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:09:04.026065 1131323 kubeadm.go:309] 
	I0328 01:09:04.026124 1131323 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0328 01:09:04.026172 1131323 kubeadm.go:309] 		timed out waiting for the condition
	I0328 01:09:04.026181 1131323 kubeadm.go:309] 
	I0328 01:09:04.026221 1131323 kubeadm.go:309] 	This error is likely caused by:
	I0328 01:09:04.026279 1131323 kubeadm.go:309] 		- The kubelet is not running
	I0328 01:09:04.026401 1131323 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0328 01:09:04.026411 1131323 kubeadm.go:309] 
	I0328 01:09:04.026529 1131323 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0328 01:09:04.026586 1131323 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0328 01:09:04.026632 1131323 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0328 01:09:04.026640 1131323 kubeadm.go:309] 
	I0328 01:09:04.026758 1131323 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0328 01:09:04.026884 1131323 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0328 01:09:04.026902 1131323 kubeadm.go:309] 
	I0328 01:09:04.027061 1131323 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0328 01:09:04.027222 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0328 01:09:04.027335 1131323 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0328 01:09:04.027429 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0328 01:09:04.027537 1131323 kubeadm.go:309] 
	I0328 01:09:04.029027 1131323 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:09:04.029164 1131323 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0328 01:09:04.029284 1131323 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0328 01:09:04.029477 1131323 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0328 01:09:04.029545 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:09:04.543275 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:09:04.562572 1131323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:09:04.577013 1131323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:09:04.577040 1131323 kubeadm.go:156] found existing configuration files:
	
	I0328 01:09:04.577102 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:09:04.590795 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:09:04.590885 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:09:04.604227 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:09:04.616720 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:09:04.616818 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:09:04.630095 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:09:04.643166 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:09:04.643259 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:09:04.658084 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:09:04.671786 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:09:04.671874 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:09:04.685852 1131323 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:09:04.779013 1131323 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0328 01:09:04.779113 1131323 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:09:04.964178 1131323 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:09:04.964317 1131323 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:09:04.964463 1131323 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:09:05.181712 1131323 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:09:05.183644 1131323 out.go:204]   - Generating certificates and keys ...
	I0328 01:09:05.183759 1131323 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:09:05.183851 1131323 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:09:05.183962 1131323 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:09:05.184042 1131323 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:09:05.184156 1131323 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:09:05.184244 1131323 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:09:05.184337 1131323 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:09:05.184424 1131323 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:09:05.184535 1131323 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:09:05.184633 1131323 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:09:05.184683 1131323 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:09:05.184758 1131323 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:09:05.587190 1131323 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:09:05.923219 1131323 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:09:06.087945 1131323 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:09:06.245638 1131323 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:09:06.266195 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:09:06.267461 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:09:06.267551 1131323 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:09:06.434155 1131323 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:09:06.436300 1131323 out.go:204]   - Booting up control plane ...
	I0328 01:09:06.436447 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:09:06.446573 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:09:06.447461 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:09:06.448313 1131323 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:09:06.450917 1131323 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:09:46.453199 1131323 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0328 01:09:46.453386 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:09:46.453643 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:09:51.454402 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:09:51.454665 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:10:01.455189 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:10:01.455417 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:10:21.456491 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:10:21.456726 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:11:01.456972 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:11:01.457256 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:11:01.457269 1131323 kubeadm.go:309] 
	I0328 01:11:01.457310 1131323 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0328 01:11:01.457404 1131323 kubeadm.go:309] 		timed out waiting for the condition
	I0328 01:11:01.457441 1131323 kubeadm.go:309] 
	I0328 01:11:01.457492 1131323 kubeadm.go:309] 	This error is likely caused by:
	I0328 01:11:01.457550 1131323 kubeadm.go:309] 		- The kubelet is not running
	I0328 01:11:01.457696 1131323 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0328 01:11:01.457708 1131323 kubeadm.go:309] 
	I0328 01:11:01.457856 1131323 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0328 01:11:01.457906 1131323 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0328 01:11:01.457935 1131323 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0328 01:11:01.457943 1131323 kubeadm.go:309] 
	I0328 01:11:01.458033 1131323 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0328 01:11:01.458139 1131323 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0328 01:11:01.458155 1131323 kubeadm.go:309] 
	I0328 01:11:01.458331 1131323 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0328 01:11:01.458483 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0328 01:11:01.458594 1131323 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0328 01:11:01.458707 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0328 01:11:01.458718 1131323 kubeadm.go:309] 
	I0328 01:11:01.459597 1131323 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:11:01.459737 1131323 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0328 01:11:01.459822 1131323 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0328 01:11:01.459962 1131323 kubeadm.go:393] duration metric: took 7m59.227261729s to StartCluster
	I0328 01:11:01.460023 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:11:01.460167 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:11:01.522644 1131323 cri.go:89] found id: ""
	I0328 01:11:01.522687 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.522700 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:11:01.522710 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:11:01.522782 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:11:01.567898 1131323 cri.go:89] found id: ""
	I0328 01:11:01.567928 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.567937 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:11:01.567945 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:11:01.568005 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:11:01.604782 1131323 cri.go:89] found id: ""
	I0328 01:11:01.604810 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.604819 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:11:01.604825 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:11:01.604935 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:11:01.642875 1131323 cri.go:89] found id: ""
	I0328 01:11:01.642908 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.642920 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:11:01.642929 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:11:01.642993 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:11:01.682186 1131323 cri.go:89] found id: ""
	I0328 01:11:01.682216 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.682223 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:11:01.682241 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:11:01.682312 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:11:01.720654 1131323 cri.go:89] found id: ""
	I0328 01:11:01.720689 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.720697 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:11:01.720704 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:11:01.720759 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:11:01.757340 1131323 cri.go:89] found id: ""
	I0328 01:11:01.757372 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.757383 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:11:01.757392 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:11:01.757462 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:11:01.797426 1131323 cri.go:89] found id: ""
	I0328 01:11:01.797462 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.797473 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:11:01.797488 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:11:01.797506 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:11:01.859582 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:11:01.859623 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:11:01.876027 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:11:01.876073 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:11:01.966513 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:11:01.966539 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:11:01.966557 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:11:02.084853 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:11:02.084894 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0328 01:11:02.127221 1131323 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0328 01:11:02.127288 1131323 out.go:239] * 
	* 
	W0328 01:11:02.127417 1131323 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0328 01:11:02.127456 1131323 out.go:239] * 
	* 
	W0328 01:11:02.128313 1131323 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 01:11:02.131916 1131323 out.go:177] 
	W0328 01:11:02.133288 1131323 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0328 01:11:02.133351 1131323 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0328 01:11:02.133381 1131323 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0328 01:11:02.134991 1131323 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-986088 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
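The retries above fail the same way each time: kubeadm's wait-control-plane phase times out because the kubelet never answers on http://localhost:10248/healthz, and minikube exits with K8S_KUBELET_NOT_RUNNING plus the cgroup-driver suggestion. As a minimal sketch, assuming the same binary, profile name, and flags recorded in this run, the suggested retry would look like:

	out/minikube-linux-amd64 delete -p old-k8s-version-986088
	out/minikube-linux-amd64 start -p old-k8s-version-986088 --memory=2200 --alsologtostderr --wait=true \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

Whether this clears the timeout depends on the actual cgroup configuration of the guest; the related issue linked in the log (kubernetes/minikube#4172) tracks the same symptom.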
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-986088 -n old-k8s-version-986088
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-986088 -n old-k8s-version-986088: exit status 2 (273.763794ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
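Since the host still reports Running, the kubeadm troubleshooting advice captured above can also be followed by hand before reading the automated post-mortem below. A hypothetical check against this profile, which only restates the commands the kubeadm output already recommends but runs them through minikube ssh, would be:

	out/minikube-linux-amd64 -p old-k8s-version-986088 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-986088 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	out/minikube-linux-amd64 -p old-k8s-version-986088 ssh "sudo crictl ps -a | grep kube | grep -v pause"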
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-986088 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-986088 logs -n 25: (1.632784576s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| stop    | -p no-preload-248059                                   | no-preload-248059            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-808809            | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-808809                                  | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p newest-cni-013642             | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p newest-cni-013642                  | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p newest-cni-013642 --memory=2200 --alsologtostderr   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:56 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |                |                     |                     |
	| image   | newest-cni-013642 image list                           | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	|         | --format=json                                          |                              |         |                |                     |                     |
	| pause   | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| unpause | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| delete  | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	| delete  | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	| start   | -p                                                     | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:57 UTC |
	|         | default-k8s-diff-port-283961                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-986088        | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-248059                  | no-preload-248059            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-283961  | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC | 28 Mar 24 00:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | default-k8s-diff-port-283961                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| start   | -p no-preload-248059 --memory=2200                     | no-preload-248059            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC | 28 Mar 24 01:09 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-808809                 | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-808809                                  | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC | 28 Mar 24 01:07 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-986088                              | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC | 28 Mar 24 00:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-986088             | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC | 28 Mar 24 00:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-986088                              | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-283961       | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 01:00 UTC | 28 Mar 24 01:08 UTC |
	|         | default-k8s-diff-port-283961                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 01:00:05
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 01:00:05.675380 1131600 out.go:291] Setting OutFile to fd 1 ...
	I0328 01:00:05.675675 1131600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 01:00:05.675710 1131600 out.go:304] Setting ErrFile to fd 2...
	I0328 01:00:05.675718 1131600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 01:00:05.676017 1131600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 01:00:05.676919 1131600 out.go:298] Setting JSON to false
	I0328 01:00:05.678046 1131600 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":31303,"bootTime":1711556303,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0328 01:00:05.678129 1131600 start.go:139] virtualization: kvm guest
	I0328 01:00:05.681128 1131600 out.go:177] * [default-k8s-diff-port-283961] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0328 01:00:05.683139 1131600 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 01:00:05.683129 1131600 notify.go:220] Checking for updates...
	I0328 01:00:05.685082 1131600 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 01:00:05.686765 1131600 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:00:05.688389 1131600 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	I0328 01:00:05.690187 1131600 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0328 01:00:05.691887 1131600 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 01:00:05.693775 1131600 config.go:182] Loaded profile config "default-k8s-diff-port-283961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:00:05.694270 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:00:05.694323 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:00:05.709757 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35333
	I0328 01:00:05.710275 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:00:05.710875 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:00:05.710900 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:00:05.711323 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:00:05.711531 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:00:05.711893 1131600 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 01:00:05.712342 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:00:05.712392 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:00:05.727583 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44249
	I0328 01:00:05.728107 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:00:05.728595 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:00:05.728625 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:00:05.728945 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:00:05.729170 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:00:05.763895 1131600 out.go:177] * Using the kvm2 driver based on existing profile
	I0328 01:00:05.765397 1131600 start.go:297] selected driver: kvm2
	I0328 01:00:05.765431 1131600 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:00:05.765564 1131600 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 01:00:05.766282 1131600 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 01:00:05.766391 1131600 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18485-1069254/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0328 01:00:05.783130 1131600 install.go:137] /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0328 01:00:05.783602 1131600 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:00:05.783724 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:00:05.783745 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:00:05.783795 1131600 start.go:340] cluster config:
	{Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:00:05.783949 1131600 iso.go:125] acquiring lock: {Name:mk3da1fa7d63a581c817f327d86a827697458cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 01:00:05.785871 1131600 out.go:177] * Starting "default-k8s-diff-port-283961" primary control-plane node in "default-k8s-diff-port-283961" cluster
	I0328 01:00:02.570474 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:05.787210 1131600 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 01:00:05.787259 1131600 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0328 01:00:05.787272 1131600 cache.go:56] Caching tarball of preloaded images
	I0328 01:00:05.787364 1131600 preload.go:173] Found /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0328 01:00:05.787376 1131600 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0328 01:00:05.787509 1131600 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/config.json ...
	I0328 01:00:05.787742 1131600 start.go:360] acquireMachinesLock for default-k8s-diff-port-283961: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 01:00:08.650481 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:11.722571 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:17.802536 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:20.874568 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:26.954473 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:30.026674 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:36.106489 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:39.178555 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:45.258539 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:48.330581 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:54.410577 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:57.482545 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:03.562558 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:06.634602 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:12.714559 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:15.786597 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:21.866544 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:24.938619 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:31.018631 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:34.090562 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:40.170864 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:43.242565 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:49.322492 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:52.394572 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:58.474562 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:01.546621 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:07.626510 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:10.698534 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:13.703348 1130949 start.go:364] duration metric: took 4m25.677777198s to acquireMachinesLock for "embed-certs-808809"
	I0328 01:02:13.703416 1130949 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:02:13.703429 1130949 fix.go:54] fixHost starting: 
	I0328 01:02:13.703888 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:02:13.703923 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:02:13.719480 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44835
	I0328 01:02:13.719968 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:02:13.720450 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:02:13.720475 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:02:13.720774 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:02:13.721011 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:13.721182 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:02:13.722796 1130949 fix.go:112] recreateIfNeeded on embed-certs-808809: state=Stopped err=<nil>
	I0328 01:02:13.722828 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	W0328 01:02:13.722972 1130949 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:02:13.724895 1130949 out.go:177] * Restarting existing kvm2 VM for "embed-certs-808809" ...
	I0328 01:02:13.700647 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:02:13.700689 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:02:13.701054 1130827 buildroot.go:166] provisioning hostname "no-preload-248059"
	I0328 01:02:13.701085 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:02:13.701344 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:02:13.703200 1130827 machine.go:97] duration metric: took 4m37.399616994s to provisionDockerMachine
	I0328 01:02:13.703243 1130827 fix.go:56] duration metric: took 4m37.42352766s for fixHost
	I0328 01:02:13.703249 1130827 start.go:83] releasing machines lock for "no-preload-248059", held for 4m37.423563163s
	W0328 01:02:13.703274 1130827 start.go:713] error starting host: provision: host is not running
	W0328 01:02:13.703400 1130827 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0328 01:02:13.703411 1130827 start.go:728] Will try again in 5 seconds ...
	I0328 01:02:13.726437 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Start
	I0328 01:02:13.726574 1130949 main.go:141] libmachine: (embed-certs-808809) Ensuring networks are active...
	I0328 01:02:13.727407 1130949 main.go:141] libmachine: (embed-certs-808809) Ensuring network default is active
	I0328 01:02:13.727667 1130949 main.go:141] libmachine: (embed-certs-808809) Ensuring network mk-embed-certs-808809 is active
	I0328 01:02:13.728050 1130949 main.go:141] libmachine: (embed-certs-808809) Getting domain xml...
	I0328 01:02:13.728836 1130949 main.go:141] libmachine: (embed-certs-808809) Creating domain...
	I0328 01:02:14.931757 1130949 main.go:141] libmachine: (embed-certs-808809) Waiting to get IP...
	I0328 01:02:14.932921 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:14.933298 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:14.933396 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:14.933294 1131950 retry.go:31] will retry after 279.257708ms: waiting for machine to come up
	I0328 01:02:15.213830 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:15.214439 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:15.214472 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:15.214415 1131950 retry.go:31] will retry after 387.406107ms: waiting for machine to come up
	I0328 01:02:15.603078 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:15.603464 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:15.603497 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:15.603431 1131950 retry.go:31] will retry after 466.553599ms: waiting for machine to come up
	I0328 01:02:16.072165 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:16.072702 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:16.072732 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:16.072643 1131950 retry.go:31] will retry after 375.428381ms: waiting for machine to come up
	I0328 01:02:16.449155 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:16.449614 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:16.449652 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:16.449553 1131950 retry.go:31] will retry after 466.238903ms: waiting for machine to come up
	I0328 01:02:16.917246 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:16.917697 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:16.917723 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:16.917633 1131950 retry.go:31] will retry after 772.819544ms: waiting for machine to come up
	I0328 01:02:17.691645 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:17.692121 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:17.692151 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:17.692071 1131950 retry.go:31] will retry after 1.19065976s: waiting for machine to come up
	I0328 01:02:18.704949 1130827 start.go:360] acquireMachinesLock for no-preload-248059: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 01:02:18.884525 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:18.885019 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:18.885044 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:18.884980 1131950 retry.go:31] will retry after 1.434726863s: waiting for machine to come up
	I0328 01:02:20.321473 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:20.322009 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:20.322035 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:20.321951 1131950 retry.go:31] will retry after 1.275277555s: waiting for machine to come up
	I0328 01:02:21.599454 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:21.600049 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:21.600074 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:21.599982 1131950 retry.go:31] will retry after 1.852516502s: waiting for machine to come up
	I0328 01:02:23.455282 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:23.455760 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:23.455830 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:23.455746 1131950 retry.go:31] will retry after 2.056736141s: waiting for machine to come up
	I0328 01:02:25.514112 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:25.514538 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:25.514569 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:25.514492 1131950 retry.go:31] will retry after 2.711520437s: waiting for machine to come up
	I0328 01:02:32.751719 1131323 start.go:364] duration metric: took 3m27.302408957s to acquireMachinesLock for "old-k8s-version-986088"
	I0328 01:02:32.751823 1131323 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:02:32.751833 1131323 fix.go:54] fixHost starting: 
	I0328 01:02:32.752289 1131323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:02:32.752326 1131323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:02:32.770119 1131323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45807
	I0328 01:02:32.770723 1131323 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:02:32.771352 1131323 main.go:141] libmachine: Using API Version  1
	I0328 01:02:32.771380 1131323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:02:32.771790 1131323 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:02:32.772020 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:32.772206 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetState
	I0328 01:02:32.773947 1131323 fix.go:112] recreateIfNeeded on old-k8s-version-986088: state=Stopped err=<nil>
	I0328 01:02:32.773980 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	W0328 01:02:32.774166 1131323 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:02:32.776416 1131323 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-986088" ...
	I0328 01:02:28.229576 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:28.229970 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:28.230000 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:28.229920 1131950 retry.go:31] will retry after 3.231405371s: waiting for machine to come up
	I0328 01:02:31.463477 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.463884 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has current primary IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.463902 1130949 main.go:141] libmachine: (embed-certs-808809) Found IP for machine: 192.168.72.210
	I0328 01:02:31.463915 1130949 main.go:141] libmachine: (embed-certs-808809) Reserving static IP address...
	I0328 01:02:31.464394 1130949 main.go:141] libmachine: (embed-certs-808809) Reserved static IP address: 192.168.72.210
	I0328 01:02:31.464413 1130949 main.go:141] libmachine: (embed-certs-808809) Waiting for SSH to be available...
	I0328 01:02:31.464439 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "embed-certs-808809", mac: "52:54:00:60:d4:d2", ip: "192.168.72.210"} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.464464 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | skip adding static IP to network mk-embed-certs-808809 - found existing host DHCP lease matching {name: "embed-certs-808809", mac: "52:54:00:60:d4:d2", ip: "192.168.72.210"}
	I0328 01:02:31.464480 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Getting to WaitForSSH function...
	I0328 01:02:31.466488 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.466876 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.466916 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.467054 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Using SSH client type: external
	I0328 01:02:31.467085 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa (-rw-------)
	I0328 01:02:31.467124 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:02:31.467138 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | About to run SSH command:
	I0328 01:02:31.467155 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | exit 0
	I0328 01:02:31.590708 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | SSH cmd err, output: <nil>: 
	I0328 01:02:31.591111 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetConfigRaw
	I0328 01:02:31.591959 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:31.594592 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.595075 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.595114 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.595364 1130949 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/config.json ...
	I0328 01:02:31.595634 1130949 machine.go:94] provisionDockerMachine start ...
	I0328 01:02:31.595656 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:31.595901 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.598184 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.598529 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.598556 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.598681 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:31.598851 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.599012 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.599163 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:31.599333 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:31.599604 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:31.599619 1130949 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:02:31.703241 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:02:31.703272 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetMachineName
	I0328 01:02:31.703575 1130949 buildroot.go:166] provisioning hostname "embed-certs-808809"
	I0328 01:02:31.703602 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetMachineName
	I0328 01:02:31.703779 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.706495 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.706777 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.706799 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.706978 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:31.707146 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.707334 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.707580 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:31.707765 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:31.707985 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:31.708004 1130949 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-808809 && echo "embed-certs-808809" | sudo tee /etc/hostname
	I0328 01:02:31.821578 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-808809
	
	I0328 01:02:31.821608 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.824412 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.824791 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.824825 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.825030 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:31.825253 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.825432 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.825589 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:31.825758 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:31.825950 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:31.825976 1130949 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-808809' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-808809/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-808809' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:02:31.937655 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:02:31.937701 1130949 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:02:31.937728 1130949 buildroot.go:174] setting up certificates
	I0328 01:02:31.937742 1130949 provision.go:84] configureAuth start
	I0328 01:02:31.937754 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetMachineName
	I0328 01:02:31.938093 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:31.940874 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.941328 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.941360 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.941580 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.944250 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.944580 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.944610 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.944828 1130949 provision.go:143] copyHostCerts
	I0328 01:02:31.944910 1130949 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:02:31.944926 1130949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:02:31.945006 1130949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:02:31.945151 1130949 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:02:31.945162 1130949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:02:31.945205 1130949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:02:31.945285 1130949 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:02:31.945294 1130949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:02:31.945330 1130949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:02:31.945400 1130949 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.embed-certs-808809 san=[127.0.0.1 192.168.72.210 embed-certs-808809 localhost minikube]
	I0328 01:02:32.070925 1130949 provision.go:177] copyRemoteCerts
	I0328 01:02:32.071007 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:02:32.071067 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.073876 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.074295 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.074339 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.074541 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.074754 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.074931 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.075091 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.158945 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:02:32.184903 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0328 01:02:32.210411 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:02:32.235788 1130949 provision.go:87] duration metric: took 298.03126ms to configureAuth
	I0328 01:02:32.235827 1130949 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:02:32.236116 1130949 config.go:182] Loaded profile config "embed-certs-808809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:02:32.236336 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.239186 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.239520 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.239555 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.239782 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.240036 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.240257 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.240431 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.240633 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:32.240836 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:32.240862 1130949 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:02:32.513263 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:02:32.513298 1130949 machine.go:97] duration metric: took 917.647337ms to provisionDockerMachine
	I0328 01:02:32.513314 1130949 start.go:293] postStartSetup for "embed-certs-808809" (driver="kvm2")
	I0328 01:02:32.513326 1130949 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:02:32.513365 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.513727 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:02:32.513770 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.516906 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.517382 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.517425 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.517603 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.517831 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.517989 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.518115 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.600013 1130949 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:02:32.604953 1130949 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:02:32.604983 1130949 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:02:32.605057 1130949 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:02:32.605148 1130949 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:02:32.605265 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:02:32.617685 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:02:32.646415 1130949 start.go:296] duration metric: took 133.084551ms for postStartSetup
	I0328 01:02:32.646462 1130949 fix.go:56] duration metric: took 18.943034019s for fixHost
	I0328 01:02:32.646490 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.649346 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.649686 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.649717 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.649864 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.650191 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.650444 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.650637 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.650844 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:32.651036 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:32.651069 1130949 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 01:02:32.751522 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587752.718800758
	
	I0328 01:02:32.751547 1130949 fix.go:216] guest clock: 1711587752.718800758
	I0328 01:02:32.751556 1130949 fix.go:229] Guest: 2024-03-28 01:02:32.718800758 +0000 UTC Remote: 2024-03-28 01:02:32.646466137 +0000 UTC m=+284.780134501 (delta=72.334621ms)
	I0328 01:02:32.751598 1130949 fix.go:200] guest clock delta is within tolerance: 72.334621ms
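The fix.go lines above compare the guest's `date +%s.%N` output against the host-side timestamp and accept the skew if it falls inside a tolerance. A small sketch of that comparison, assuming a one-second tolerance (the real threshold is not shown in the log):

```go
// Hedged sketch of the guest-clock check above: parse the guest's
// `date +%s.%N` output and compare it against the host-side timestamp.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	// Values copied from the fix.go lines above.
	guestOut := "1711587752.718800758"
	remote := time.Date(2024, 3, 28, 1, 2, 32, 646466137, time.UTC)

	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))

	delta := guest.Sub(remote)
	const tolerance = time.Second // assumed threshold, not taken from the log
	if math.Abs(delta.Seconds()) <= tolerance.Seconds() {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta.Round(time.Microsecond))
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta.Round(time.Microsecond))
	}
}
```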
	I0328 01:02:32.751610 1130949 start.go:83] releasing machines lock for "embed-certs-808809", held for 19.048217918s
	I0328 01:02:32.751638 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.751953 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:32.754795 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.755205 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.755240 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.755454 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.756111 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.756320 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.756412 1130949 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:02:32.756475 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.756612 1130949 ssh_runner.go:195] Run: cat /version.json
	I0328 01:02:32.756646 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.759337 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.759468 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.759788 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.759808 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.759845 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.759866 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.760009 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.760018 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.760214 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.760222 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.760364 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.760532 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.760639 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.760698 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.840137 1130949 ssh_runner.go:195] Run: systemctl --version
	I0328 01:02:32.874039 1130949 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:02:33.020534 1130949 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:02:33.027141 1130949 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:02:33.027213 1130949 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:02:33.043738 1130949 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:02:33.043767 1130949 start.go:494] detecting cgroup driver to use...
	I0328 01:02:33.043840 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:02:33.064332 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:02:33.081926 1130949 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:02:33.082016 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:02:33.097179 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:02:33.113157 1130949 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:02:33.233183 1130949 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:02:33.374061 1130949 docker.go:233] disabling docker service ...
	I0328 01:02:33.374145 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:02:33.389813 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:02:33.403439 1130949 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:02:33.546146 1130949 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:02:33.706968 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:02:33.722279 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:02:33.742578 1130949 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 01:02:33.742652 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.754966 1130949 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:02:33.755027 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.767170 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.779960 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.792448 1130949 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:02:33.804912 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.818038 1130949 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.838794 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
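The sed runs above patch /etc/crio/crio.conf.d/02-crio.conf in place: the pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. A Go sketch of the pause-image edit as a representative example (a whole-line regex replacement mirroring the sed expression):

```go
// Sketch of the pause_image edit above done in Go instead of sed: rewrite any
// existing `pause_image = ...` line in the CRI-O drop-in configuration file.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```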
	I0328 01:02:33.852157 1130949 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:02:33.862921 1130949 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:02:33.862981 1130949 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:02:33.880973 1130949 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
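Above, the sysctl probe for bridge netfilter fails because the module is not loaded yet, so the flow falls back to modprobe and then enables IPv4 forwarding. A sketch of that check-then-fallback sequence, shelling out to the same commands:

```go
// Sketch of the fallback above: probe the bridge-netfilter sysctl, load
// br_netfilter if the key is missing, then enable IPv4 forwarding.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		// A missing /proc/sys/net/bridge/... entry usually just means the
		// module is not loaded yet, which is why the log calls it "might be okay".
		_ = run("sudo", "modprobe", "br_netfilter")
	}
	_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
}
```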
	I0328 01:02:33.892698 1130949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:02:34.029903 1130949 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 01:02:34.170977 1130949 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:02:34.171074 1130949 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:02:34.176652 1130949 start.go:562] Will wait 60s for crictl version
	I0328 01:02:34.176736 1130949 ssh_runner.go:195] Run: which crictl
	I0328 01:02:34.180993 1130949 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:02:34.224564 1130949 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:02:34.224675 1130949 ssh_runner.go:195] Run: crio --version
	I0328 01:02:34.254457 1130949 ssh_runner.go:195] Run: crio --version
	I0328 01:02:34.287281 1130949 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0328 01:02:32.778280 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .Start
	I0328 01:02:32.778470 1131323 main.go:141] libmachine: (old-k8s-version-986088) Ensuring networks are active...
	I0328 01:02:32.779179 1131323 main.go:141] libmachine: (old-k8s-version-986088) Ensuring network default is active
	I0328 01:02:32.779577 1131323 main.go:141] libmachine: (old-k8s-version-986088) Ensuring network mk-old-k8s-version-986088 is active
	I0328 01:02:32.779982 1131323 main.go:141] libmachine: (old-k8s-version-986088) Getting domain xml...
	I0328 01:02:32.780732 1131323 main.go:141] libmachine: (old-k8s-version-986088) Creating domain...
	I0328 01:02:34.066287 1131323 main.go:141] libmachine: (old-k8s-version-986088) Waiting to get IP...
	I0328 01:02:34.067193 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.067618 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.067684 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.067586 1132067 retry.go:31] will retry after 291.270379ms: waiting for machine to come up
	I0328 01:02:34.360203 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.360690 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.360721 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.360638 1132067 retry.go:31] will retry after 234.968456ms: waiting for machine to come up
	I0328 01:02:34.597291 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.597818 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.597849 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.597750 1132067 retry.go:31] will retry after 382.522593ms: waiting for machine to come up
	I0328 01:02:34.982502 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.983176 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.983205 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.983133 1132067 retry.go:31] will retry after 436.332635ms: waiting for machine to come up
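The retry.go lines above poll libvirt for the old-k8s-version domain's DHCP lease with a growing, jittered delay. A rough sketch of that loop; `lookupIP` is a hypothetical stand-in for the lease lookup and the address it returns is a placeholder, not a value from this run:

```go
// Rough sketch of the "waiting for machine to come up" retry loop above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for the libvirt DHCP-lease lookup.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address")
	}
	return "203.0.113.10", nil // placeholder address, not from this run
}

func main() {
	delay := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Grow the delay and add jitter, roughly matching the logged waits.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay += delay / 2
	}
}
```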
	I0328 01:02:34.288748 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:34.292122 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:34.292516 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:34.292556 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:34.292869 1130949 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0328 01:02:34.298738 1130949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:02:34.313529 1130949 kubeadm.go:877] updating cluster {Name:embed-certs-808809 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-808809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:02:34.313698 1130949 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 01:02:34.313762 1130949 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:02:34.356518 1130949 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0328 01:02:34.356614 1130949 ssh_runner.go:195] Run: which lz4
	I0328 01:02:34.361492 1130949 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0328 01:02:34.366053 1130949 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 01:02:34.366090 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0328 01:02:36.024197 1130949 crio.go:462] duration metric: took 1.662731937s to copy over tarball
	I0328 01:02:36.024287 1130949 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 01:02:35.421623 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:35.422164 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:35.422198 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:35.422135 1132067 retry.go:31] will retry after 700.861268ms: waiting for machine to come up
	I0328 01:02:36.124589 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:36.125001 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:36.125031 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:36.124948 1132067 retry.go:31] will retry after 932.342478ms: waiting for machine to come up
	I0328 01:02:37.058954 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:37.059390 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:37.059424 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:37.059332 1132067 retry.go:31] will retry after 1.163248691s: waiting for machine to come up
	I0328 01:02:38.224574 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:38.225019 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:38.225053 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:38.224959 1132067 retry.go:31] will retry after 1.13372539s: waiting for machine to come up
	I0328 01:02:39.360393 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:39.360953 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:39.360984 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:39.360906 1132067 retry.go:31] will retry after 1.793272671s: waiting for machine to come up
	I0328 01:02:38.420741 1130949 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.396415089s)
	I0328 01:02:38.420788 1130949 crio.go:469] duration metric: took 2.39655808s to extract the tarball
	I0328 01:02:38.420797 1130949 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0328 01:02:38.459869 1130949 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:02:38.505999 1130949 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 01:02:38.506030 1130949 cache_images.go:84] Images are preloaded, skipping loading
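After the preload tarball is extracted, the crictl listing above confirms the expected images are present, so separate image loading is skipped. A sketch of that check, under the assumption that testing for one pinned tag (kube-apiserver for the target version) is enough to decide:

```go
// Sketch of the preload check above: list images through crictl and look for
// the kube-apiserver tag of the target Kubernetes version.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.29.3")
	fmt.Println("preloaded:", ok, "err:", err)
	// When this is false, the preload tarball is copied over and extracted with:
	//   sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
}
```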
	I0328 01:02:38.506039 1130949 kubeadm.go:928] updating node { 192.168.72.210 8443 v1.29.3 crio true true} ...
	I0328 01:02:38.506185 1130949 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-808809 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-808809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
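The kubelet drop-in above is assembled from the node's Kubernetes version, hostname override, and IP. A sketch of rendering that unit text from those parameters; the helper name is made up, while the template text is copied from the logged unit:

```go
// Sketch: render the kubelet systemd drop-in shown above from node parameters.
// kubeletUnit is a hypothetical helper; the template matches the logged unit.
package main

import "fmt"

func kubeletUnit(version, nodeName, nodeIP string) string {
	return fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

[Install]
`, version, nodeName, nodeIP)
}

func main() {
	fmt.Print(kubeletUnit("v1.29.3", "embed-certs-808809", "192.168.72.210"))
}
```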
	I0328 01:02:38.506301 1130949 ssh_runner.go:195] Run: crio config
	I0328 01:02:38.551608 1130949 cni.go:84] Creating CNI manager for ""
	I0328 01:02:38.551633 1130949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:02:38.551646 1130949 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:02:38.551673 1130949 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.210 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-808809 NodeName:embed-certs-808809 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 01:02:38.551813 1130949 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-808809"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 01:02:38.551881 1130949 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 01:02:38.562640 1130949 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:02:38.562732 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:02:38.572870 1130949 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0328 01:02:38.590866 1130949 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:02:38.608302 1130949 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0328 01:02:38.626925 1130949 ssh_runner.go:195] Run: grep 192.168.72.210	control-plane.minikube.internal$ /etc/hosts
	I0328 01:02:38.631111 1130949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.210	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:02:38.644528 1130949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:02:38.785485 1130949 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:02:38.804087 1130949 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809 for IP: 192.168.72.210
	I0328 01:02:38.804113 1130949 certs.go:194] generating shared ca certs ...
	I0328 01:02:38.804132 1130949 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:02:38.804285 1130949 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:02:38.804326 1130949 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:02:38.804363 1130949 certs.go:256] generating profile certs ...
	I0328 01:02:38.804505 1130949 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/client.key
	I0328 01:02:38.804588 1130949 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/apiserver.key.bdc16448
	I0328 01:02:38.804638 1130949 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/proxy-client.key
	I0328 01:02:38.804798 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:02:38.804829 1130949 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:02:38.804836 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:02:38.804860 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:02:38.804882 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:02:38.804902 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:02:38.804943 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:02:38.805829 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:02:38.864847 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:02:38.899197 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:02:38.926734 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:02:38.958277 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0328 01:02:38.997201 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:02:39.023136 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:02:39.048459 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:02:39.074052 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:02:39.099326 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:02:39.124775 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:02:39.149638 1130949 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:02:39.169169 1130949 ssh_runner.go:195] Run: openssl version
	I0328 01:02:39.175948 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:02:39.188255 1130949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:02:39.194296 1130949 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:02:39.194374 1130949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:02:39.201138 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:02:39.213554 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:02:39.226474 1130949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:02:39.232074 1130949 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:02:39.232149 1130949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:02:39.238733 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:02:39.250983 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:02:39.263746 1130949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:02:39.268967 1130949 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:02:39.269038 1130949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:02:39.275589 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
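The openssl/ln pairs above install each PEM into the system trust directory under its OpenSSL subject hash. A sketch of one such step, assuming the standard `<hash>.0` naming under /etc/ssl/certs:

```go
// Sketch of the CA-trust step above: compute the OpenSSL subject hash of a
// PEM file and symlink it as /etc/ssl/certs/<hash>.0.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func trustCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	return exec.Command("sudo", "ln", "-fs", pem, link).Run()
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("trust failed:", err)
	}
}
```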
	I0328 01:02:39.287731 1130949 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:02:39.292985 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:02:39.300366 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:02:39.307241 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:02:39.314522 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:02:39.321070 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:02:39.327777 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
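Each `-checkend 86400` run above asks openssl whether the certificate expires within the next 24 hours. The same test expressed with Go's x509 parser, offered as a hedged equivalent rather than the code the test actually runs:

```go
// Sketch: report whether a certificate expires within the next 24 hours,
// mirroring what `openssl x509 -noout -checkend 86400` verifies above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, "err:", err)
}
```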
	I0328 01:02:39.334174 1130949 kubeadm.go:391] StartCluster: {Name:embed-certs-808809 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.3 ClusterName:embed-certs-808809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:02:39.334310 1130949 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:02:39.334367 1130949 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:02:39.376035 1130949 cri.go:89] found id: ""
	I0328 01:02:39.376145 1130949 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:02:39.387349 1130949 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:02:39.387377 1130949 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:02:39.387385 1130949 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:02:39.387469 1130949 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:02:39.397918 1130949 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:02:39.399122 1130949 kubeconfig.go:125] found "embed-certs-808809" server: "https://192.168.72.210:8443"
	I0328 01:02:39.401219 1130949 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:02:39.411475 1130949 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.210
	I0328 01:02:39.411562 1130949 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:02:39.411583 1130949 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:02:39.411650 1130949 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:02:39.449529 1130949 cri.go:89] found id: ""
	I0328 01:02:39.449638 1130949 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:02:39.468553 1130949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:02:39.479489 1130949 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:02:39.479522 1130949 kubeadm.go:156] found existing configuration files:
	
	I0328 01:02:39.479589 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:02:39.489619 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:02:39.489689 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:02:39.499726 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:02:39.509362 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:02:39.509447 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:02:39.519262 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:02:39.528858 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:02:39.528920 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:02:39.538784 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:02:39.548517 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:02:39.548593 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:02:39.559931 1130949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:02:39.574178 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:39.706243 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.342144 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.559108 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.636713 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
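Because existing configuration was found, the restart path above reruns individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config instead of doing a full init. A sketch of that sequence, assuming kubeadm is already on PATH (the log wraps it with `sudo env PATH=/var/lib/minikube/binaries/...`):

```go
// Sketch of the restart path above: rerun the individual kubeadm init phases
// against the generated /var/tmp/minikube/kubeadm.yaml instead of a full init.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("kubeadm", args...).CombinedOutput()
		fmt.Printf("$ kubeadm %v\n%s", args, out)
		if err != nil {
			fmt.Println("phase failed:", err)
			return
		}
	}
}
```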
	I0328 01:02:40.743171 1130949 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:02:40.743269 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:02:41.243401 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:02:41.743363 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:02:41.776504 1130949 api_server.go:72] duration metric: took 1.033329844s to wait for apiserver process to appear ...
	I0328 01:02:41.776547 1130949 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:02:41.776574 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:41.777140 1130949 api_server.go:269] stopped: https://192.168.72.210:8443/healthz: Get "https://192.168.72.210:8443/healthz": dial tcp 192.168.72.210:8443: connect: connection refused
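From here api_server.go polls https://192.168.72.210:8443/healthz, treating connection refused, 403, and 500 responses as "not ready yet" and retrying. A minimal sketch of that polling loop, with an assumed overall timeout, skipping TLS verification the way an anonymous probe would:

```go
// Minimal sketch of the apiserver healthz polling above: GET /healthz until
// it returns 200, retrying on transport errors and non-200 statuses.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// The apiserver only presents the cluster's own cert here, so the
		// probe skips verification, like a plain `curl -k` against healthz.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	url := "https://192.168.72.210:8443/healthz"
	deadline := time.Now().Add(4 * time.Minute) // assumed overall timeout
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthz ok")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode, "- retrying")
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}
```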
	I0328 01:02:42.276690 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:41.156898 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:41.157309 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:41.157336 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:41.157263 1132067 retry.go:31] will retry after 1.863775673s: waiting for machine to come up
	I0328 01:02:43.023074 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:43.023470 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:43.023507 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:43.023419 1132067 retry.go:31] will retry after 2.73600503s: waiting for machine to come up
	I0328 01:02:44.743286 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:02:44.743383 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:02:44.743412 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:44.822370 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:02:44.822416 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:02:44.822436 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:44.847406 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:02:44.847462 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:02:45.276899 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:45.281884 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:02:45.281919 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:02:45.777495 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:45.783673 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:02:45.783704 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:02:46.277372 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:46.282281 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 200:
	ok
	I0328 01:02:46.291242 1130949 api_server.go:141] control plane version: v1.29.3
	I0328 01:02:46.291287 1130949 api_server.go:131] duration metric: took 4.514730698s to wait for apiserver health ...
	I0328 01:02:46.291301 1130949 cni.go:84] Creating CNI manager for ""
	I0328 01:02:46.291310 1130949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:02:46.293461 1130949 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:02:46.294971 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:02:46.312955 1130949 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:02:46.345653 1130949 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:02:46.355470 1130949 system_pods.go:59] 8 kube-system pods found
	I0328 01:02:46.355506 1130949 system_pods.go:61] "coredns-76f75df574-pr5d8" [90a6f3d5-6f33-4c41-804b-4b20c518aa23] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:02:46.355512 1130949 system_pods.go:61] "etcd-embed-certs-808809" [93b6b8ee-f83f-4848-b2c5-912ec07acd52] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 01:02:46.355519 1130949 system_pods.go:61] "kube-apiserver-embed-certs-808809" [22eb788f-4647-4a07-b5bf-ecdd54c28fcd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 01:02:46.355530 1130949 system_pods.go:61] "kube-controller-manager-embed-certs-808809" [83fecd9f-c0de-4afe-b5b5-7c04bd3adc20] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 01:02:46.355545 1130949 system_pods.go:61] "kube-proxy-qwzpg" [57a814c6-54c8-4fa7-b7d7-bcdd4bbc91d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:02:46.355553 1130949 system_pods.go:61] "kube-scheduler-embed-certs-808809" [0b229d84-43fb-45ee-8d49-39204812d490] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 01:02:46.355568 1130949 system_pods.go:61] "metrics-server-57f55c9bc5-swsxp" [4b20e133-3054-4806-9b7f-44d8c8c35a4d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:02:46.355580 1130949 system_pods.go:61] "storage-provisioner" [59303061-19e3-4aed-8753-804988a2a44e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:02:46.355590 1130949 system_pods.go:74] duration metric: took 9.908316ms to wait for pod list to return data ...
	I0328 01:02:46.355603 1130949 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:02:46.358936 1130949 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:02:46.358987 1130949 node_conditions.go:123] node cpu capacity is 2
	I0328 01:02:46.359006 1130949 node_conditions.go:105] duration metric: took 3.394695ms to run NodePressure ...
	I0328 01:02:46.359054 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:46.686479 1130949 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 01:02:46.692502 1130949 kubeadm.go:733] kubelet initialised
	I0328 01:02:46.692526 1130949 kubeadm.go:734] duration metric: took 6.022393ms waiting for restarted kubelet to initialise ...
	I0328 01:02:46.692534 1130949 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:02:46.699146 1130949 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-pr5d8" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:45.762440 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:45.762891 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:45.762915 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:45.762845 1132067 retry.go:31] will retry after 2.201941476s: waiting for machine to come up
	I0328 01:02:47.966601 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:47.967196 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:47.967237 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:47.967144 1132067 retry.go:31] will retry after 4.122216816s: waiting for machine to come up
	I0328 01:02:48.709890 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:51.207697 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:53.391471 1131600 start.go:364] duration metric: took 2m47.603687739s to acquireMachinesLock for "default-k8s-diff-port-283961"
	I0328 01:02:53.391553 1131600 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:02:53.391565 1131600 fix.go:54] fixHost starting: 
	I0328 01:02:53.391980 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:02:53.392031 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:02:53.409035 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34073
	I0328 01:02:53.409556 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:02:53.410105 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:02:53.410136 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:02:53.410492 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:02:53.410734 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:02:53.410903 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:02:53.412710 1131600 fix.go:112] recreateIfNeeded on default-k8s-diff-port-283961: state=Stopped err=<nil>
	I0328 01:02:53.412739 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	W0328 01:02:53.412927 1131600 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:02:53.414773 1131600 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-283961" ...
	I0328 01:02:52.091210 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.091759 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has current primary IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.091794 1131323 main.go:141] libmachine: (old-k8s-version-986088) Found IP for machine: 192.168.50.174
	I0328 01:02:52.091841 1131323 main.go:141] libmachine: (old-k8s-version-986088) Reserving static IP address...
	I0328 01:02:52.092295 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "old-k8s-version-986088", mac: "52:54:00:f6:94:40", ip: "192.168.50.174"} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.092321 1131323 main.go:141] libmachine: (old-k8s-version-986088) Reserved static IP address: 192.168.50.174
	I0328 01:02:52.092343 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | skip adding static IP to network mk-old-k8s-version-986088 - found existing host DHCP lease matching {name: "old-k8s-version-986088", mac: "52:54:00:f6:94:40", ip: "192.168.50.174"}
	I0328 01:02:52.092356 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | Getting to WaitForSSH function...
	I0328 01:02:52.092373 1131323 main.go:141] libmachine: (old-k8s-version-986088) Waiting for SSH to be available...
	I0328 01:02:52.094682 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.095012 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.095033 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.095158 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | Using SSH client type: external
	I0328 01:02:52.095180 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa (-rw-------)
	I0328 01:02:52.095208 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:02:52.095218 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | About to run SSH command:
	I0328 01:02:52.095232 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | exit 0
	I0328 01:02:52.218494 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | SSH cmd err, output: <nil>: 
	I0328 01:02:52.218983 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetConfigRaw
	I0328 01:02:52.219663 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:52.222349 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.222791 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.222823 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.223191 1131323 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/config.json ...
	I0328 01:02:52.223388 1131323 machine.go:94] provisionDockerMachine start ...
	I0328 01:02:52.223409 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:52.223605 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.225686 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.225999 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.226038 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.226131 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.226341 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.226507 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.226633 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.226802 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.227078 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.227095 1131323 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:02:52.327218 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:02:52.327249 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 01:02:52.327515 1131323 buildroot.go:166] provisioning hostname "old-k8s-version-986088"
	I0328 01:02:52.327542 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 01:02:52.327754 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.330253 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.330661 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.330691 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.330827 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.331048 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.331258 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.331406 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.331593 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.331772 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.331783 1131323 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-986088 && echo "old-k8s-version-986088" | sudo tee /etc/hostname
	I0328 01:02:52.445910 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-986088
	
	I0328 01:02:52.445943 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.449023 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.449320 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.449358 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.449595 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.449810 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.449970 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.450116 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.450310 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.450572 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.450640 1131323 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-986088' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-986088/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-986088' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:02:52.567493 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:02:52.567529 1131323 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:02:52.567559 1131323 buildroot.go:174] setting up certificates
	I0328 01:02:52.567573 1131323 provision.go:84] configureAuth start
	I0328 01:02:52.567587 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 01:02:52.567944 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:52.570860 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.571363 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.571400 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.571547 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.574052 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.574483 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.574517 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.574619 1131323 provision.go:143] copyHostCerts
	I0328 01:02:52.574698 1131323 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:02:52.574710 1131323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:02:52.574778 1131323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:02:52.574894 1131323 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:02:52.574908 1131323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:02:52.574985 1131323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:02:52.575086 1131323 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:02:52.575095 1131323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:02:52.575117 1131323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:02:52.575194 1131323 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-986088 san=[127.0.0.1 192.168.50.174 localhost minikube old-k8s-version-986088]
	I0328 01:02:52.688709 1131323 provision.go:177] copyRemoteCerts
	I0328 01:02:52.688776 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:02:52.688809 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.691529 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.691977 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.692024 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.692188 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.692425 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.692620 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.692774 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:52.777200 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0328 01:02:52.808740 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:02:52.836646 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:02:52.862627 1131323 provision.go:87] duration metric: took 295.032419ms to configureAuth
	I0328 01:02:52.862668 1131323 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:02:52.862908 1131323 config.go:182] Loaded profile config "old-k8s-version-986088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0328 01:02:52.863019 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.865838 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.866585 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.866630 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.867271 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.867521 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.867687 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.867826 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.867961 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.868176 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.868194 1131323 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:02:53.154903 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:02:53.154936 1131323 machine.go:97] duration metric: took 931.534047ms to provisionDockerMachine
	I0328 01:02:53.154949 1131323 start.go:293] postStartSetup for "old-k8s-version-986088" (driver="kvm2")
	I0328 01:02:53.154961 1131323 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:02:53.154997 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.155353 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:02:53.155386 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.158072 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.158448 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.158482 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.158612 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.158825 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.158974 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.159102 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:53.243411 1131323 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:02:53.247745 1131323 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:02:53.247769 1131323 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:02:53.247830 1131323 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:02:53.247903 1131323 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:02:53.247990 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:02:53.258574 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:02:53.284249 1131323 start.go:296] duration metric: took 129.2844ms for postStartSetup
	I0328 01:02:53.284300 1131323 fix.go:56] duration metric: took 20.532468979s for fixHost
	I0328 01:02:53.284324 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.287097 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.287505 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.287534 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.287642 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.287874 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.288039 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.288225 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.288439 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:53.288601 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:53.288612 1131323 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 01:02:53.391262 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587773.373998758
	
	I0328 01:02:53.391292 1131323 fix.go:216] guest clock: 1711587773.373998758
	I0328 01:02:53.391299 1131323 fix.go:229] Guest: 2024-03-28 01:02:53.373998758 +0000 UTC Remote: 2024-03-28 01:02:53.284304642 +0000 UTC m=+227.998260980 (delta=89.694116ms)
	I0328 01:02:53.391341 1131323 fix.go:200] guest clock delta is within tolerance: 89.694116ms
	I0328 01:02:53.391346 1131323 start.go:83] releasing machines lock for "old-k8s-version-986088", held for 20.639550927s
	I0328 01:02:53.391377 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.391728 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:53.394421 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.394780 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.394811 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.394932 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.395449 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.395729 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.395828 1131323 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:02:53.395883 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.395985 1131323 ssh_runner.go:195] Run: cat /version.json
	I0328 01:02:53.396014 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.398819 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399010 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399281 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.399320 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399451 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.399550 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.399620 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399640 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.399880 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.399902 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.400065 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.400081 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:53.400245 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.400445 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:53.514453 1131323 ssh_runner.go:195] Run: systemctl --version
	I0328 01:02:53.521123 1131323 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:02:53.678366 1131323 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:02:53.685402 1131323 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:02:53.685473 1131323 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:02:53.702781 1131323 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:02:53.702816 1131323 start.go:494] detecting cgroup driver to use...
	I0328 01:02:53.702900 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:02:53.720343 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:02:53.736749 1131323 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:02:53.736824 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:02:53.761087 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:02:53.779008 1131323 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:02:53.895064 1131323 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:02:54.060741 1131323 docker.go:233] disabling docker service ...
	I0328 01:02:54.060825 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:02:54.079139 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:02:54.093523 1131323 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:02:54.247544 1131323 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:02:54.396392 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:02:54.422612 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:02:54.443759 1131323 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0328 01:02:54.443817 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.459794 1131323 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:02:54.459875 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.472784 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.484963 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.496654 1131323 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:02:54.508382 1131323 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:02:54.518607 1131323 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:02:54.518687 1131323 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:02:54.532356 1131323 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:02:54.544424 1131323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:02:54.685782 1131323 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 01:02:54.847233 1131323 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:02:54.847314 1131323 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:02:54.853148 1131323 start.go:562] Will wait 60s for crictl version
	I0328 01:02:54.853248 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:02:54.857536 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:02:54.901937 1131323 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:02:54.902082 1131323 ssh_runner.go:195] Run: crio --version
	I0328 01:02:54.935571 1131323 ssh_runner.go:195] Run: crio --version
	I0328 01:02:54.971452 1131323 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0328 01:02:54.972964 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:54.976523 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:54.976985 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:54.977017 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:54.977369 1131323 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0328 01:02:54.982326 1131323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:02:54.996239 1131323 kubeadm.go:877] updating cluster {Name:old-k8s-version-986088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:02:54.996371 1131323 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0328 01:02:54.996433 1131323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:02:55.045404 1131323 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0328 01:02:55.045483 1131323 ssh_runner.go:195] Run: which lz4
	I0328 01:02:55.050226 1131323 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0328 01:02:55.055182 1131323 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 01:02:55.055221 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0328 01:02:53.416101 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Start
	I0328 01:02:53.416332 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Ensuring networks are active...
	I0328 01:02:53.417021 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Ensuring network default is active
	I0328 01:02:53.417446 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Ensuring network mk-default-k8s-diff-port-283961 is active
	I0328 01:02:53.417857 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Getting domain xml...
	I0328 01:02:53.418555 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Creating domain...
	I0328 01:02:54.777201 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting to get IP...
	I0328 01:02:54.778055 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:54.778563 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:54.778705 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:54.778537 1132240 retry.go:31] will retry after 259.031702ms: waiting for machine to come up
	I0328 01:02:55.039365 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.039926 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.039963 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:55.039860 1132240 retry.go:31] will retry after 254.124553ms: waiting for machine to come up
	I0328 01:02:55.295658 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.296218 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.296265 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:55.296174 1132240 retry.go:31] will retry after 349.637234ms: waiting for machine to come up
	I0328 01:02:55.647590 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.648356 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.648392 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:55.648298 1132240 retry.go:31] will retry after 446.471208ms: waiting for machine to come up
	I0328 01:02:53.707811 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:55.708380 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:57.213059 1130949 pod_ready.go:92] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:57.213097 1130949 pod_ready.go:81] duration metric: took 10.513921238s for pod "coredns-76f75df574-pr5d8" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.213113 1130949 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.222308 1130949 pod_ready.go:92] pod "etcd-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:57.222344 1130949 pod_ready.go:81] duration metric: took 9.214056ms for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.222357 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.231530 1130949 pod_ready.go:92] pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:57.231558 1130949 pod_ready.go:81] duration metric: took 9.192864ms for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.231568 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:56.994163 1131323 crio.go:462] duration metric: took 1.943992561s to copy over tarball
	I0328 01:02:56.994252 1131323 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 01:03:00.215115 1131323 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.220825311s)
	I0328 01:03:00.215159 1131323 crio.go:469] duration metric: took 3.22095583s to extract the tarball
	I0328 01:03:00.215171 1131323 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0328 01:03:00.259151 1131323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:00.298446 1131323 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0328 01:03:00.298492 1131323 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0328 01:03:00.298585 1131323 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:00.298601 1131323 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.298613 1131323 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.298644 1131323 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.298662 1131323 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.298698 1131323 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0328 01:03:00.298593 1131323 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.298585 1131323 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.300347 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.300424 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.300470 1131323 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.300474 1131323 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.300637 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.300652 1131323 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0328 01:03:00.300723 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.300793 1131323 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:02:56.095939 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.096463 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.096501 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:56.096412 1132240 retry.go:31] will retry after 490.029649ms: waiting for machine to come up
	I0328 01:02:56.588298 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.588835 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.588868 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:56.588796 1132240 retry.go:31] will retry after 831.356628ms: waiting for machine to come up
	I0328 01:02:57.421917 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:57.422410 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:57.422443 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:57.422353 1132240 retry.go:31] will retry after 1.164764985s: waiting for machine to come up
	I0328 01:02:58.588827 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:58.589183 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:58.589225 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:58.589119 1132240 retry.go:31] will retry after 1.307248783s: waiting for machine to come up
	I0328 01:02:59.897607 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:59.897976 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:59.898008 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:59.897926 1132240 retry.go:31] will retry after 1.560958271s: waiting for machine to come up
	I0328 01:02:58.241179 1130949 pod_ready.go:92] pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:58.241216 1130949 pod_ready.go:81] duration metric: took 1.00963904s for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.241245 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qwzpg" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.249787 1130949 pod_ready.go:92] pod "kube-proxy-qwzpg" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:58.249826 1130949 pod_ready.go:81] duration metric: took 8.571225ms for pod "kube-proxy-qwzpg" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.249840 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.405101 1130949 pod_ready.go:92] pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:58.405130 1130949 pod_ready.go:81] duration metric: took 155.281142ms for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.405141 1130949 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:00.412202 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:02.412688 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:00.499788 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0328 01:03:00.539135 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.541462 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.544184 1131323 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0328 01:03:00.544227 1131323 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0328 01:03:00.544261 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.555720 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.560189 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.562639 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.574105 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.681717 1131323 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0328 01:03:00.681742 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0328 01:03:00.681765 1131323 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.681803 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.682033 1131323 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0328 01:03:00.682076 1131323 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.682115 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.732868 1131323 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0328 01:03:00.732922 1131323 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.732988 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.742680 1131323 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0328 01:03:00.742730 1131323 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0328 01:03:00.742762 1131323 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.742777 1131323 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0328 01:03:00.742805 1131323 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.742770 1131323 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.742817 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.742851 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.742865 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.770435 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.770472 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0328 01:03:00.770567 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.770588 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.770727 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.770760 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.770728 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.882338 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0328 01:03:00.896602 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0328 01:03:00.918814 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0328 01:03:00.918869 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0328 01:03:00.918919 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0328 01:03:00.918968 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0328 01:03:01.186124 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:01.334547 1131323 cache_images.go:92] duration metric: took 1.036031169s to LoadCachedImages
	W0328 01:03:01.334676 1131323 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0328 01:03:01.334694 1131323 kubeadm.go:928] updating node { 192.168.50.174 8443 v1.20.0 crio true true} ...
	I0328 01:03:01.334827 1131323 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-986088 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:03:01.334926 1131323 ssh_runner.go:195] Run: crio config
	I0328 01:03:01.391004 1131323 cni.go:84] Creating CNI manager for ""
	I0328 01:03:01.391034 1131323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:01.391054 1131323 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:03:01.391081 1131323 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.174 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-986088 NodeName:old-k8s-version-986088 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0328 01:03:01.391265 1131323 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-986088"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 01:03:01.391347 1131323 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0328 01:03:01.403684 1131323 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:03:01.403779 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:03:01.415168 1131323 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0328 01:03:01.434329 1131323 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:03:01.456280 1131323 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0328 01:03:01.476625 1131323 ssh_runner.go:195] Run: grep 192.168.50.174	control-plane.minikube.internal$ /etc/hosts
	I0328 01:03:01.480867 1131323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:01.493833 1131323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:01.642273 1131323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:03:01.661857 1131323 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088 for IP: 192.168.50.174
	I0328 01:03:01.661887 1131323 certs.go:194] generating shared ca certs ...
	I0328 01:03:01.661909 1131323 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:01.662115 1131323 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:03:01.662174 1131323 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:03:01.662188 1131323 certs.go:256] generating profile certs ...
	I0328 01:03:01.662324 1131323 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/client.key
	I0328 01:03:01.662399 1131323 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.key.b88fbc7e
	I0328 01:03:01.662447 1131323 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.key
	I0328 01:03:01.662600 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:03:01.662656 1131323 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:03:01.662672 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:03:01.662703 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:03:01.662738 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:03:01.662774 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:03:01.662826 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:01.663831 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:03:01.697171 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:03:01.742118 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:03:01.783263 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:03:01.831682 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0328 01:03:01.878051 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:03:01.915626 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:03:01.942247 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:03:01.969054 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:03:01.998651 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:03:02.024881 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:03:02.051284 1131323 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:03:02.070414 1131323 ssh_runner.go:195] Run: openssl version
	I0328 01:03:02.076635 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:03:02.089288 1131323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:03:02.094260 1131323 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:03:02.094322 1131323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:03:02.100846 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:03:02.114474 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:03:02.126467 1131323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:02.131240 1131323 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:02.131293 1131323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:02.137496 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 01:03:02.150863 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:03:02.163536 1131323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:03:02.168767 1131323 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:03:02.168850 1131323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:03:02.175218 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:03:02.188272 1131323 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:03:02.193348 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:03:02.199969 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:03:02.206424 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:03:02.213530 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:03:02.220136 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:03:02.226502 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0328 01:03:02.232708 1131323 kubeadm.go:391] StartCluster: {Name:old-k8s-version-986088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:03:02.232831 1131323 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:03:02.232890 1131323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:02.280062 1131323 cri.go:89] found id: ""
	I0328 01:03:02.280160 1131323 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:03:02.291968 1131323 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:03:02.292003 1131323 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:03:02.292011 1131323 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:03:02.292072 1131323 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:03:02.304006 1131323 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:03:02.305105 1131323 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-986088" does not appear in /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:03:02.305785 1131323 kubeconfig.go:62] /home/jenkins/minikube-integration/18485-1069254/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-986088" cluster setting kubeconfig missing "old-k8s-version-986088" context setting]
	I0328 01:03:02.306728 1131323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:02.308610 1131323 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:03:02.320212 1131323 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.174
	I0328 01:03:02.320265 1131323 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:03:02.320283 1131323 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:03:02.320356 1131323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:02.366411 1131323 cri.go:89] found id: ""
	I0328 01:03:02.366500 1131323 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:03:02.388351 1131323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:03:02.402621 1131323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:03:02.402652 1131323 kubeadm.go:156] found existing configuration files:
	
	I0328 01:03:02.402718 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:03:02.415559 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:03:02.415633 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:03:02.426666 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:03:02.439497 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:03:02.439558 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:03:02.451040 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:03:02.461780 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:03:02.461876 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:03:02.473295 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:03:02.484762 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:03:02.484841 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:03:02.496304 1131323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:03:02.507634 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:02.641980 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:03.598106 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:03.840026 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:03.970336 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:04.067774 1131323 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:03:04.067911 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:04.568260 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:05.068794 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:01.460535 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:01.461008 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:01.461039 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:01.460962 1132240 retry.go:31] will retry after 1.839531745s: waiting for machine to come up
	I0328 01:03:03.302965 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:03.303445 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:03.303479 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:03.303387 1132240 retry.go:31] will retry after 2.461748315s: waiting for machine to come up
	I0328 01:03:04.413898 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:06.913608 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:05.568716 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:06.068362 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:06.568235 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:07.068696 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:07.567976 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:08.068032 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:08.568586 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:09.068046 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:09.568699 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:10.067967 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:05.767795 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:05.768329 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:05.768360 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:05.768279 1132240 retry.go:31] will retry after 2.321291255s: waiting for machine to come up
	I0328 01:03:08.092644 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:08.093094 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:08.093131 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:08.093046 1132240 retry.go:31] will retry after 4.151205276s: waiting for machine to come up
	I0328 01:03:09.413199 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:11.912234 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:13.671756 1130827 start.go:364] duration metric: took 54.966750689s to acquireMachinesLock for "no-preload-248059"
	I0328 01:03:13.671815 1130827 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:03:13.671823 1130827 fix.go:54] fixHost starting: 
	I0328 01:03:13.672255 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:03:13.672292 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:03:13.689811 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42017
	I0328 01:03:13.690364 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:03:13.690817 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:03:13.690843 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:03:13.691213 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:03:13.691395 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:13.691523 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:03:13.693093 1130827 fix.go:112] recreateIfNeeded on no-preload-248059: state=Stopped err=<nil>
	I0328 01:03:13.693123 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	W0328 01:03:13.693280 1130827 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:03:13.695158 1130827 out.go:177] * Restarting existing kvm2 VM for "no-preload-248059" ...
	I0328 01:03:10.568240 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:11.068028 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:11.568146 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:12.068467 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:12.568820 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:13.068031 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:13.568977 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:14.068050 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:14.567938 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:15.068711 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:12.248769 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.249440 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Found IP for machine: 192.168.39.224
	I0328 01:03:12.249467 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Reserving static IP address...
	I0328 01:03:12.249498 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has current primary IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.249832 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-283961", mac: "52:54:00:c4:df:6f", ip: "192.168.39.224"} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.249872 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | skip adding static IP to network mk-default-k8s-diff-port-283961 - found existing host DHCP lease matching {name: "default-k8s-diff-port-283961", mac: "52:54:00:c4:df:6f", ip: "192.168.39.224"}
	I0328 01:03:12.249888 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Reserved static IP address: 192.168.39.224
	I0328 01:03:12.249908 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for SSH to be available...
	I0328 01:03:12.249921 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Getting to WaitForSSH function...
	I0328 01:03:12.252053 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.252487 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.252521 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.252646 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Using SSH client type: external
	I0328 01:03:12.252677 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa (-rw-------)
	I0328 01:03:12.252709 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.224 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:03:12.252731 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | About to run SSH command:
	I0328 01:03:12.252750 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | exit 0
	I0328 01:03:12.378419 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | SSH cmd err, output: <nil>: 
	I0328 01:03:12.378866 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetConfigRaw
	I0328 01:03:12.379659 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:12.382631 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.382997 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.383023 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.383276 1131600 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/config.json ...
	I0328 01:03:12.383534 1131600 machine.go:94] provisionDockerMachine start ...
	I0328 01:03:12.383567 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:12.383805 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.386472 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.386839 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.386870 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.387035 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.387240 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.387399 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.387577 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.387729 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:12.387931 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:12.387943 1131600 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:03:12.499608 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:03:12.499644 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetMachineName
	I0328 01:03:12.499930 1131600 buildroot.go:166] provisioning hostname "default-k8s-diff-port-283961"
	I0328 01:03:12.499962 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetMachineName
	I0328 01:03:12.500154 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.502737 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.503089 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.503120 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.503295 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.503516 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.503725 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.503892 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.504093 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:12.504271 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:12.504285 1131600 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-283961 && echo "default-k8s-diff-port-283961" | sudo tee /etc/hostname
	I0328 01:03:12.625590 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-283961
	
	I0328 01:03:12.625624 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.628570 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.628883 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.628968 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.629143 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.629397 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.629627 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.629825 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.630008 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:12.630191 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:12.630210 1131600 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-283961' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-283961/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-283961' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:03:12.744240 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:03:12.744280 1131600 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:03:12.744327 1131600 buildroot.go:174] setting up certificates
	I0328 01:03:12.744342 1131600 provision.go:84] configureAuth start
	I0328 01:03:12.744361 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetMachineName
	I0328 01:03:12.744722 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:12.747139 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.747448 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.747478 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.747582 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.749705 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.749964 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.749995 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.750125 1131600 provision.go:143] copyHostCerts
	I0328 01:03:12.750203 1131600 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:03:12.750217 1131600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:03:12.750323 1131600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:03:12.750435 1131600 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:03:12.750446 1131600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:03:12.750479 1131600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:03:12.750557 1131600 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:03:12.750567 1131600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:03:12.750599 1131600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:03:12.750670 1131600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-283961 san=[127.0.0.1 192.168.39.224 default-k8s-diff-port-283961 localhost minikube]
	I0328 01:03:12.963182 1131600 provision.go:177] copyRemoteCerts
	I0328 01:03:12.963265 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:03:12.963313 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.965946 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.966177 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.966207 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.966347 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.966573 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.966773 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.966934 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.057477 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:03:13.083706 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0328 01:03:13.109167 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:03:13.136835 1131600 provision.go:87] duration metric: took 392.475069ms to configureAuth
	I0328 01:03:13.136867 1131600 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:03:13.137048 1131600 config.go:182] Loaded profile config "default-k8s-diff-port-283961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:03:13.137131 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.139508 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.139761 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.139792 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.139959 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.140170 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.140343 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.140502 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.140685 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:13.140873 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:13.140897 1131600 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:03:13.422372 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:03:13.422405 1131600 machine.go:97] duration metric: took 1.038857021s to provisionDockerMachine
	I0328 01:03:13.422418 1131600 start.go:293] postStartSetup for "default-k8s-diff-port-283961" (driver="kvm2")
	I0328 01:03:13.422428 1131600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:03:13.422456 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.422788 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:03:13.422819 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.425539 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.425865 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.425894 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.426023 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.426225 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.426407 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.426577 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.511874 1131600 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:03:13.516643 1131600 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:03:13.516673 1131600 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:03:13.516749 1131600 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:03:13.516846 1131600 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:03:13.516969 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:03:13.529004 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:13.557244 1131600 start.go:296] duration metric: took 134.810243ms for postStartSetup
	I0328 01:03:13.557289 1131600 fix.go:56] duration metric: took 20.165726422s for fixHost
	I0328 01:03:13.557313 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.560216 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.560585 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.560623 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.560803 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.561050 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.561188 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.561303 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.561552 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:13.561742 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:13.561757 1131600 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 01:03:13.671545 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587793.617322674
	
	I0328 01:03:13.671570 1131600 fix.go:216] guest clock: 1711587793.617322674
	I0328 01:03:13.671578 1131600 fix.go:229] Guest: 2024-03-28 01:03:13.617322674 +0000 UTC Remote: 2024-03-28 01:03:13.55729386 +0000 UTC m=+187.934897846 (delta=60.028814ms)
	I0328 01:03:13.671632 1131600 fix.go:200] guest clock delta is within tolerance: 60.028814ms
	I0328 01:03:13.671642 1131600 start.go:83] releasing machines lock for "default-k8s-diff-port-283961", held for 20.280118311s
	I0328 01:03:13.671673 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.671976 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:13.674978 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.675384 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.675436 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.675562 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.676167 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.676337 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.676436 1131600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:03:13.676501 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.676557 1131600 ssh_runner.go:195] Run: cat /version.json
	I0328 01:03:13.676578 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.679418 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679452 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679758 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.679785 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679813 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.679832 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679986 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.680089 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.680190 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.680255 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.680345 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.680410 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.680517 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.680608 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.759826 1131600 ssh_runner.go:195] Run: systemctl --version
	I0328 01:03:13.796647 1131600 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:03:13.947036 1131600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:03:13.954165 1131600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:03:13.954265 1131600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:03:13.973503 1131600 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:03:13.973538 1131600 start.go:494] detecting cgroup driver to use...
	I0328 01:03:13.973629 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:03:13.997675 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:03:14.015349 1131600 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:03:14.015421 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:03:14.031099 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:03:14.046446 1131600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:03:14.186993 1131600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:03:14.351164 1131600 docker.go:233] disabling docker service ...
	I0328 01:03:14.351232 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:03:14.370629 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:03:14.387837 1131600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:03:14.544060 1131600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:03:14.707699 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:03:14.725658 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:03:14.746063 1131600 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 01:03:14.746141 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.759244 1131600 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:03:14.759317 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.773015 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.786810 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.807101 1131600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:03:14.821013 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.834181 1131600 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.861163 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.874274 1131600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:03:14.885890 1131600 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:03:14.885968 1131600 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:03:14.903142 1131600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:03:14.916364 1131600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:15.073343 1131600 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 01:03:15.218406 1131600 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:03:15.218500 1131600 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:03:15.226299 1131600 start.go:562] Will wait 60s for crictl version
	I0328 01:03:15.226373 1131600 ssh_runner.go:195] Run: which crictl
	I0328 01:03:15.232051 1131600 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:03:15.278793 1131600 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:03:15.278903 1131600 ssh_runner.go:195] Run: crio --version
	I0328 01:03:15.313408 1131600 ssh_runner.go:195] Run: crio --version
	I0328 01:03:15.351613 1131600 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0328 01:03:15.353013 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:15.355924 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:15.356306 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:15.356341 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:15.356555 1131600 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0328 01:03:15.361194 1131600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:15.380926 1131600 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:03:15.381043 1131600 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 01:03:15.381099 1131600 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:15.423322 1131600 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0328 01:03:15.423409 1131600 ssh_runner.go:195] Run: which lz4
	I0328 01:03:15.428123 1131600 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0328 01:03:15.433023 1131600 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 01:03:15.433065 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0328 01:03:13.696314 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Start
	I0328 01:03:13.696506 1130827 main.go:141] libmachine: (no-preload-248059) Ensuring networks are active...
	I0328 01:03:13.697344 1130827 main.go:141] libmachine: (no-preload-248059) Ensuring network default is active
	I0328 01:03:13.697668 1130827 main.go:141] libmachine: (no-preload-248059) Ensuring network mk-no-preload-248059 is active
	I0328 01:03:13.698009 1130827 main.go:141] libmachine: (no-preload-248059) Getting domain xml...
	I0328 01:03:13.698805 1130827 main.go:141] libmachine: (no-preload-248059) Creating domain...
	I0328 01:03:14.955922 1130827 main.go:141] libmachine: (no-preload-248059) Waiting to get IP...
	I0328 01:03:14.957088 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:14.957534 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:14.957660 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:14.957533 1132389 retry.go:31] will retry after 222.894093ms: waiting for machine to come up
	I0328 01:03:15.182078 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:15.182541 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:15.182580 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:15.182528 1132389 retry.go:31] will retry after 263.74163ms: waiting for machine to come up
	I0328 01:03:15.448081 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:15.448653 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:15.448684 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:15.448586 1132389 retry.go:31] will retry after 444.066222ms: waiting for machine to come up
	I0328 01:03:15.894141 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:15.894695 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:15.894732 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:15.894650 1132389 retry.go:31] will retry after 469.421771ms: waiting for machine to come up
	I0328 01:03:14.413443 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:16.418789 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:15.568507 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:16.068210 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:16.568761 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:17.067929 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:17.568403 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:18.068454 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:18.568086 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:19.068049 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:19.569020 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:20.068068 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:17.139682 1131600 crio.go:462] duration metric: took 1.71160157s to copy over tarball
	I0328 01:03:17.139764 1131600 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 01:03:19.581198 1131600 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.441406061s)
	I0328 01:03:19.581229 1131600 crio.go:469] duration metric: took 2.441510253s to extract the tarball
	I0328 01:03:19.581241 1131600 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0328 01:03:19.620964 1131600 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:19.666765 1131600 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 01:03:19.666791 1131600 cache_images.go:84] Images are preloaded, skipping loading
	I0328 01:03:19.666802 1131600 kubeadm.go:928] updating node { 192.168.39.224 8444 v1.29.3 crio true true} ...
	I0328 01:03:19.666921 1131600 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-283961 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.224
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:03:19.666987 1131600 ssh_runner.go:195] Run: crio config
	I0328 01:03:19.716082 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:03:19.716106 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:19.716115 1131600 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:03:19.716139 1131600 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.224 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-283961 NodeName:default-k8s-diff-port-283961 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.224"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.224 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 01:03:19.716323 1131600 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.224
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-283961"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.224
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.224"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 01:03:19.716399 1131600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 01:03:19.727826 1131600 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:03:19.727913 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:03:19.738525 1131600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0328 01:03:19.756732 1131600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:03:19.776665 1131600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0328 01:03:19.795756 1131600 ssh_runner.go:195] Run: grep 192.168.39.224	control-plane.minikube.internal$ /etc/hosts
	I0328 01:03:19.800097 1131600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.224	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:19.813019 1131600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:19.946740 1131600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:03:19.964216 1131600 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961 for IP: 192.168.39.224
	I0328 01:03:19.964244 1131600 certs.go:194] generating shared ca certs ...
	I0328 01:03:19.964262 1131600 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:19.964448 1131600 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:03:19.964524 1131600 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:03:19.964538 1131600 certs.go:256] generating profile certs ...
	I0328 01:03:19.964648 1131600 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/client.key
	I0328 01:03:19.964735 1131600 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/apiserver.key.22bfb146
	I0328 01:03:19.964810 1131600 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/proxy-client.key
	I0328 01:03:19.964956 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:03:19.965008 1131600 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:03:19.965021 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:03:19.965058 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:03:19.965091 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:03:19.965113 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:03:19.965154 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:19.966026 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:03:19.998578 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:03:20.042666 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:03:20.075405 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:03:20.117888 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0328 01:03:20.145160 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:03:20.178207 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:03:20.208610 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:03:20.235356 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:03:20.262434 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:03:20.291315 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:03:20.318034 1131600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:03:20.337627 1131600 ssh_runner.go:195] Run: openssl version
	I0328 01:03:20.344242 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:03:20.360732 1131600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:20.365858 1131600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:20.365926 1131600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:20.372120 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 01:03:20.384554 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:03:20.401731 1131600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:03:20.406945 1131600 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:03:20.407024 1131600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:03:20.414661 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:03:20.427573 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:03:20.439807 1131600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:03:20.445064 1131600 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:03:20.445138 1131600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:03:20.451754 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:03:20.464988 1131600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:03:20.470461 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:03:20.477200 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:03:20.484238 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:03:20.491125 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:03:20.497888 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:03:20.504680 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0328 01:03:20.511372 1131600 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:03:20.511477 1131600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:03:20.511542 1131600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:20.552247 1131600 cri.go:89] found id: ""
	I0328 01:03:20.552345 1131600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:03:20.564906 1131600 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:03:20.564937 1131600 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:03:20.564944 1131600 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:03:20.565002 1131600 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:03:20.576394 1131600 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:03:20.593699 1131600 kubeconfig.go:125] found "default-k8s-diff-port-283961" server: "https://192.168.39.224:8444"
	I0328 01:03:20.595978 1131600 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:03:20.609519 1131600 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.224
	I0328 01:03:20.609565 1131600 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:03:20.609583 1131600 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:03:20.609651 1131600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:20.651892 1131600 cri.go:89] found id: ""
	I0328 01:03:20.651967 1131600 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:03:20.671895 1131600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:03:16.365505 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:16.366404 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:16.366435 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:16.366360 1132389 retry.go:31] will retry after 488.383898ms: waiting for machine to come up
	I0328 01:03:16.856125 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:16.856727 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:16.856761 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:16.856626 1132389 retry.go:31] will retry after 617.77144ms: waiting for machine to come up
	I0328 01:03:17.476749 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:17.477351 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:17.477386 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:17.477282 1132389 retry.go:31] will retry after 835.951988ms: waiting for machine to come up
	I0328 01:03:18.315387 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:18.315894 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:18.315925 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:18.315848 1132389 retry.go:31] will retry after 1.405695765s: waiting for machine to come up
	I0328 01:03:19.723053 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:19.723559 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:19.723591 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:19.723473 1132389 retry.go:31] will retry after 1.555358462s: waiting for machine to come up
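The repeated "will retry after …: waiting for machine to come up" lines above are a backoff loop that polls libvirt until the no-preload-248059 domain gets a DHCP lease. A minimal illustrative Go sketch of that pattern follows; the lookupIP helper, the backoff schedule, and the timeout are assumptions made for the example, not minikube's actual retry.go.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for asking the hypervisor for the
// domain's current DHCP lease; in this sketch it always fails.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address of domain " + domain)
}

// waitForIP polls until the machine has an address, sleeping a randomised,
// growing interval between attempts, like the "will retry after" lines above.
func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		// Add jitter and grow the delay, capped at a few seconds.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 3*time.Second {
			backoff *= 2
		}
	}
	return "", fmt.Errorf("timed out waiting for %s to come up", domain)
}

func main() {
	if ip, err := waitForIP("no-preload-248059", 2*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found IP:", ip)
	}
}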
	I0328 01:03:18.913403 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:21.599662 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:20.568464 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:21.068983 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:21.568470 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:22.068772 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:22.568940 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:23.068907 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:23.568272 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.068055 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.568056 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:25.068006 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:20.685320 1131600 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:03:21.187521 1131600 kubeadm.go:156] found existing configuration files:
	
	I0328 01:03:21.187587 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0328 01:03:21.200463 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:03:21.200533 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:03:21.212763 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0328 01:03:21.224344 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:03:21.224419 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:03:21.235869 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0328 01:03:21.245970 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:03:21.246045 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:03:21.258589 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0328 01:03:21.270651 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:03:21.270724 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:03:21.283074 1131600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:03:21.295811 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:21.668224 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.046357 1131600 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.378083996s)
	I0328 01:03:23.046401 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.271959 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.353976 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.501611 1131600 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:03:23.501734 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.002619 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.502614 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.547383 1131600 api_server.go:72] duration metric: took 1.045771287s to wait for apiserver process to appear ...
	I0328 01:03:24.547419 1131600 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:03:24.547447 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:24.548081 1131600 api_server.go:269] stopped: https://192.168.39.224:8444/healthz: Get "https://192.168.39.224:8444/healthz": dial tcp 192.168.39.224:8444: connect: connection refused
	I0328 01:03:25.047885 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:21.279945 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:21.590947 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:21.590967 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:21.280358 1132389 retry.go:31] will retry after 1.905587467s: waiting for machine to come up
	I0328 01:03:23.187571 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:23.188214 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:23.188248 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:23.188159 1132389 retry.go:31] will retry after 2.68043246s: waiting for machine to come up
	I0328 01:03:25.871414 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:25.871997 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:25.872030 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:25.871956 1132389 retry.go:31] will retry after 2.689404788s: waiting for machine to come up
	I0328 01:03:23.913816 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:26.413616 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:27.352533 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:03:27.352570 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:03:27.352589 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:27.453408 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:27.453448 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:27.547781 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:27.552703 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:27.552738 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:28.048135 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:28.053291 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:28.053322 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:28.548374 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:28.553141 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:28.553178 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:29.047609 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:29.053027 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 200:
	ok
	I0328 01:03:29.060710 1131600 api_server.go:141] control plane version: v1.29.3
	I0328 01:03:29.060747 1131600 api_server.go:131] duration metric: took 4.513320481s to wait for apiserver health ...
	I0328 01:03:29.060757 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:03:29.060764 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:29.062763 1131600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
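The healthz exchange above is the usual progression for a restarted apiserver: 403 while anonymous access to /healthz is still forbidden, 500 while individual post-start hooks (rbac/bootstrap-roles, bootstrap-controller, the bootstrap priority classes) have not yet finished, and finally 200 once every hook reports ok. A minimal illustrative Go sketch of such a poll loop follows; the InsecureSkipVerify shortcut and the fixed 500ms interval are assumptions made for the example, not minikube's api_server.go.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it answers
// 200 OK or the timeout expires, logging the intermediate 403/500 bodies
// the way the report above does.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping TLS verification keeps the sketch self-contained; a real
		// client would trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.224:8444/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}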
	I0328 01:03:25.568927 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:26.068371 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:26.568107 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:27.068037 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:27.567985 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:28.068036 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:28.568843 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:29.068483 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:29.568942 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:30.068849 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:29.064492 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:03:29.089164 1131600 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:03:29.115071 1131600 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:03:29.126819 1131600 system_pods.go:59] 8 kube-system pods found
	I0328 01:03:29.126871 1131600 system_pods.go:61] "coredns-76f75df574-79cdj" [48ffe344-a386-4904-a73e-56e3ce0a8bef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:03:29.126885 1131600 system_pods.go:61] "etcd-default-k8s-diff-port-283961" [1d8fc768-e39c-4c96-bd65-2ae76fc9c6ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 01:03:29.126898 1131600 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-283961" [7c5c9f85-f16f-4248-8d2d-73c1ed2b0128] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 01:03:29.126912 1131600 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-283961" [2e943e7b-5506-4797-9e77-4a33e06056fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 01:03:29.126931 1131600 system_pods.go:61] "kube-proxy-d776v" [c1c86f61-b074-4a51-89e6-17c7b1076748] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:03:29.126944 1131600 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-283961" [8a840579-4145-4b68-ab3f-b1ebd3d63e81] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 01:03:29.126956 1131600 system_pods.go:61] "metrics-server-57f55c9bc5-w4ww4" [6d60f9e6-8ac7-4fad-91dc-61520586666c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:03:29.126968 1131600 system_pods.go:61] "storage-provisioner" [2b5e2e68-7e7c-46ec-bcec-ff9b01cbb8d9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:03:29.126979 1131600 system_pods.go:74] duration metric: took 11.875076ms to wait for pod list to return data ...
	I0328 01:03:29.126992 1131600 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:03:29.130927 1131600 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:03:29.130971 1131600 node_conditions.go:123] node cpu capacity is 2
	I0328 01:03:29.130986 1131600 node_conditions.go:105] duration metric: took 3.984383ms to run NodePressure ...
	I0328 01:03:29.131011 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:29.421513 1131600 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 01:03:29.426043 1131600 kubeadm.go:733] kubelet initialised
	I0328 01:03:29.426104 1131600 kubeadm.go:734] duration metric: took 4.524275ms waiting for restarted kubelet to initialise ...
	I0328 01:03:29.426114 1131600 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:03:29.432378 1131600 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-79cdj" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:28.563249 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:28.563778 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:28.563808 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:28.563718 1132389 retry.go:31] will retry after 2.919225956s: waiting for machine to come up
	I0328 01:03:28.913653 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:30.914379 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:31.484584 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.485027 1130827 main.go:141] libmachine: (no-preload-248059) Found IP for machine: 192.168.61.107
	I0328 01:03:31.485048 1130827 main.go:141] libmachine: (no-preload-248059) Reserving static IP address...
	I0328 01:03:31.485065 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has current primary IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.485584 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "no-preload-248059", mac: "52:54:00:58:33:e2", ip: "192.168.61.107"} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.485617 1130827 main.go:141] libmachine: (no-preload-248059) Reserved static IP address: 192.168.61.107
	I0328 01:03:31.485638 1130827 main.go:141] libmachine: (no-preload-248059) DBG | skip adding static IP to network mk-no-preload-248059 - found existing host DHCP lease matching {name: "no-preload-248059", mac: "52:54:00:58:33:e2", ip: "192.168.61.107"}
	I0328 01:03:31.485651 1130827 main.go:141] libmachine: (no-preload-248059) Waiting for SSH to be available...
	I0328 01:03:31.485671 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Getting to WaitForSSH function...
	I0328 01:03:31.487909 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.488293 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.488322 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.488469 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Using SSH client type: external
	I0328 01:03:31.488506 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa (-rw-------)
	I0328 01:03:31.488531 1130827 main.go:141] libmachine: (no-preload-248059) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:03:31.488541 1130827 main.go:141] libmachine: (no-preload-248059) DBG | About to run SSH command:
	I0328 01:03:31.488555 1130827 main.go:141] libmachine: (no-preload-248059) DBG | exit 0
	I0328 01:03:31.618358 1130827 main.go:141] libmachine: (no-preload-248059) DBG | SSH cmd err, output: <nil>: 
	I0328 01:03:31.618786 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetConfigRaw
	I0328 01:03:31.619494 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:31.622183 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.622555 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.622584 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.622889 1130827 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/config.json ...
	I0328 01:03:31.623120 1130827 machine.go:94] provisionDockerMachine start ...
	I0328 01:03:31.623147 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:31.623400 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:31.626078 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.626432 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.626458 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.626663 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:31.626864 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.627031 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.627179 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:31.627380 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:31.627595 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:31.627611 1130827 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:03:31.739662 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:03:31.739699 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:03:31.740049 1130827 buildroot.go:166] provisioning hostname "no-preload-248059"
	I0328 01:03:31.740086 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:03:31.740421 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:31.743410 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.743776 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.743811 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.744001 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:31.744212 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.744394 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.744515 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:31.744669 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:31.744846 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:31.744860 1130827 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-248059 && echo "no-preload-248059" | sudo tee /etc/hostname
	I0328 01:03:31.869330 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-248059
	
	I0328 01:03:31.869368 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:31.872451 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.872817 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.872868 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.873159 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:31.873405 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.873632 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.873803 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:31.873982 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:31.874220 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:31.874268 1130827 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-248059' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-248059/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-248059' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:03:31.997509 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:03:31.997543 1130827 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:03:31.997565 1130827 buildroot.go:174] setting up certificates
	I0328 01:03:31.997573 1130827 provision.go:84] configureAuth start
	I0328 01:03:31.997583 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:03:31.997870 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:32.000739 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.001127 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.001162 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.001306 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.003571 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.003958 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.003988 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.004162 1130827 provision.go:143] copyHostCerts
	I0328 01:03:32.004246 1130827 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:03:32.004261 1130827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:03:32.004329 1130827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:03:32.004442 1130827 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:03:32.004454 1130827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:03:32.004486 1130827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:03:32.004562 1130827 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:03:32.004572 1130827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:03:32.004602 1130827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:03:32.004667 1130827 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.no-preload-248059 san=[127.0.0.1 192.168.61.107 localhost minikube no-preload-248059]
	I0328 01:03:32.206585 1130827 provision.go:177] copyRemoteCerts
	I0328 01:03:32.206657 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:03:32.206691 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.210170 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.210636 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.210676 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.210979 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.211187 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.211364 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.211564 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:32.305858 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:03:32.337654 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0328 01:03:32.368942 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0328 01:03:32.401639 1130827 provision.go:87] duration metric: took 404.051415ms to configureAuth
	I0328 01:03:32.401669 1130827 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:03:32.401936 1130827 config.go:182] Loaded profile config "no-preload-248059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0328 01:03:32.402025 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.404890 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.405352 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.405387 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.405588 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.405858 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.406091 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.406303 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.406510 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:32.406731 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:32.406759 1130827 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:03:32.697738 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:03:32.697768 1130827 machine.go:97] duration metric: took 1.074632092s to provisionDockerMachine
	I0328 01:03:32.697781 1130827 start.go:293] postStartSetup for "no-preload-248059" (driver="kvm2")
	I0328 01:03:32.697795 1130827 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:03:32.697812 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.698263 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:03:32.698298 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.701020 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.701421 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.701450 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.701609 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.701837 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.702010 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.702188 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:32.790670 1130827 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:03:32.795098 1130827 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:03:32.795131 1130827 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:03:32.795222 1130827 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:03:32.795297 1130827 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:03:32.795402 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:03:32.806276 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:32.832753 1130827 start.go:296] duration metric: took 134.954685ms for postStartSetup
	I0328 01:03:32.832801 1130827 fix.go:56] duration metric: took 19.16097847s for fixHost
	I0328 01:03:32.832825 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.835830 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.836199 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.836237 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.836472 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.836707 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.836949 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.837104 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.837339 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:32.837551 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:32.837563 1130827 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0328 01:03:32.947440 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587812.922631180
	
	I0328 01:03:32.947477 1130827 fix.go:216] guest clock: 1711587812.922631180
	I0328 01:03:32.947486 1130827 fix.go:229] Guest: 2024-03-28 01:03:32.92263118 +0000 UTC Remote: 2024-03-28 01:03:32.832804811 +0000 UTC m=+356.715929719 (delta=89.826369ms)
	I0328 01:03:32.947507 1130827 fix.go:200] guest clock delta is within tolerance: 89.826369ms
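The clock check above runs "date +%s.%N" on the guest over SSH and compares the result with the host clock, accepting the restarted machine only if the skew stays within tolerance. A small illustrative Go sketch of that comparison follows; the 2s tolerance value and the helper names are assumptions for the example, not minikube's fix.go.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest timestamp (the output of "date +%s.%N") and
// reports the absolute skew against the host clock plus whether it is
// within the given tolerance, mirroring the log lines above.
func clockDelta(guestStamp string, tolerance time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(guestStamp, 64)
	if err != nil {
		return 0, false, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance, nil
}

func main() {
	d, ok, err := clockDelta("1711587812.922631180", 2*time.Second)
	fmt.Println("delta:", d, "within tolerance:", ok, "err:", err)
}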
	I0328 01:03:32.947512 1130827 start.go:83] releasing machines lock for "no-preload-248059", held for 19.275724068s
	I0328 01:03:32.947531 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.947805 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:32.950439 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.950814 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.950844 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.950992 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.951517 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.951707 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.951809 1130827 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:03:32.951852 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.951938 1130827 ssh_runner.go:195] Run: cat /version.json
	I0328 01:03:32.951964 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.954721 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955058 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955135 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.955165 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955314 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.955473 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.955512 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.955538 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955622 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.955698 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.955809 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:32.955859 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.956001 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.956134 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:33.079381 1130827 ssh_runner.go:195] Run: systemctl --version
	I0328 01:03:33.086184 1130827 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:03:33.241799 1130827 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:03:33.248779 1130827 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:03:33.248893 1130827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:03:33.267944 1130827 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:03:33.267977 1130827 start.go:494] detecting cgroup driver to use...
	I0328 01:03:33.268082 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:03:33.286132 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:03:33.301676 1130827 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:03:33.301762 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:03:33.317202 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:03:33.333162 1130827 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:03:33.458738 1130827 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:03:33.608509 1130827 docker.go:233] disabling docker service ...
	I0328 01:03:33.608623 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:03:33.626616 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:03:33.641798 1130827 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:03:33.808865 1130827 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:03:33.962636 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:03:33.978138 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:03:34.002323 1130827 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 01:03:34.002404 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.014483 1130827 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:03:34.014589 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.028647 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.041601 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.054993 1130827 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:03:34.066671 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.079389 1130827 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.099660 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.112379 1130827 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:03:34.123050 1130827 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:03:34.123109 1130827 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:03:34.137132 1130827 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:03:34.147092 1130827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:34.282367 1130827 ssh_runner.go:195] Run: sudo systemctl restart crio
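Condensed for readability, the CRI-O reconfiguration carried out in the preceding lines boils down to the following steps on the guest (commands and paths copied from the log; this is a summary of what was already run, not an additional procedure):

	# pause image and cgroup driver
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# let pods bind low ports without extra privileges
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
	# kernel prerequisites, then restart the runtime
	sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload && sudo systemctl restart crio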
	I0328 01:03:34.436510 1130827 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:03:34.436599 1130827 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:03:34.443019 1130827 start.go:562] Will wait 60s for crictl version
	I0328 01:03:34.443092 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:34.447740 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:03:34.488366 1130827 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:03:34.488469 1130827 ssh_runner.go:195] Run: crio --version
	I0328 01:03:34.520940 1130827 ssh_runner.go:195] Run: crio --version
	I0328 01:03:34.557953 1130827 out.go:177] * Preparing Kubernetes v1.30.0-beta.0 on CRI-O 1.29.1 ...
	I0328 01:03:30.568918 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:31.068097 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:31.568306 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:32.068345 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:32.568773 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:33.068072 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:33.568377 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:34.068141 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:34.568574 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:35.067986 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:31.439199 1131600 pod_ready.go:102] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:33.439575 1131600 pod_ready.go:102] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:34.559624 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:34.563089 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:34.563549 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:34.563583 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:34.563943 1130827 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0328 01:03:34.570153 1130827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
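The bash one-liner above rebuilds /etc/hosts through a temp file: any stale host.minikube.internal entry is filtered out and the current gateway address is appended, leaving exactly one tab-separated line for it on the guest:

	192.168.61.1	host.minikube.internal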
	I0328 01:03:34.584566 1130827 kubeadm.go:877] updating cluster {Name:no-preload-248059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0-beta.0 ClusterName:no-preload-248059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:03:34.584723 1130827 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0328 01:03:34.584786 1130827 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:34.620182 1130827 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-beta.0". assuming images are not preloaded.
	I0328 01:03:34.620215 1130827 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-beta.0 registry.k8s.io/kube-controller-manager:v1.30.0-beta.0 registry.k8s.io/kube-scheduler:v1.30.0-beta.0 registry.k8s.io/kube-proxy:v1.30.0-beta.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0328 01:03:34.620297 1130827 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:34.620312 1130827 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:34.620333 1130827 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:34.620301 1130827 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.620374 1130827 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:34.620401 1130827 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0328 01:03:34.620481 1130827 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:34.620319 1130827 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:34.621997 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.622009 1130827 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:34.622009 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:34.622052 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:34.621997 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:34.622115 1130827 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:34.621996 1130827 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:34.622438 1130827 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0328 01:03:34.832761 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.849045 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0328 01:03:34.868049 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:34.883941 1130827 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-beta.0" does not exist at hash "3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8" in container runtime
	I0328 01:03:34.883988 1130827 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.884047 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:34.884972 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:34.887551 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:34.899677 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:34.904772 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:35.045850 1130827 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0328 01:03:35.045906 1130827 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:35.045944 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.045959 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:35.064862 1130827 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" does not exist at hash "c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa" in container runtime
	I0328 01:03:35.064908 1130827 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:35.064959 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.066700 1130827 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" does not exist at hash "f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841" in container runtime
	I0328 01:03:35.066753 1130827 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:35.066820 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.097425 1130827 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" does not exist at hash "746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac" in container runtime
	I0328 01:03:35.097479 1130827 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:35.097546 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.097619 1130827 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0328 01:03:35.097667 1130827 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:35.097715 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.126977 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:35.126980 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.127020 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:35.127084 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:35.127090 1130827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.127082 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:35.127161 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:35.264395 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:35.264499 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0 (exists)
	I0328 01:03:35.264534 1130827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:35.264543 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.264506 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0328 01:03:35.264590 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.264631 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:35.264652 1130827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0328 01:03:35.264516 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:35.264584 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0328 01:03:35.264717 1130827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:35.264728 1130827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:35.264768 1130827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0328 01:03:35.269734 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0328 01:03:35.277344 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0 (exists)
	I0328 01:03:35.277580 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0328 01:03:35.279792 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0 (exists)
	I0328 01:03:35.280423 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0 (exists)
	I0328 01:03:35.535980 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:33.413451 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:35.414017 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:37.913609 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:35.568345 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:36.068227 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:36.568528 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:37.068834 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:37.568407 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:38.068142 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:38.568732 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:39.068094 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:39.568799 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:40.068973 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:35.940767 1131600 pod_ready.go:102] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:37.440919 1131600 pod_ready.go:92] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:37.440949 1131600 pod_ready.go:81] duration metric: took 8.008542386s for pod "coredns-76f75df574-79cdj" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:37.440963 1131600 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:39.452822 1131600 pod_ready.go:102] pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:40.467937 1131600 pod_ready.go:92] pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.467973 1131600 pod_ready.go:81] duration metric: took 3.027001179s for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.467987 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.491342 1131600 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.491373 1131600 pod_ready.go:81] duration metric: took 23.375914ms for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.491387 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.511379 1131600 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.511414 1131600 pod_ready.go:81] duration metric: took 20.018124ms for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.511430 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d776v" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.526689 1131600 pod_ready.go:92] pod "kube-proxy-d776v" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.526724 1131600 pod_ready.go:81] duration metric: took 15.28424ms for pod "kube-proxy-d776v" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.526738 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:37.431690 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0: (2.167073369s)
	I0328 01:03:37.431729 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 from cache
	I0328 01:03:37.431755 1130827 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0328 01:03:37.431764 1130827 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.895749302s)
	I0328 01:03:37.431805 1130827 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0328 01:03:37.431811 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0328 01:03:37.431837 1130827 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:37.431870 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:39.913936 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:42.412656 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:40.568441 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:41.068790 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:41.568919 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:42.068166 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:42.568012 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:43.068027 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:43.568916 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:44.067940 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:44.568074 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:45.068786 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:42.535179 1131600 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:44.034128 1131600 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:44.034164 1131600 pod_ready.go:81] duration metric: took 3.507415677s for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:44.034175 1131600 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:41.523268 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.091420228s)
	I0328 01:03:41.523305 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0328 01:03:41.523330 1130827 ssh_runner.go:235] Completed: which crictl: (4.091431875s)
	I0328 01:03:41.523345 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:41.523412 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:41.523445 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:41.567312 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0328 01:03:41.567455 1130827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0328 01:03:44.336954 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0: (2.813479223s)
	I0328 01:03:44.336991 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 from cache
	I0328 01:03:44.336994 1130827 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.769509386s)
	I0328 01:03:44.337020 1130827 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0328 01:03:44.337035 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0328 01:03:44.337080 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0328 01:03:44.414767 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:46.415110 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:45.568662 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:46.068299 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:46.568793 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:47.068929 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:47.568250 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:48.068910 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:48.568138 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:49.068128 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:49.568153 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:50.068075 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:46.042489 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:48.541049 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:50.547355 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:46.297705 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.960592772s)
	I0328 01:03:46.297744 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0328 01:03:46.297776 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:46.297828 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:47.769522 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0: (1.471661236s)
	I0328 01:03:47.769569 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 from cache
	I0328 01:03:47.769602 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:47.769656 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:50.231843 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0: (2.462162757s)
	I0328 01:03:50.231876 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 from cache
	I0328 01:03:50.231902 1130827 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0328 01:03:50.231956 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0328 01:03:48.913184 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:51.412474 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:50.568929 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:51.068812 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:51.568899 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:52.068890 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:52.568751 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:53.068406 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:53.568466 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.068039 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.568745 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:55.068690 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:53.041197 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:55.041977 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:51.188382 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0328 01:03:51.188441 1130827 cache_images.go:123] Successfully loaded all cached images
	I0328 01:03:51.188448 1130827 cache_images.go:92] duration metric: took 16.568214969s to LoadCachedImages
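Each of the cached images above goes through the same transfer-and-load pattern: the stale tag is removed with crictl, the archive under /var/lib/minikube/images is reused when the stat check shows it already exists, and podman load imports it into the runtime's storage. For the etcd image from this run (commands copied from the log, grouped here only for readability):

	sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0       # "exists", so the copy step is skipped
	sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0   # took 4.091s for etcd in this run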
	I0328 01:03:51.188464 1130827 kubeadm.go:928] updating node { 192.168.61.107 8443 v1.30.0-beta.0 crio true true} ...
	I0328 01:03:51.188628 1130827 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-248059 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-248059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:03:51.188710 1130827 ssh_runner.go:195] Run: crio config
	I0328 01:03:51.237071 1130827 cni.go:84] Creating CNI manager for ""
	I0328 01:03:51.237099 1130827 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:51.237109 1130827 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:03:51.237131 1130827 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.107 APIServerPort:8443 KubernetesVersion:v1.30.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-248059 NodeName:no-preload-248059 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 01:03:51.237263 1130827 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-248059"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 01:03:51.237330 1130827 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-beta.0
	I0328 01:03:51.248044 1130827 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:03:51.248113 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:03:51.257854 1130827 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0328 01:03:51.276307 1130827 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0328 01:03:51.294698 1130827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0328 01:03:51.313297 1130827 ssh_runner.go:195] Run: grep 192.168.61.107	control-plane.minikube.internal$ /etc/hosts
	I0328 01:03:51.317668 1130827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:51.330478 1130827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:51.457500 1130827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:03:51.484463 1130827 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059 for IP: 192.168.61.107
	I0328 01:03:51.484493 1130827 certs.go:194] generating shared ca certs ...
	I0328 01:03:51.484518 1130827 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:51.484718 1130827 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:03:51.484768 1130827 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:03:51.484781 1130827 certs.go:256] generating profile certs ...
	I0328 01:03:51.484910 1130827 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/client.key
	I0328 01:03:51.484989 1130827 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/apiserver.key.85d037b2
	I0328 01:03:51.485040 1130827 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/proxy-client.key
	I0328 01:03:51.485196 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:03:51.485243 1130827 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:03:51.485257 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:03:51.485292 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:03:51.485327 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:03:51.485357 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:03:51.485416 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:51.486614 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:03:51.537554 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:03:51.587256 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:03:51.620264 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:03:51.652100 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0328 01:03:51.694388 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:03:51.720913 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:03:51.747141 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0328 01:03:51.776370 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:03:51.803168 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:03:51.831138 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:03:51.857272 1130827 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:03:51.876070 1130827 ssh_runner.go:195] Run: openssl version
	I0328 01:03:51.882197 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:03:51.893560 1130827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:03:51.898293 1130827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:03:51.898361 1130827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:03:51.904549 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:03:51.918175 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:03:51.930387 1130827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:03:51.935610 1130827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:03:51.935691 1130827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:03:51.942127 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:03:51.954252 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:03:51.966727 1130827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:51.971742 1130827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:51.971810 1130827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:51.978082 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 01:03:51.992233 1130827 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:03:51.997556 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:03:52.004178 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:03:52.010666 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:03:52.017076 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:03:52.023334 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:03:52.029980 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
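The -checkend 86400 flag in the six openssl calls above makes openssl exit 0 only when the certificate is still valid 86400 seconds (24 hours) from now, presumably so the existing certs can be reused without regeneration. A standalone check of the same kind looks like this:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "still valid for at least 24h" \
	  || echo "expires within 24h"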
	I0328 01:03:52.036395 1130827 kubeadm.go:391] StartCluster: {Name:no-preload-248059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0-beta.0 ClusterName:no-preload-248059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0
m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:03:52.036483 1130827 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:03:52.036539 1130827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:52.080486 1130827 cri.go:89] found id: ""
	I0328 01:03:52.080580 1130827 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:03:52.094552 1130827 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:03:52.094583 1130827 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:03:52.094599 1130827 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:03:52.094650 1130827 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:03:52.107008 1130827 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:03:52.108200 1130827 kubeconfig.go:125] found "no-preload-248059" server: "https://192.168.61.107:8443"
	I0328 01:03:52.110536 1130827 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:03:52.122998 1130827 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.107
	I0328 01:03:52.123044 1130827 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:03:52.123090 1130827 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:03:52.123170 1130827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:52.165568 1130827 cri.go:89] found id: ""
	I0328 01:03:52.165666 1130827 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:03:52.183930 1130827 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:03:52.195188 1130827 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:03:52.195215 1130827 kubeadm.go:156] found existing configuration files:
	
	I0328 01:03:52.195271 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:03:52.205872 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:03:52.205932 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:03:52.216481 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:03:52.226719 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:03:52.226787 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:03:52.238852 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:03:52.250272 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:03:52.250341 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:03:52.262474 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:03:52.273981 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:03:52.274059 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:03:52.286028 1130827 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:03:52.297016 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:52.406981 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.521529 1130827 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.114505514s)
	I0328 01:03:53.521569 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.735728 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.808590 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.931165 1130827 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:03:53.931281 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.432358 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.931653 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.948811 1130827 api_server.go:72] duration metric: took 1.017647613s to wait for apiserver process to appear ...
	I0328 01:03:54.948843 1130827 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:03:54.948871 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:54.949490 1130827 api_server.go:269] stopped: https://192.168.61.107:8443/healthz: Get "https://192.168.61.107:8443/healthz": dial tcp 192.168.61.107:8443: connect: connection refused
	I0328 01:03:55.449050 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:53.413775 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:55.914095 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:57.515811 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:03:57.515852 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:03:57.515872 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:57.564527 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:03:57.564560 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:03:57.949780 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:57.955515 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0328 01:03:57.955565 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0328 01:03:58.449103 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:58.456345 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0328 01:03:58.456384 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0328 01:03:58.949575 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:58.954466 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0328 01:03:58.961213 1130827 api_server.go:141] control plane version: v1.30.0-beta.0
	I0328 01:03:58.961244 1130827 api_server.go:131] duration metric: took 4.012391589s to wait for apiserver health ...
	I0328 01:03:58.961256 1130827 cni.go:84] Creating CNI manager for ""
	I0328 01:03:58.961265 1130827 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:58.963147 1130827 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:03:55.568378 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:56.068253 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:56.568989 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:57.068709 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:57.569038 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:58.068236 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:58.568386 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:59.068971 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:59.568858 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:00.067964 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:57.043266 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:59.541626 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:58.964446 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:03:58.979425 1130827 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:03:59.042826 1130827 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:03:59.060388 1130827 system_pods.go:59] 8 kube-system pods found
	I0328 01:03:59.060429 1130827 system_pods.go:61] "coredns-7db6d8ff4d-86n4s" [71402ca8-dfa7-4caf-a422-6de9f24bf9dc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:03:59.060439 1130827 system_pods.go:61] "etcd-no-preload-248059" [954b6886-b84f-4d94-bbce-7e520142eb4b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 01:03:59.060451 1130827 system_pods.go:61] "kube-apiserver-no-preload-248059" [2d3caabe-27c2-44e7-8f52-76e03f262e2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 01:03:59.060462 1130827 system_pods.go:61] "kube-controller-manager-no-preload-248059" [30b9f4aa-c9a7-4d91-8e4d-35ad32f40425] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 01:03:59.060472 1130827 system_pods.go:61] "kube-proxy-b9qpb" [7ab4cca8-0ba2-4177-84cd-c6ac045930fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:03:59.060481 1130827 system_pods.go:61] "kube-scheduler-no-preload-248059" [4d9e45e3-d990-40d4-a4be-8384c39eb9b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 01:03:59.060493 1130827 system_pods.go:61] "metrics-server-569cc877fc-cvnrj" [063a47ac-9ceb-4521-9dde-aca02ec5e0d8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:03:59.060508 1130827 system_pods.go:61] "storage-provisioner" [0a0eb2d3-a426-4b76-8009-1a0a0e0312bb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:03:59.060518 1130827 system_pods.go:74] duration metric: took 17.666067ms to wait for pod list to return data ...
	I0328 01:03:59.060533 1130827 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:03:59.065018 1130827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:03:59.065054 1130827 node_conditions.go:123] node cpu capacity is 2
	I0328 01:03:59.065071 1130827 node_conditions.go:105] duration metric: took 4.531253ms to run NodePressure ...
	I0328 01:03:59.065097 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:59.454609 1130827 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 01:03:59.459707 1130827 kubeadm.go:733] kubelet initialised
	I0328 01:03:59.459730 1130827 kubeadm.go:734] duration metric: took 5.09757ms waiting for restarted kubelet to initialise ...
	I0328 01:03:59.459739 1130827 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:03:59.465352 1130827 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.471020 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.471054 1130827 pod_ready.go:81] duration metric: took 5.676291ms for pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.471067 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.471075 1130827 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.476393 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "etcd-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.476421 1130827 pod_ready.go:81] duration metric: took 5.333391ms for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.476430 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "etcd-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.476436 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.485889 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "kube-apiserver-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.485924 1130827 pod_ready.go:81] duration metric: took 9.481204ms for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.485937 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "kube-apiserver-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.485957 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.491064 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.491095 1130827 pod_ready.go:81] duration metric: took 5.125981ms for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.491107 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.491116 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-b9qpb" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.858724 1130827 pod_ready.go:92] pod "kube-proxy-b9qpb" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:59.858753 1130827 pod_ready.go:81] duration metric: took 367.628034ms for pod "kube-proxy-b9qpb" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.858764 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:58.413911 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:00.913297 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:02.913414 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:00.568622 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:01.067943 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:01.567964 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:02.068537 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:02.568772 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:03.068458 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:03.568943 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:04.068085 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:04.068176 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:04.112601 1131323 cri.go:89] found id: ""
	I0328 01:04:04.112631 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.112642 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:04.112650 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:04.112726 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:04.151837 1131323 cri.go:89] found id: ""
	I0328 01:04:04.151873 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.151885 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:04.151894 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:04.151965 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:04.193411 1131323 cri.go:89] found id: ""
	I0328 01:04:04.193451 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.193463 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:04.193473 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:04.193545 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:04.239623 1131323 cri.go:89] found id: ""
	I0328 01:04:04.239652 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.239662 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:04.239673 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:04.239732 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:04.279561 1131323 cri.go:89] found id: ""
	I0328 01:04:04.279600 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.279615 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:04.279627 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:04.279708 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:04.318680 1131323 cri.go:89] found id: ""
	I0328 01:04:04.318710 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.318722 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:04.318731 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:04.318797 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:04.356486 1131323 cri.go:89] found id: ""
	I0328 01:04:04.356514 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.356523 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:04.356530 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:04.356586 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:04.394281 1131323 cri.go:89] found id: ""
	I0328 01:04:04.394319 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.394334 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:04.394348 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:04.394364 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:04.458688 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:04.458729 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:04.501399 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:04.501440 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:04.556183 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:04.556225 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:04.571392 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:04.571427 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:04.709967 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:02.041555 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:04.541464 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:01.866183 1130827 pod_ready.go:102] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:03.868706 1130827 pod_ready.go:102] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:04.915667 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:07.412548 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:07.210550 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:07.224274 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:07.224345 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:07.262604 1131323 cri.go:89] found id: ""
	I0328 01:04:07.262640 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.262665 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:07.262674 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:07.262763 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:07.296868 1131323 cri.go:89] found id: ""
	I0328 01:04:07.296907 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.296918 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:07.296926 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:07.296992 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:07.333110 1131323 cri.go:89] found id: ""
	I0328 01:04:07.333149 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.333162 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:07.333171 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:07.333240 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:07.371138 1131323 cri.go:89] found id: ""
	I0328 01:04:07.371168 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.371186 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:07.371195 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:07.371259 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:07.412197 1131323 cri.go:89] found id: ""
	I0328 01:04:07.412230 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.412242 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:07.412251 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:07.412331 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:07.457021 1131323 cri.go:89] found id: ""
	I0328 01:04:07.457052 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.457070 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:07.457080 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:07.457153 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:07.517996 1131323 cri.go:89] found id: ""
	I0328 01:04:07.518026 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.518034 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:07.518040 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:07.518111 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:07.556829 1131323 cri.go:89] found id: ""
	I0328 01:04:07.556856 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.556865 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:07.556875 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:07.556890 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:07.572234 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:07.572270 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:07.648615 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:07.648641 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:07.648658 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:07.719617 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:07.719665 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:07.764053 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:07.764097 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:10.319480 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:06.542160 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:08.550725 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:06.366150 1130827 pod_ready.go:102] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:07.365200 1130827 pod_ready.go:92] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:04:07.365233 1130827 pod_ready.go:81] duration metric: took 7.506461201s for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:04:07.365256 1130827 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace to be "Ready" ...
	I0328 01:04:09.373694 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:09.413378 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:11.913400 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:10.334347 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:10.335893 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:10.375231 1131323 cri.go:89] found id: ""
	I0328 01:04:10.375263 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.375274 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:10.375281 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:10.375353 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:10.413652 1131323 cri.go:89] found id: ""
	I0328 01:04:10.413706 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.413726 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:10.413736 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:10.413805 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:10.449546 1131323 cri.go:89] found id: ""
	I0328 01:04:10.449588 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.449597 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:10.449604 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:10.449658 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:10.487518 1131323 cri.go:89] found id: ""
	I0328 01:04:10.487556 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.487570 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:10.487579 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:10.487663 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:10.525088 1131323 cri.go:89] found id: ""
	I0328 01:04:10.525124 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.525137 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:10.525146 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:10.525213 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:10.567177 1131323 cri.go:89] found id: ""
	I0328 01:04:10.567209 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.567221 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:10.567231 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:10.567302 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:10.609440 1131323 cri.go:89] found id: ""
	I0328 01:04:10.609474 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.609485 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:10.609492 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:10.609549 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:10.652466 1131323 cri.go:89] found id: ""
	I0328 01:04:10.652502 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.652516 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:10.652529 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:10.652546 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:10.737406 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:10.737451 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:10.786955 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:10.786991 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:10.843072 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:10.843114 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:10.857209 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:10.857244 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:10.950885 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:13.451542 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:13.465833 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:13.465924 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:13.503353 1131323 cri.go:89] found id: ""
	I0328 01:04:13.503386 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.503398 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:13.503407 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:13.503474 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:13.543175 1131323 cri.go:89] found id: ""
	I0328 01:04:13.543208 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.543220 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:13.543229 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:13.543287 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:13.580796 1131323 cri.go:89] found id: ""
	I0328 01:04:13.580829 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.580840 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:13.580848 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:13.580900 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:13.619483 1131323 cri.go:89] found id: ""
	I0328 01:04:13.619516 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.619529 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:13.619539 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:13.619596 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:13.654651 1131323 cri.go:89] found id: ""
	I0328 01:04:13.654683 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.654697 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:13.654705 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:13.654774 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:13.691763 1131323 cri.go:89] found id: ""
	I0328 01:04:13.691794 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.691805 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:13.691813 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:13.691881 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:13.730580 1131323 cri.go:89] found id: ""
	I0328 01:04:13.730614 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.730627 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:13.730635 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:13.730694 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:13.767802 1131323 cri.go:89] found id: ""
	I0328 01:04:13.767834 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.767848 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:13.767860 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:13.767876 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:13.815612 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:13.815653 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:13.870945 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:13.870991 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:13.891456 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:13.891506 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:14.022124 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:14.022163 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:14.022187 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:11.041196 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:13.044490 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:15.541942 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:11.873574 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:13.875251 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:14.412081 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:16.412837 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:16.604087 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:16.618872 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:16.618971 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:16.665628 1131323 cri.go:89] found id: ""
	I0328 01:04:16.665661 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.665675 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:16.665683 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:16.665780 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:16.703727 1131323 cri.go:89] found id: ""
	I0328 01:04:16.703758 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.703768 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:16.703775 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:16.703835 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:16.741425 1131323 cri.go:89] found id: ""
	I0328 01:04:16.741455 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.741464 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:16.741470 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:16.741524 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:16.782333 1131323 cri.go:89] found id: ""
	I0328 01:04:16.782373 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.782387 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:16.782398 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:16.782469 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:16.820321 1131323 cri.go:89] found id: ""
	I0328 01:04:16.820355 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.820364 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:16.820372 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:16.820429 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:16.861091 1131323 cri.go:89] found id: ""
	I0328 01:04:16.861130 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.861144 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:16.861154 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:16.861226 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:16.901347 1131323 cri.go:89] found id: ""
	I0328 01:04:16.901394 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.901408 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:16.901418 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:16.901491 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:16.944027 1131323 cri.go:89] found id: ""
	I0328 01:04:16.944067 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.944080 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:16.944093 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:16.944110 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:16.959104 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:16.959151 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:17.035432 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:17.035464 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:17.035480 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:17.116236 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:17.116276 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:17.159321 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:17.159370 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:19.711326 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:19.726016 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:19.726094 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:19.776639 1131323 cri.go:89] found id: ""
	I0328 01:04:19.776676 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.776690 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:19.776700 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:19.776782 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:19.817849 1131323 cri.go:89] found id: ""
	I0328 01:04:19.817887 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.817897 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:19.817904 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:19.817981 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:19.855055 1131323 cri.go:89] found id: ""
	I0328 01:04:19.855089 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.855102 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:19.855110 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:19.855177 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:19.895296 1131323 cri.go:89] found id: ""
	I0328 01:04:19.895332 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.895346 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:19.895354 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:19.895414 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:19.930936 1131323 cri.go:89] found id: ""
	I0328 01:04:19.930968 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.930980 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:19.930989 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:19.931067 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:19.968573 1131323 cri.go:89] found id: ""
	I0328 01:04:19.968610 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.968623 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:19.968632 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:19.968693 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:20.006130 1131323 cri.go:89] found id: ""
	I0328 01:04:20.006180 1131323 logs.go:276] 0 containers: []
	W0328 01:04:20.006195 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:20.006203 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:20.006304 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:20.043646 1131323 cri.go:89] found id: ""
	I0328 01:04:20.043678 1131323 logs.go:276] 0 containers: []
	W0328 01:04:20.043689 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:20.043701 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:20.043717 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:20.058728 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:20.058761 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:20.136392 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:20.136417 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:20.136431 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:20.214971 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:20.215015 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:20.255002 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:20.255047 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
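The block above is one iteration of the control-plane detection loop that repeats for the rest of this log: the runner looks for a kube-apiserver process with pgrep, then asks CRI-O via "crictl ps -a --quiet --name=<component>" for each expected container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), and finally gathers kubelet, dmesg, describe-nodes, CRI-O and container-status output. Every query returns an empty ID list ('found id: ""'), so no control-plane containers exist on the node at this point. The following is a minimal, illustrative sketch of that kind of container check (it mirrors the crictl call shown in the log and assumes passwordless sudo and crictl on the node's PATH; it is not minikube's actual cri.go code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasContainer reports whether CRI-O knows about any container whose name
// matches the given component, mirroring the
// "sudo crictl ps -a --quiet --name=<component>" calls in the log above.
func hasContainer(component string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return false, err
	}
	// --quiet prints one container ID per line; empty output corresponds to
	// the 'found id: ""' lines in the log, i.e. no matching container.
	return strings.TrimSpace(string(out)) != "", nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ok, err := hasContainer(c)
		fmt.Printf("%s: present=%v err=%v\n", c, ok, err)
	}
}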
	I0328 01:04:18.041868 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:20.542175 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:16.372600 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:18.373203 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:20.374228 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:18.913596 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:20.913978 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:22.914777 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
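Interleaved with that loop are status polls from three other test clusters (processes 1131600, 1130827 and 1130949), each waiting for a metrics-server pod in the kube-system namespace; the repeated pod_ready.go:102 lines mean the pod's Ready condition was still False on every poll. A sketch of that kind of readiness check with client-go is shown below; it assumes an already configured clientset and is only an illustration of what the Ready condition poll amounts to, not the helper used by pod_ready.go:

package podcheck

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podIsReady returns true when the pod's Ready condition is True, which is
// what the repeated `has status "Ready":"False"` lines are waiting for.
func podIsReady(ctx context.Context, cs kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}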
	I0328 01:04:22.810078 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:22.824083 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:22.824169 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:22.862037 1131323 cri.go:89] found id: ""
	I0328 01:04:22.862066 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.862074 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:22.862081 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:22.862141 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:22.901625 1131323 cri.go:89] found id: ""
	I0328 01:04:22.901658 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.901670 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:22.901679 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:22.901752 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:22.938858 1131323 cri.go:89] found id: ""
	I0328 01:04:22.938891 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.938903 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:22.938912 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:22.938983 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:22.978781 1131323 cri.go:89] found id: ""
	I0328 01:04:22.978818 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.978829 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:22.978837 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:22.978910 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:23.016844 1131323 cri.go:89] found id: ""
	I0328 01:04:23.016882 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.016895 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:23.016904 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:23.016975 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:23.058456 1131323 cri.go:89] found id: ""
	I0328 01:04:23.058508 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.058522 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:23.058531 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:23.058604 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:23.099368 1131323 cri.go:89] found id: ""
	I0328 01:04:23.099399 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.099408 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:23.099420 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:23.099492 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:23.135593 1131323 cri.go:89] found id: ""
	I0328 01:04:23.135634 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.135653 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:23.135665 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:23.135679 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:23.191215 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:23.191260 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:23.206849 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:23.206884 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:23.289566 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:23.289596 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:23.289618 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:23.365429 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:23.365480 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:23.042312 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.541788 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:22.872233 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.373908 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.413591 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:27.912983 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.914883 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:25.929336 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:25.929415 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:25.969452 1131323 cri.go:89] found id: ""
	I0328 01:04:25.969485 1131323 logs.go:276] 0 containers: []
	W0328 01:04:25.969497 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:25.969506 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:25.969573 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:26.008978 1131323 cri.go:89] found id: ""
	I0328 01:04:26.009006 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.009015 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:26.009022 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:26.009075 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:26.051110 1131323 cri.go:89] found id: ""
	I0328 01:04:26.051138 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.051146 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:26.051153 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:26.051213 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:26.088231 1131323 cri.go:89] found id: ""
	I0328 01:04:26.088262 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.088271 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:26.088277 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:26.088342 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:26.125741 1131323 cri.go:89] found id: ""
	I0328 01:04:26.125782 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.125794 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:26.125800 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:26.125867 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:26.163367 1131323 cri.go:89] found id: ""
	I0328 01:04:26.163406 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.163417 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:26.163426 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:26.163503 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:26.202302 1131323 cri.go:89] found id: ""
	I0328 01:04:26.202340 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.202355 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:26.202364 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:26.202422 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:26.240880 1131323 cri.go:89] found id: ""
	I0328 01:04:26.240911 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.240921 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:26.240931 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:26.240943 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:26.283151 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:26.283180 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:26.341313 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:26.341350 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:26.356762 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:26.356791 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:26.428033 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:26.428054 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:26.428066 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:29.006332 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:29.020634 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:29.020745 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:29.060812 1131323 cri.go:89] found id: ""
	I0328 01:04:29.060843 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.060852 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:29.060859 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:29.060916 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:29.100110 1131323 cri.go:89] found id: ""
	I0328 01:04:29.100139 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.100149 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:29.100155 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:29.100212 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:29.140345 1131323 cri.go:89] found id: ""
	I0328 01:04:29.140384 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.140396 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:29.140404 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:29.140479 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:29.182415 1131323 cri.go:89] found id: ""
	I0328 01:04:29.182449 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.182459 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:29.182465 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:29.182533 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:29.225177 1131323 cri.go:89] found id: ""
	I0328 01:04:29.225214 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.225225 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:29.225233 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:29.225310 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:29.265437 1131323 cri.go:89] found id: ""
	I0328 01:04:29.265471 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.265485 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:29.265493 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:29.265556 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:29.301578 1131323 cri.go:89] found id: ""
	I0328 01:04:29.301617 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.301630 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:29.301639 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:29.301719 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:29.340816 1131323 cri.go:89] found id: ""
	I0328 01:04:29.340847 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.340856 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:29.340867 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:29.340880 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:29.384658 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:29.384687 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:29.439243 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:29.439285 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:29.456179 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:29.456211 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:29.534878 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:29.534906 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:29.534927 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:28.041463 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:30.042506 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:27.872489 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:30.371109 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:29.913856 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:32.415699 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:32.115798 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:32.130464 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:32.130560 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:32.168846 1131323 cri.go:89] found id: ""
	I0328 01:04:32.168877 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.168887 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:32.168894 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:32.168952 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:32.208590 1131323 cri.go:89] found id: ""
	I0328 01:04:32.208622 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.208632 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:32.208638 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:32.208694 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:32.247323 1131323 cri.go:89] found id: ""
	I0328 01:04:32.247362 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.247375 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:32.247384 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:32.247507 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:32.285260 1131323 cri.go:89] found id: ""
	I0328 01:04:32.285293 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.285312 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:32.285319 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:32.285395 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:32.326678 1131323 cri.go:89] found id: ""
	I0328 01:04:32.326712 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.326725 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:32.326740 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:32.326823 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:32.363375 1131323 cri.go:89] found id: ""
	I0328 01:04:32.363403 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.363412 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:32.363419 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:32.363473 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:32.401410 1131323 cri.go:89] found id: ""
	I0328 01:04:32.401449 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.401462 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:32.401470 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:32.401558 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:32.438645 1131323 cri.go:89] found id: ""
	I0328 01:04:32.438680 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.438691 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:32.438703 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:32.438718 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:32.488743 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:32.488786 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:32.503908 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:32.503944 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:32.577307 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:32.577333 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:32.577350 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:32.657787 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:32.657832 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:35.201151 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:35.215313 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:35.215383 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:35.253467 1131323 cri.go:89] found id: ""
	I0328 01:04:35.253504 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.253515 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:35.253522 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:35.253593 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:35.290218 1131323 cri.go:89] found id: ""
	I0328 01:04:35.290280 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.290292 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:35.290300 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:35.290378 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:35.330714 1131323 cri.go:89] found id: ""
	I0328 01:04:35.330749 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.330757 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:35.330764 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:35.330831 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:32.542071 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:34.544163 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:32.372100 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:34.872293 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:34.913212 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:37.411734 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:35.371524 1131323 cri.go:89] found id: ""
	I0328 01:04:35.371553 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.371570 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:35.371577 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:35.371630 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:35.411610 1131323 cri.go:89] found id: ""
	I0328 01:04:35.411638 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.411646 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:35.411652 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:35.411711 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:35.456709 1131323 cri.go:89] found id: ""
	I0328 01:04:35.456745 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.456758 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:35.456766 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:35.456836 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:35.492688 1131323 cri.go:89] found id: ""
	I0328 01:04:35.492719 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.492729 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:35.492755 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:35.492811 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:35.531205 1131323 cri.go:89] found id: ""
	I0328 01:04:35.531234 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.531243 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:35.531254 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:35.531266 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:35.611803 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:35.611845 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:35.653513 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:35.653551 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:35.708030 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:35.708075 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:35.724542 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:35.724576 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:35.798624 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
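Every "describe nodes" attempt in this log fails the same way: the bundled kubectl targets localhost:8443 and the connection is refused, which is consistent with the empty crictl results above (no apiserver container is running, so nothing listens on that port). A quick way to confirm that symptom directly on the node is a plain TCP dial against the apiserver port; this is an illustration only, not part of the test suite:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The kubeconfig used by the bundled kubectl points at localhost:8443;
	// a refused dial here matches the "connection ... was refused" errors above.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}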
	I0328 01:04:38.299312 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:38.314128 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:38.314213 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:38.357728 1131323 cri.go:89] found id: ""
	I0328 01:04:38.357761 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.357779 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:38.357786 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:38.357848 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:38.394512 1131323 cri.go:89] found id: ""
	I0328 01:04:38.394541 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.394549 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:38.394558 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:38.394618 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:38.434353 1131323 cri.go:89] found id: ""
	I0328 01:04:38.434380 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.434391 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:38.434399 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:38.434466 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:38.477662 1131323 cri.go:89] found id: ""
	I0328 01:04:38.477693 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.477703 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:38.477710 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:38.477763 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:38.515014 1131323 cri.go:89] found id: ""
	I0328 01:04:38.515044 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.515053 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:38.515060 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:38.515117 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:38.558865 1131323 cri.go:89] found id: ""
	I0328 01:04:38.558899 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.558911 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:38.558920 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:38.558982 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:38.600261 1131323 cri.go:89] found id: ""
	I0328 01:04:38.600290 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.600299 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:38.600306 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:38.600366 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:38.637131 1131323 cri.go:89] found id: ""
	I0328 01:04:38.637167 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.637179 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:38.637194 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:38.637218 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:38.716032 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:38.716058 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:38.716079 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:38.804534 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:38.804578 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:38.851781 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:38.851820 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:38.910091 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:38.910125 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:37.041273 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:39.541843 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:37.372262 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:39.372555 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:39.912953 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:42.412667 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:41.425801 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:41.441072 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:41.441168 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:41.482934 1131323 cri.go:89] found id: ""
	I0328 01:04:41.482962 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.482974 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:41.482983 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:41.483063 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:41.521762 1131323 cri.go:89] found id: ""
	I0328 01:04:41.521796 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.521810 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:41.521819 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:41.521931 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:41.560814 1131323 cri.go:89] found id: ""
	I0328 01:04:41.560847 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.560857 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:41.560864 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:41.560928 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:41.601158 1131323 cri.go:89] found id: ""
	I0328 01:04:41.601189 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.601199 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:41.601206 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:41.601271 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:41.638760 1131323 cri.go:89] found id: ""
	I0328 01:04:41.638789 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.638799 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:41.638806 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:41.638861 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:41.675235 1131323 cri.go:89] found id: ""
	I0328 01:04:41.675268 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.675278 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:41.675285 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:41.675341 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:41.712918 1131323 cri.go:89] found id: ""
	I0328 01:04:41.712957 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.712972 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:41.712983 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:41.713078 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:41.750552 1131323 cri.go:89] found id: ""
	I0328 01:04:41.750582 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.750591 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:41.750601 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:41.750617 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:41.811163 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:41.811204 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:41.826502 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:41.826547 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:41.900727 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:41.900759 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:41.900777 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:41.981731 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:41.981783 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:44.525845 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:44.542301 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:44.542389 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:44.584907 1131323 cri.go:89] found id: ""
	I0328 01:04:44.584936 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.584945 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:44.584952 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:44.585007 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:44.630465 1131323 cri.go:89] found id: ""
	I0328 01:04:44.630499 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.630511 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:44.630520 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:44.630588 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:44.669095 1131323 cri.go:89] found id: ""
	I0328 01:04:44.669131 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.669143 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:44.669152 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:44.669235 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:44.708445 1131323 cri.go:89] found id: ""
	I0328 01:04:44.708484 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.708495 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:44.708502 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:44.708570 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:44.747706 1131323 cri.go:89] found id: ""
	I0328 01:04:44.747744 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.747755 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:44.747762 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:44.747822 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:44.787768 1131323 cri.go:89] found id: ""
	I0328 01:04:44.787807 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.787821 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:44.787830 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:44.787899 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:44.829018 1131323 cri.go:89] found id: ""
	I0328 01:04:44.829049 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.829059 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:44.829066 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:44.829123 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:44.874334 1131323 cri.go:89] found id: ""
	I0328 01:04:44.874374 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.874383 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:44.874393 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:44.874405 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:44.921577 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:44.921619 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:44.976660 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:44.976713 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:44.991365 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:44.991400 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:45.067595 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:45.067630 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:45.067651 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:42.042736 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:44.543288 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:41.372902 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:43.872925 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:45.873163 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:44.913827 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:47.412342 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:47.647634 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:47.663581 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:47.663687 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:47.702889 1131323 cri.go:89] found id: ""
	I0328 01:04:47.702940 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.702954 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:47.702966 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:47.703043 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:47.744995 1131323 cri.go:89] found id: ""
	I0328 01:04:47.745027 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.745037 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:47.745044 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:47.745103 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:47.785518 1131323 cri.go:89] found id: ""
	I0328 01:04:47.785550 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.785562 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:47.785572 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:47.785645 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:47.831739 1131323 cri.go:89] found id: ""
	I0328 01:04:47.831771 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.831786 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:47.831794 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:47.831867 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:47.871864 1131323 cri.go:89] found id: ""
	I0328 01:04:47.871906 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.871918 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:47.871929 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:47.872008 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:47.907899 1131323 cri.go:89] found id: ""
	I0328 01:04:47.907934 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.907946 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:47.907955 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:47.908022 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:47.946073 1131323 cri.go:89] found id: ""
	I0328 01:04:47.946107 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.946118 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:47.946127 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:47.946223 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:47.986122 1131323 cri.go:89] found id: ""
	I0328 01:04:47.986154 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.986168 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:47.986182 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:47.986198 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:48.057234 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:48.057271 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:48.109881 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:48.109926 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:48.125154 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:48.125189 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:48.208295 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:48.208327 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:48.208345 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:47.041447 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:49.542203 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:48.371275 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:50.372057 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:49.413451 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:51.414465 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:50.785126 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:50.800000 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:50.800078 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:50.839883 1131323 cri.go:89] found id: ""
	I0328 01:04:50.839911 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.839920 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:50.839927 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:50.839983 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:50.879627 1131323 cri.go:89] found id: ""
	I0328 01:04:50.879654 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.879661 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:50.879668 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:50.879734 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:50.918392 1131323 cri.go:89] found id: ""
	I0328 01:04:50.918434 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.918446 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:50.918454 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:50.918517 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:50.957198 1131323 cri.go:89] found id: ""
	I0328 01:04:50.957234 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.957248 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:50.957257 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:50.957328 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:50.997389 1131323 cri.go:89] found id: ""
	I0328 01:04:50.997424 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.997438 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:50.997446 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:50.997513 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:51.040259 1131323 cri.go:89] found id: ""
	I0328 01:04:51.040296 1131323 logs.go:276] 0 containers: []
	W0328 01:04:51.040309 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:51.040318 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:51.040389 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:51.081824 1131323 cri.go:89] found id: ""
	I0328 01:04:51.081858 1131323 logs.go:276] 0 containers: []
	W0328 01:04:51.081868 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:51.081875 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:51.081942 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:51.119742 1131323 cri.go:89] found id: ""
	I0328 01:04:51.119783 1131323 logs.go:276] 0 containers: []
	W0328 01:04:51.119796 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:51.119810 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:51.119836 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:51.173486 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:51.173529 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:51.188532 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:51.188568 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:51.269181 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:51.269207 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:51.269226 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:51.349882 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:51.349936 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
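	(The cycle above is minikube's periodic log-gathering pass while it waits for the control plane: it probes for a kube-apiserver process, lists CRI-O containers for each control-plane component (all empty here), then collects kubelet, dmesg, CRI-O and container-status output; "kubectl describe nodes" keeps failing because nothing is listening on localhost:8443. A minimal sketch of running the same checks by hand inside the node follows — the profile name is a placeholder, not taken from this log; the individual commands are the ones the log shows:
	  # open a shell on the affected node; <profile> is a hypothetical cluster/profile name
	  minikube ssh -p <profile>
	  # is any apiserver process running at all?
	  sudo pgrep -xnf kube-apiserver.*minikube.*
	  # list kube-apiserver containers known to CRI-O (empty in the log above)
	  sudo crictl ps -a --quiet --name=kube-apiserver
	  # recent kubelet and CRI-O logs, as gathered by the loop above
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400
	)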
	I0328 01:04:53.893562 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:53.910104 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:53.910186 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:53.951333 1131323 cri.go:89] found id: ""
	I0328 01:04:53.951375 1131323 logs.go:276] 0 containers: []
	W0328 01:04:53.951388 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:53.951397 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:53.951472 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:53.992438 1131323 cri.go:89] found id: ""
	I0328 01:04:53.992474 1131323 logs.go:276] 0 containers: []
	W0328 01:04:53.992486 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:53.992493 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:53.992561 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:54.032934 1131323 cri.go:89] found id: ""
	I0328 01:04:54.032969 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.032982 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:54.032992 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:54.033061 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:54.074670 1131323 cri.go:89] found id: ""
	I0328 01:04:54.074707 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.074777 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:54.074801 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:54.074875 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:54.111527 1131323 cri.go:89] found id: ""
	I0328 01:04:54.111555 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.111566 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:54.111573 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:54.111658 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:54.151401 1131323 cri.go:89] found id: ""
	I0328 01:04:54.151428 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.151437 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:54.151443 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:54.151494 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:54.197997 1131323 cri.go:89] found id: ""
	I0328 01:04:54.198036 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.198048 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:54.198058 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:54.198135 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:54.234016 1131323 cri.go:89] found id: ""
	I0328 01:04:54.234049 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.234058 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:54.234068 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:54.234081 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:54.286118 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:54.286161 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:54.300489 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:54.300541 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:54.376949 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:54.376972 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:54.376988 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:54.463857 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:54.463901 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:52.041517 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:54.042088 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:52.875923 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:55.371823 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:53.912140 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:55.912329 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:57.026395 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:57.041270 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:57.041358 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:57.082380 1131323 cri.go:89] found id: ""
	I0328 01:04:57.082416 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.082428 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:57.082436 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:57.082503 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:57.121835 1131323 cri.go:89] found id: ""
	I0328 01:04:57.121870 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.121885 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:57.121894 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:57.121969 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:57.163688 1131323 cri.go:89] found id: ""
	I0328 01:04:57.163725 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.163737 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:57.163745 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:57.163819 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:57.212628 1131323 cri.go:89] found id: ""
	I0328 01:04:57.212666 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.212693 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:57.212703 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:57.212788 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:57.249196 1131323 cri.go:89] found id: ""
	I0328 01:04:57.249231 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.249244 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:57.249253 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:57.249318 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:57.286996 1131323 cri.go:89] found id: ""
	I0328 01:04:57.287031 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.287040 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:57.287047 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:57.287101 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:57.324523 1131323 cri.go:89] found id: ""
	I0328 01:04:57.324551 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.324560 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:57.324566 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:57.324627 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:57.363946 1131323 cri.go:89] found id: ""
	I0328 01:04:57.363984 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.363998 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:57.364012 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:57.364034 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:57.418300 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:57.418337 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:57.433214 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:57.433242 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:57.508623 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:57.508651 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:57.508665 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:57.586336 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:57.586377 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:00.129903 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:00.146829 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:00.146920 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:00.197823 1131323 cri.go:89] found id: ""
	I0328 01:05:00.197856 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.197865 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:00.197872 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:00.197930 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:00.257523 1131323 cri.go:89] found id: ""
	I0328 01:05:00.257561 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.257575 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:00.257584 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:00.257657 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:00.314511 1131323 cri.go:89] found id: ""
	I0328 01:05:00.314539 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.314549 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:00.314558 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:00.314610 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:56.042295 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:58.541684 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:00.543232 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:57.372451 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:59.372577 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:58.412203 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:00.412880 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:02.913222 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:00.351043 1131323 cri.go:89] found id: ""
	I0328 01:05:00.351076 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.351090 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:00.351098 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:00.351167 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:00.391477 1131323 cri.go:89] found id: ""
	I0328 01:05:00.391507 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.391519 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:00.391525 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:00.391595 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:00.436196 1131323 cri.go:89] found id: ""
	I0328 01:05:00.436230 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.436242 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:00.436249 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:00.436316 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:00.473389 1131323 cri.go:89] found id: ""
	I0328 01:05:00.473428 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.473441 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:00.473450 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:00.473523 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:00.508829 1131323 cri.go:89] found id: ""
	I0328 01:05:00.508866 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.508879 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:00.508901 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:00.508931 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:00.553709 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:00.553741 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:00.612679 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:00.612732 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:00.630908 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:00.630948 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:00.706984 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:00.707016 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:00.707034 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:03.287887 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:03.304679 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:03.304779 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:03.343579 1131323 cri.go:89] found id: ""
	I0328 01:05:03.343608 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.343618 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:03.343625 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:03.343677 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:03.387158 1131323 cri.go:89] found id: ""
	I0328 01:05:03.387192 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.387206 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:03.387224 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:03.387308 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:03.426622 1131323 cri.go:89] found id: ""
	I0328 01:05:03.426653 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.426663 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:03.426670 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:03.426724 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:03.468743 1131323 cri.go:89] found id: ""
	I0328 01:05:03.468780 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.468793 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:03.468801 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:03.468870 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:03.508518 1131323 cri.go:89] found id: ""
	I0328 01:05:03.508554 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.508566 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:03.508575 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:03.508653 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:03.548295 1131323 cri.go:89] found id: ""
	I0328 01:05:03.548331 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.548343 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:03.548353 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:03.548444 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:03.591561 1131323 cri.go:89] found id: ""
	I0328 01:05:03.591594 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.591607 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:03.591615 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:03.591670 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:03.635055 1131323 cri.go:89] found id: ""
	I0328 01:05:03.635086 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.635097 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:03.635109 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:03.635127 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:03.715639 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:03.715683 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:03.755888 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:03.755931 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:03.810128 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:03.810170 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:03.825197 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:03.825227 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:03.908589 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:03.043330 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:05.541544 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:01.372692 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:03.373747 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:05.871945 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:05.413583 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:07.912379 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:06.409060 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:06.424034 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:06.424119 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:06.461827 1131323 cri.go:89] found id: ""
	I0328 01:05:06.461888 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.461902 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:06.461911 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:06.461985 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:06.505006 1131323 cri.go:89] found id: ""
	I0328 01:05:06.505061 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.505078 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:06.505085 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:06.505145 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:06.542000 1131323 cri.go:89] found id: ""
	I0328 01:05:06.542033 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.542044 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:06.542052 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:06.542121 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:06.583725 1131323 cri.go:89] found id: ""
	I0328 01:05:06.583778 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.583800 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:06.583810 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:06.583887 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:06.620457 1131323 cri.go:89] found id: ""
	I0328 01:05:06.620501 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.620516 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:06.620524 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:06.620595 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:06.664380 1131323 cri.go:89] found id: ""
	I0328 01:05:06.664412 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.664425 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:06.664432 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:06.664502 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:06.701799 1131323 cri.go:89] found id: ""
	I0328 01:05:06.701850 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.701862 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:06.701870 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:06.701935 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:06.739899 1131323 cri.go:89] found id: ""
	I0328 01:05:06.739936 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.739948 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:06.739958 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:06.739973 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:06.814373 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:06.814404 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:06.814421 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:06.894331 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:06.894371 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:06.952912 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:06.952979 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:07.011851 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:07.011900 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:09.528068 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:09.545082 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:09.545167 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:09.586944 1131323 cri.go:89] found id: ""
	I0328 01:05:09.586983 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.586996 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:09.587004 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:09.587077 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:09.624153 1131323 cri.go:89] found id: ""
	I0328 01:05:09.624184 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.624192 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:09.624198 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:09.624256 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:09.661125 1131323 cri.go:89] found id: ""
	I0328 01:05:09.661160 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.661172 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:09.661182 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:09.661256 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:09.699865 1131323 cri.go:89] found id: ""
	I0328 01:05:09.699903 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.699916 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:09.699925 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:09.699992 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:09.737925 1131323 cri.go:89] found id: ""
	I0328 01:05:09.737958 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.737967 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:09.737973 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:09.738029 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:09.776906 1131323 cri.go:89] found id: ""
	I0328 01:05:09.776941 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.776950 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:09.776957 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:09.777021 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:09.815767 1131323 cri.go:89] found id: ""
	I0328 01:05:09.815794 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.815803 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:09.815809 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:09.815876 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:09.855880 1131323 cri.go:89] found id: ""
	I0328 01:05:09.855915 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.855928 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:09.855941 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:09.855958 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:09.918339 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:09.918376 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:09.932775 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:09.932810 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:10.011566 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:10.011594 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:10.011610 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:10.096057 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:10.096100 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:08.041230 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:10.041991 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:07.873367 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:10.372311 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:09.913349 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:12.412259 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:12.641999 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:12.655761 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:12.655843 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:12.697335 1131323 cri.go:89] found id: ""
	I0328 01:05:12.697369 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.697381 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:12.697390 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:12.697453 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:12.736482 1131323 cri.go:89] found id: ""
	I0328 01:05:12.736520 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.736534 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:12.736544 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:12.736617 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:12.771992 1131323 cri.go:89] found id: ""
	I0328 01:05:12.772034 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.772046 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:12.772055 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:12.772125 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:12.810738 1131323 cri.go:89] found id: ""
	I0328 01:05:12.810770 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.810779 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:12.810786 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:12.810837 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:12.848172 1131323 cri.go:89] found id: ""
	I0328 01:05:12.848209 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.848222 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:12.848230 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:12.848310 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:12.884660 1131323 cri.go:89] found id: ""
	I0328 01:05:12.884698 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.884710 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:12.884719 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:12.884794 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:12.926180 1131323 cri.go:89] found id: ""
	I0328 01:05:12.926209 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.926218 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:12.926244 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:12.926303 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:12.966938 1131323 cri.go:89] found id: ""
	I0328 01:05:12.966969 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.966983 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:12.966996 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:12.967014 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:13.018501 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:13.018541 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:13.033140 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:13.033175 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:13.108806 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:13.108832 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:13.108858 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:13.189198 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:13.189241 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:12.541088 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:15.041830 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:12.372413 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:14.372804 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:14.414059 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:16.912343 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:15.737415 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:15.752534 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:15.752614 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:15.789941 1131323 cri.go:89] found id: ""
	I0328 01:05:15.789974 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.789986 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:15.789994 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:15.790107 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:15.827688 1131323 cri.go:89] found id: ""
	I0328 01:05:15.827731 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.827745 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:15.827766 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:15.827831 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:15.867005 1131323 cri.go:89] found id: ""
	I0328 01:05:15.867041 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.867054 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:15.867064 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:15.867149 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:15.909924 1131323 cri.go:89] found id: ""
	I0328 01:05:15.910035 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.910055 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:15.910066 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:15.910139 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:15.950571 1131323 cri.go:89] found id: ""
	I0328 01:05:15.950606 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.950619 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:15.950632 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:15.950707 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:15.992557 1131323 cri.go:89] found id: ""
	I0328 01:05:15.992594 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.992605 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:15.992615 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:15.992687 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:16.032417 1131323 cri.go:89] found id: ""
	I0328 01:05:16.032458 1131323 logs.go:276] 0 containers: []
	W0328 01:05:16.032473 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:16.032482 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:16.032559 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:16.071399 1131323 cri.go:89] found id: ""
	I0328 01:05:16.071433 1131323 logs.go:276] 0 containers: []
	W0328 01:05:16.071445 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:16.071459 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:16.071481 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:16.147078 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:16.147113 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:16.147131 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:16.223828 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:16.223870 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:16.269377 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:16.269409 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:16.318545 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:16.318584 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:18.836044 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:18.851138 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:18.851231 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:18.887223 1131323 cri.go:89] found id: ""
	I0328 01:05:18.887260 1131323 logs.go:276] 0 containers: []
	W0328 01:05:18.887273 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:18.887283 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:18.887354 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:18.928652 1131323 cri.go:89] found id: ""
	I0328 01:05:18.928682 1131323 logs.go:276] 0 containers: []
	W0328 01:05:18.928692 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:18.928698 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:18.928756 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:18.968519 1131323 cri.go:89] found id: ""
	I0328 01:05:18.968555 1131323 logs.go:276] 0 containers: []
	W0328 01:05:18.968567 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:18.968575 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:18.968646 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:19.010939 1131323 cri.go:89] found id: ""
	I0328 01:05:19.010977 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.010991 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:19.010999 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:19.011070 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:19.048723 1131323 cri.go:89] found id: ""
	I0328 01:05:19.048748 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.048758 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:19.048769 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:19.048820 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:19.091761 1131323 cri.go:89] found id: ""
	I0328 01:05:19.091794 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.091803 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:19.091810 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:19.091863 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:19.134017 1131323 cri.go:89] found id: ""
	I0328 01:05:19.134049 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.134059 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:19.134065 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:19.134119 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:19.176070 1131323 cri.go:89] found id: ""
	I0328 01:05:19.176106 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.176118 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:19.176131 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:19.176155 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:19.261546 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:19.261584 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:19.261605 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:19.340271 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:19.340314 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:19.383625 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:19.383676 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:19.441635 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:19.441679 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:17.541876 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:20.040841 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:16.872723 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:19.372916 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:19.414384 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:21.912881 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:21.958362 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:21.974427 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:21.974528 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:22.013099 1131323 cri.go:89] found id: ""
	I0328 01:05:22.013139 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.013152 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:22.013160 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:22.013229 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:22.055558 1131323 cri.go:89] found id: ""
	I0328 01:05:22.055594 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.055604 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:22.055611 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:22.055668 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:22.106836 1131323 cri.go:89] found id: ""
	I0328 01:05:22.106870 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.106879 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:22.106886 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:22.106961 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:22.145135 1131323 cri.go:89] found id: ""
	I0328 01:05:22.145175 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.145189 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:22.145197 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:22.145266 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:22.183879 1131323 cri.go:89] found id: ""
	I0328 01:05:22.183909 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.183919 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:22.183926 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:22.183981 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:22.223087 1131323 cri.go:89] found id: ""
	I0328 01:05:22.223115 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.223124 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:22.223131 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:22.223209 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:22.263232 1131323 cri.go:89] found id: ""
	I0328 01:05:22.263262 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.263272 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:22.263279 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:22.263331 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:22.302919 1131323 cri.go:89] found id: ""
	I0328 01:05:22.302954 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.302967 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:22.302980 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:22.302998 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:22.358550 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:22.358596 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:22.374688 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:22.374722 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:22.453584 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:22.453609 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:22.453624 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:22.540983 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:22.541048 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
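	The probe sequence above checks each control-plane component by asking the CRI for containers with a matching name. The same checks can be reproduced by hand from a shell on the node (for example via `minikube ssh`); the loop below is only a minimal sketch, using the exact crictl invocation from the log and the component list minikube iterates over here:
	
	  # run on the minikube node; crictl ships on the node image
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	    ids=$(sudo crictl ps -a --quiet --name="$name")
	    if [ -z "$ids" ]; then
	      echo "no container found matching $name"
	    else
	      echo "$name: $ids"
	    fi
	  done
	
	An empty result for every name, as seen above, means no control-plane containers were ever created, which is why the subsequent kubectl calls against localhost:8443 are refused.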
	I0328 01:05:25.091773 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:25.107412 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:25.107484 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:25.143917 1131323 cri.go:89] found id: ""
	I0328 01:05:25.143944 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.143953 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:25.143960 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:25.144013 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:25.183615 1131323 cri.go:89] found id: ""
	I0328 01:05:25.183650 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.183659 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:25.183666 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:25.183729 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:25.221125 1131323 cri.go:89] found id: ""
	I0328 01:05:25.221158 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.221167 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:25.221174 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:25.221229 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:25.262023 1131323 cri.go:89] found id: ""
	I0328 01:05:25.262056 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.262065 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:25.262072 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:25.262134 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:25.297919 1131323 cri.go:89] found id: ""
	I0328 01:05:25.297948 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.297957 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:25.297964 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:25.298035 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:22.041977 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:24.542416 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:21.872312 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:23.872885 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:23.914459 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:25.916730 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:25.336582 1131323 cri.go:89] found id: ""
	I0328 01:05:25.336610 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.336620 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:25.336627 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:25.336690 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:25.375554 1131323 cri.go:89] found id: ""
	I0328 01:05:25.375589 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.375600 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:25.375609 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:25.375683 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:25.415941 1131323 cri.go:89] found id: ""
	I0328 01:05:25.415973 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.415984 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:25.415996 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:25.416013 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:25.430168 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:25.430196 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:25.507782 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:25.507805 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:25.507862 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:25.588780 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:25.588824 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:25.634958 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:25.634997 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:28.190651 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:28.205714 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:28.205794 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:28.242015 1131323 cri.go:89] found id: ""
	I0328 01:05:28.242056 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.242067 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:28.242077 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:28.242169 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:28.289132 1131323 cri.go:89] found id: ""
	I0328 01:05:28.289169 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.289182 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:28.289189 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:28.289256 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:28.327001 1131323 cri.go:89] found id: ""
	I0328 01:05:28.327031 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.327040 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:28.327052 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:28.327105 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:28.365474 1131323 cri.go:89] found id: ""
	I0328 01:05:28.365507 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.365516 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:28.365523 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:28.365587 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:28.405494 1131323 cri.go:89] found id: ""
	I0328 01:05:28.405553 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.405567 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:28.405576 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:28.405652 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:28.448464 1131323 cri.go:89] found id: ""
	I0328 01:05:28.448502 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.448512 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:28.448521 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:28.448594 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:28.488143 1131323 cri.go:89] found id: ""
	I0328 01:05:28.488172 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.488182 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:28.488189 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:28.488258 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:28.545977 1131323 cri.go:89] found id: ""
	I0328 01:05:28.546012 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.546024 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:28.546036 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:28.546050 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:28.629955 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:28.630001 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:28.670504 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:28.670536 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:28.722021 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:28.722069 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:28.737274 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:28.737310 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:28.824025 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:27.041205 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:29.041342 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:26.372037 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:28.373545 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:30.872569 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:28.414921 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:30.912980 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:31.324497 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:31.339715 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:31.339811 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:31.379017 1131323 cri.go:89] found id: ""
	I0328 01:05:31.379050 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.379062 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:31.379072 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:31.379138 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:31.420024 1131323 cri.go:89] found id: ""
	I0328 01:05:31.420055 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.420065 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:31.420071 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:31.420136 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:31.458732 1131323 cri.go:89] found id: ""
	I0328 01:05:31.458764 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.458773 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:31.458779 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:31.458835 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:31.504249 1131323 cri.go:89] found id: ""
	I0328 01:05:31.504280 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.504292 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:31.504300 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:31.504366 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:31.545284 1131323 cri.go:89] found id: ""
	I0328 01:05:31.545316 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.545324 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:31.545331 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:31.545385 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:31.583402 1131323 cri.go:89] found id: ""
	I0328 01:05:31.583434 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.583444 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:31.583453 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:31.583587 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:31.624411 1131323 cri.go:89] found id: ""
	I0328 01:05:31.624449 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.624462 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:31.624471 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:31.624528 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:31.666103 1131323 cri.go:89] found id: ""
	I0328 01:05:31.666144 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.666158 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:31.666173 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:31.666192 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:31.717595 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:31.717636 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:31.731606 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:31.731637 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:31.803302 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:31.803325 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:31.803339 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:31.885552 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:31.885590 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:34.432446 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:34.448002 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:34.448085 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:34.493207 1131323 cri.go:89] found id: ""
	I0328 01:05:34.493246 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.493259 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:34.493268 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:34.493337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:34.541838 1131323 cri.go:89] found id: ""
	I0328 01:05:34.541871 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.541883 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:34.541891 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:34.541956 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:34.582319 1131323 cri.go:89] found id: ""
	I0328 01:05:34.582357 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.582371 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:34.582380 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:34.582458 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:34.618753 1131323 cri.go:89] found id: ""
	I0328 01:05:34.618788 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.618801 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:34.618810 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:34.618882 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:34.656994 1131323 cri.go:89] found id: ""
	I0328 01:05:34.657027 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.657037 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:34.657043 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:34.657114 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:34.695214 1131323 cri.go:89] found id: ""
	I0328 01:05:34.695252 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.695264 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:34.695271 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:34.695337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:34.733688 1131323 cri.go:89] found id: ""
	I0328 01:05:34.733718 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.733731 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:34.733739 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:34.733808 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:34.771697 1131323 cri.go:89] found id: ""
	I0328 01:05:34.771729 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.771744 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:34.771758 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:34.771776 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:34.828190 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:34.828236 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:34.842741 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:34.842776 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:34.918494 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:34.918525 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:34.918541 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:35.012689 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:35.012747 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:31.042633 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:33.541295 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:35.541588 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:33.371991 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:35.872753 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:33.412886 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:35.914065 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:37.574759 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:37.590014 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:37.590128 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:37.626883 1131323 cri.go:89] found id: ""
	I0328 01:05:37.626914 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.626926 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:37.626935 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:37.627005 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:37.665171 1131323 cri.go:89] found id: ""
	I0328 01:05:37.665202 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.665215 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:37.665225 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:37.665294 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:37.702923 1131323 cri.go:89] found id: ""
	I0328 01:05:37.702963 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.702976 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:37.702984 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:37.703064 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:37.741148 1131323 cri.go:89] found id: ""
	I0328 01:05:37.741182 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.741191 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:37.741199 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:37.741269 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:37.782298 1131323 cri.go:89] found id: ""
	I0328 01:05:37.782331 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.782341 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:37.782348 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:37.782407 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:37.819056 1131323 cri.go:89] found id: ""
	I0328 01:05:37.819110 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.819124 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:37.819134 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:37.819215 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:37.862372 1131323 cri.go:89] found id: ""
	I0328 01:05:37.862414 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.862427 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:37.862436 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:37.862507 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:37.899639 1131323 cri.go:89] found id: ""
	I0328 01:05:37.899675 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.899689 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:37.899703 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:37.899721 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:37.978962 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:37.978990 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:37.979007 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:38.058972 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:38.059015 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:38.102975 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:38.103016 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:38.157994 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:38.158035 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
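	Each failed probe is followed by a log-gathering pass over kubelet, dmesg, `describe nodes`, CRI-O and container status. The commands below are taken from the lines above and can be run manually on the node to capture the same diagnostics; the kubectl path assumes the v1.20.0 binary location shown in the log:
	
	  sudo journalctl -u kubelet -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	  sudo journalctl -u crio -n 400
	  sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
	
	The `describe nodes` step keeps failing with "connection refused" on localhost:8443, consistent with the empty kube-apiserver listings: the kubeconfig points at an API server that never came up.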
	I0328 01:05:38.041091 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.041892 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:38.371787 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.373131 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:38.412214 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.415412 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:42.912341 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.673425 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:40.690969 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:40.691041 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:40.735552 1131323 cri.go:89] found id: ""
	I0328 01:05:40.735585 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.735594 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:40.735602 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:40.735669 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:40.816611 1131323 cri.go:89] found id: ""
	I0328 01:05:40.816648 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.816661 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:40.816669 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:40.816725 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:40.864093 1131323 cri.go:89] found id: ""
	I0328 01:05:40.864125 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.864138 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:40.864147 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:40.864218 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:40.908781 1131323 cri.go:89] found id: ""
	I0328 01:05:40.908817 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.908829 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:40.908846 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:40.908914 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:40.950330 1131323 cri.go:89] found id: ""
	I0328 01:05:40.950369 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.950382 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:40.950390 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:40.950481 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:40.989983 1131323 cri.go:89] found id: ""
	I0328 01:05:40.990041 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.990054 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:40.990063 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:40.990136 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:41.042428 1131323 cri.go:89] found id: ""
	I0328 01:05:41.042470 1131323 logs.go:276] 0 containers: []
	W0328 01:05:41.042481 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:41.042489 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:41.042560 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:41.089309 1131323 cri.go:89] found id: ""
	I0328 01:05:41.089342 1131323 logs.go:276] 0 containers: []
	W0328 01:05:41.089353 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:41.089363 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:41.089377 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:41.148502 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:41.148547 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:41.163889 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:41.163918 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:41.242825 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:41.242848 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:41.242861 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:41.322658 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:41.322702 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:43.865117 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:43.880642 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:43.880729 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:43.919519 1131323 cri.go:89] found id: ""
	I0328 01:05:43.919550 1131323 logs.go:276] 0 containers: []
	W0328 01:05:43.919559 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:43.919565 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:43.919622 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:43.957906 1131323 cri.go:89] found id: ""
	I0328 01:05:43.957936 1131323 logs.go:276] 0 containers: []
	W0328 01:05:43.957945 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:43.957951 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:43.958008 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:44.001448 1131323 cri.go:89] found id: ""
	I0328 01:05:44.001486 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.001497 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:44.001505 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:44.001573 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:44.039767 1131323 cri.go:89] found id: ""
	I0328 01:05:44.039801 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.039812 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:44.039818 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:44.039871 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:44.079441 1131323 cri.go:89] found id: ""
	I0328 01:05:44.079470 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.079480 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:44.079486 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:44.079541 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:44.116534 1131323 cri.go:89] found id: ""
	I0328 01:05:44.116584 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.116596 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:44.116604 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:44.116670 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:44.163335 1131323 cri.go:89] found id: ""
	I0328 01:05:44.163369 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.163381 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:44.163389 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:44.163457 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:44.201367 1131323 cri.go:89] found id: ""
	I0328 01:05:44.201403 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.201413 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:44.201424 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:44.201442 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:44.257485 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:44.257529 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:44.272489 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:44.272534 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:44.354442 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:44.354477 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:44.354498 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:44.436219 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:44.436262 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:42.044443 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:44.541648 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:42.872072 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:44.873552 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:44.913292 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:47.412489 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:46.982131 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:46.998022 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:46.998100 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:47.037167 1131323 cri.go:89] found id: ""
	I0328 01:05:47.037205 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.037217 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:47.037226 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:47.037295 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:47.076175 1131323 cri.go:89] found id: ""
	I0328 01:05:47.076213 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.076226 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:47.076235 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:47.076306 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:47.115193 1131323 cri.go:89] found id: ""
	I0328 01:05:47.115227 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.115237 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:47.115244 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:47.115297 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:47.154942 1131323 cri.go:89] found id: ""
	I0328 01:05:47.154976 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.154989 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:47.154998 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:47.155069 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:47.196571 1131323 cri.go:89] found id: ""
	I0328 01:05:47.196609 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.196622 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:47.196631 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:47.196707 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:47.237572 1131323 cri.go:89] found id: ""
	I0328 01:05:47.237616 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.237625 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:47.237633 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:47.237691 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:47.275208 1131323 cri.go:89] found id: ""
	I0328 01:05:47.275254 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.275265 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:47.275272 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:47.275329 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:47.313515 1131323 cri.go:89] found id: ""
	I0328 01:05:47.313555 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.313568 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:47.313582 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:47.313598 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:47.368993 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:47.369033 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:47.383063 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:47.383097 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:47.460239 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:47.460278 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:47.460298 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:47.538552 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:47.538594 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:50.084960 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:50.101764 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:50.101859 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:50.141457 1131323 cri.go:89] found id: ""
	I0328 01:05:50.141488 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.141497 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:50.141504 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:50.141557 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:50.178184 1131323 cri.go:89] found id: ""
	I0328 01:05:50.178220 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.178254 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:50.178263 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:50.178358 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:50.217908 1131323 cri.go:89] found id: ""
	I0328 01:05:50.217946 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.217959 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:50.217966 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:50.218027 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:50.256029 1131323 cri.go:89] found id: ""
	I0328 01:05:50.256058 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.256067 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:50.256074 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:50.256130 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:50.295054 1131323 cri.go:89] found id: ""
	I0328 01:05:50.295087 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.295100 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:50.295106 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:50.295165 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:47.042338 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:49.542501 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:47.372867 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:49.872948 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:49.913873 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:52.412600 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:50.334695 1131323 cri.go:89] found id: ""
	I0328 01:05:50.336588 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.336605 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:50.336614 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:50.336697 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:50.375968 1131323 cri.go:89] found id: ""
	I0328 01:05:50.376003 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.376013 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:50.376021 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:50.376091 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:50.417146 1131323 cri.go:89] found id: ""
	I0328 01:05:50.417175 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.417184 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:50.417194 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:50.417207 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:50.474090 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:50.474131 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:50.489006 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:50.489040 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:50.566220 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:50.566268 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:50.566286 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:50.645593 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:50.645653 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:53.190872 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:53.205223 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:53.205320 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:53.242396 1131323 cri.go:89] found id: ""
	I0328 01:05:53.242433 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.242445 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:53.242455 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:53.242524 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:53.281237 1131323 cri.go:89] found id: ""
	I0328 01:05:53.281275 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.281288 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:53.281297 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:53.281357 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:53.321239 1131323 cri.go:89] found id: ""
	I0328 01:05:53.321268 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.321287 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:53.321296 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:53.321358 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:53.359240 1131323 cri.go:89] found id: ""
	I0328 01:05:53.359269 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.359278 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:53.359284 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:53.359337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:53.396973 1131323 cri.go:89] found id: ""
	I0328 01:05:53.397008 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.397021 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:53.397030 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:53.397100 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:53.438368 1131323 cri.go:89] found id: ""
	I0328 01:05:53.438400 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.438408 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:53.438415 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:53.438477 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:53.474679 1131323 cri.go:89] found id: ""
	I0328 01:05:53.474708 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.474732 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:53.474742 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:53.474799 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:53.512509 1131323 cri.go:89] found id: ""
	I0328 01:05:53.512547 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.512560 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:53.512579 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:53.512599 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:53.569536 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:53.569580 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:53.584977 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:53.585016 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:53.657865 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:53.657895 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:53.657908 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:53.733158 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:53.733203 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:52.041508 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:54.541663 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:52.373317 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:54.872090 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:54.913464 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:57.413256 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:56.278693 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:56.291870 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:56.291949 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:56.332909 1131323 cri.go:89] found id: ""
	I0328 01:05:56.332943 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.332957 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:56.332965 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:56.333038 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:56.370608 1131323 cri.go:89] found id: ""
	I0328 01:05:56.370638 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.370649 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:56.370657 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:56.370721 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:56.408031 1131323 cri.go:89] found id: ""
	I0328 01:05:56.408068 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.408081 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:56.408100 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:56.408170 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:56.445057 1131323 cri.go:89] found id: ""
	I0328 01:05:56.445092 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.445105 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:56.445113 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:56.445177 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:56.486868 1131323 cri.go:89] found id: ""
	I0328 01:05:56.486898 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.486908 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:56.486914 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:56.486969 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:56.533594 1131323 cri.go:89] found id: ""
	I0328 01:05:56.533622 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.533632 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:56.533638 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:56.533702 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:56.569200 1131323 cri.go:89] found id: ""
	I0328 01:05:56.569237 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.569250 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:56.569258 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:56.569335 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:56.604919 1131323 cri.go:89] found id: ""
	I0328 01:05:56.604955 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.604968 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:56.604982 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:56.605011 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:56.654473 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:56.654513 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:56.671309 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:56.671339 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:56.739516 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:56.739543 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:56.739559 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:56.817445 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:56.817495 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:59.361711 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:59.375672 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:59.375750 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:59.414329 1131323 cri.go:89] found id: ""
	I0328 01:05:59.414360 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.414371 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:59.414379 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:59.414443 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:59.454813 1131323 cri.go:89] found id: ""
	I0328 01:05:59.454846 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.454855 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:59.454862 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:59.454917 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:59.492890 1131323 cri.go:89] found id: ""
	I0328 01:05:59.492924 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.492936 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:59.492946 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:59.493043 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:59.529412 1131323 cri.go:89] found id: ""
	I0328 01:05:59.529443 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.529454 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:59.529464 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:59.529521 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:59.568620 1131323 cri.go:89] found id: ""
	I0328 01:05:59.568655 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.568664 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:59.568671 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:59.568731 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:59.605826 1131323 cri.go:89] found id: ""
	I0328 01:05:59.605861 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.605874 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:59.605883 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:59.605955 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:59.645799 1131323 cri.go:89] found id: ""
	I0328 01:05:59.645833 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.645847 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:59.645856 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:59.645931 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:59.683866 1131323 cri.go:89] found id: ""
	I0328 01:05:59.683903 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.683916 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:59.683929 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:59.683953 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:59.726678 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:59.726711 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:59.779910 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:59.779954 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:59.795743 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:59.795774 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:59.875137 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:59.875162 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:59.875174 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:56.542345 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:58.542599 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:00.543094 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:57.372258 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:59.872483 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:59.912150 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:01.913694 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:02.455212 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:02.468850 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:02.468945 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:02.506347 1131323 cri.go:89] found id: ""
	I0328 01:06:02.506385 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.506397 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:02.506406 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:02.506484 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:02.546056 1131323 cri.go:89] found id: ""
	I0328 01:06:02.546085 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.546096 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:02.546103 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:02.546173 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:02.585343 1131323 cri.go:89] found id: ""
	I0328 01:06:02.585385 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.585398 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:02.585407 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:02.585563 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:02.625380 1131323 cri.go:89] found id: ""
	I0328 01:06:02.625414 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.625423 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:02.625429 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:02.625486 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:02.664653 1131323 cri.go:89] found id: ""
	I0328 01:06:02.664687 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.664701 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:02.664708 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:02.664764 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:02.704468 1131323 cri.go:89] found id: ""
	I0328 01:06:02.704498 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.704511 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:02.704519 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:02.704595 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:02.740969 1131323 cri.go:89] found id: ""
	I0328 01:06:02.740997 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.741007 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:02.741014 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:02.741102 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:02.782113 1131323 cri.go:89] found id: ""
	I0328 01:06:02.782150 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.782163 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:02.782185 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:02.782203 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:02.836804 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:02.836848 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:02.852266 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:02.852299 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:02.929441 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:02.929467 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:02.929484 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:03.008114 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:03.008156 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:03.041919 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:05.542209 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:02.372332 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:04.871689 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:04.413251 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:06.912348 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:05.554291 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:05.570208 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:05.570304 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:05.610887 1131323 cri.go:89] found id: ""
	I0328 01:06:05.610916 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.610926 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:05.610932 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:05.610991 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:05.651561 1131323 cri.go:89] found id: ""
	I0328 01:06:05.651600 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.651610 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:05.651616 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:05.651681 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:05.690801 1131323 cri.go:89] found id: ""
	I0328 01:06:05.690830 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.690843 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:05.690851 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:05.690920 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:05.729098 1131323 cri.go:89] found id: ""
	I0328 01:06:05.729136 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.729146 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:05.729153 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:05.729225 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:05.774461 1131323 cri.go:89] found id: ""
	I0328 01:06:05.774499 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.774520 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:05.774530 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:05.774602 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:05.812135 1131323 cri.go:89] found id: ""
	I0328 01:06:05.812166 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.812180 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:05.812188 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:05.812255 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:05.847744 1131323 cri.go:89] found id: ""
	I0328 01:06:05.847775 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.847786 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:05.847796 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:05.847863 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:05.885600 1131323 cri.go:89] found id: ""
	I0328 01:06:05.885641 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.885656 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:05.885669 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:05.885684 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:05.963837 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:05.963879 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:06.007342 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:06.007381 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:06.062798 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:06.062843 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:06.077547 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:06.077599 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:06.148373 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:08.648791 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:08.664082 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:08.664154 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:08.701746 1131323 cri.go:89] found id: ""
	I0328 01:06:08.701776 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.701789 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:08.701797 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:08.701855 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:08.739035 1131323 cri.go:89] found id: ""
	I0328 01:06:08.739066 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.739076 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:08.739083 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:08.739136 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:08.776128 1131323 cri.go:89] found id: ""
	I0328 01:06:08.776166 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.776180 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:08.776189 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:08.776255 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:08.816136 1131323 cri.go:89] found id: ""
	I0328 01:06:08.816172 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.816187 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:08.816196 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:08.816271 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:08.855675 1131323 cri.go:89] found id: ""
	I0328 01:06:08.855709 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.855722 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:08.855730 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:08.855802 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:08.893161 1131323 cri.go:89] found id: ""
	I0328 01:06:08.893198 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.893212 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:08.893221 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:08.893297 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:08.935498 1131323 cri.go:89] found id: ""
	I0328 01:06:08.935527 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.935540 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:08.935548 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:08.935622 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:08.971622 1131323 cri.go:89] found id: ""
	I0328 01:06:08.971657 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.971668 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:08.971679 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:08.971696 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:09.039975 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:09.040036 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:09.057877 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:09.057920 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:09.130093 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:09.130119 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:09.130135 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:09.217177 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:09.217228 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:08.040921 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:10.042895 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:06.872367 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:08.873187 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:08.914313 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:11.412330 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:11.762393 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:11.776356 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:11.776424 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:11.811982 1131323 cri.go:89] found id: ""
	I0328 01:06:11.812017 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.812030 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:11.812038 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:11.812103 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:11.849789 1131323 cri.go:89] found id: ""
	I0328 01:06:11.849817 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.849826 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:11.849833 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:11.849884 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:11.890455 1131323 cri.go:89] found id: ""
	I0328 01:06:11.890488 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.890497 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:11.890503 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:11.890559 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:11.929047 1131323 cri.go:89] found id: ""
	I0328 01:06:11.929093 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.929102 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:11.929108 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:11.929164 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:11.969536 1131323 cri.go:89] found id: ""
	I0328 01:06:11.969566 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.969576 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:11.969583 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:11.969641 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:12.008779 1131323 cri.go:89] found id: ""
	I0328 01:06:12.008811 1131323 logs.go:276] 0 containers: []
	W0328 01:06:12.008821 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:12.008828 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:12.008890 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:12.044061 1131323 cri.go:89] found id: ""
	I0328 01:06:12.044091 1131323 logs.go:276] 0 containers: []
	W0328 01:06:12.044104 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:12.044112 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:12.044176 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:12.082307 1131323 cri.go:89] found id: ""
	I0328 01:06:12.082336 1131323 logs.go:276] 0 containers: []
	W0328 01:06:12.082346 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:12.082357 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:12.082369 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:12.133044 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:12.133091 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:12.148584 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:12.148624 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:12.218799 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:12.218834 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:12.218852 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:12.295580 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:12.295623 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:14.842815 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:14.856385 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:14.856456 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:14.895351 1131323 cri.go:89] found id: ""
	I0328 01:06:14.895409 1131323 logs.go:276] 0 containers: []
	W0328 01:06:14.895418 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:14.895424 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:14.895476 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:14.930333 1131323 cri.go:89] found id: ""
	I0328 01:06:14.930366 1131323 logs.go:276] 0 containers: []
	W0328 01:06:14.930380 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:14.930389 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:14.930461 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:14.968701 1131323 cri.go:89] found id: ""
	I0328 01:06:14.968742 1131323 logs.go:276] 0 containers: []
	W0328 01:06:14.968754 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:14.968767 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:14.968867 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:15.004580 1131323 cri.go:89] found id: ""
	I0328 01:06:15.004613 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.004626 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:15.004634 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:15.004700 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:15.046702 1131323 cri.go:89] found id: ""
	I0328 01:06:15.046726 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.046736 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:15.046742 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:15.046795 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:15.088693 1131323 cri.go:89] found id: ""
	I0328 01:06:15.088725 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.088734 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:15.088741 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:15.088797 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:15.130293 1131323 cri.go:89] found id: ""
	I0328 01:06:15.130324 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.130333 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:15.130339 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:15.130394 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:15.172381 1131323 cri.go:89] found id: ""
	I0328 01:06:15.172408 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.172417 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:15.172427 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:15.172440 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:15.225631 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:15.225674 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:15.241251 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:15.241294 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:15.319701 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:15.319731 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:15.319747 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:12.540755 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:14.541618 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:11.371580 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:13.371640 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:15.373147 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:13.911792 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:15.912479 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:17.913926 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:15.406813 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:15.406853 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:17.993893 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:18.007755 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:18.007843 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:18.047750 1131323 cri.go:89] found id: ""
	I0328 01:06:18.047777 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.047786 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:18.047797 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:18.047855 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:18.088264 1131323 cri.go:89] found id: ""
	I0328 01:06:18.088291 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.088303 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:18.088311 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:18.088369 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:18.127485 1131323 cri.go:89] found id: ""
	I0328 01:06:18.127514 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.127523 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:18.127530 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:18.127581 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:18.167462 1131323 cri.go:89] found id: ""
	I0328 01:06:18.167496 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.167510 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:18.167516 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:18.167571 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:18.209536 1131323 cri.go:89] found id: ""
	I0328 01:06:18.209571 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.209583 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:18.209591 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:18.209662 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:18.247565 1131323 cri.go:89] found id: ""
	I0328 01:06:18.247601 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.247614 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:18.247623 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:18.247701 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:18.288123 1131323 cri.go:89] found id: ""
	I0328 01:06:18.288162 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.288172 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:18.288179 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:18.288242 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:18.328132 1131323 cri.go:89] found id: ""
	I0328 01:06:18.328161 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.328170 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:18.328181 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:18.328193 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:18.403245 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:18.403287 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:18.403305 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:18.483446 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:18.483500 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:18.527357 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:18.527392 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:18.588402 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:18.588463 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:16.542137 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:18.542554 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:20.546396 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:17.872147 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:20.373000 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:20.412369 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:22.412661 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:21.103566 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:21.117538 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:21.117616 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:21.174215 1131323 cri.go:89] found id: ""
	I0328 01:06:21.174270 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.174284 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:21.174293 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:21.174364 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:21.238666 1131323 cri.go:89] found id: ""
	I0328 01:06:21.238707 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.238722 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:21.238730 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:21.238803 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:21.303510 1131323 cri.go:89] found id: ""
	I0328 01:06:21.303543 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.303553 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:21.303559 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:21.303614 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:21.345823 1131323 cri.go:89] found id: ""
	I0328 01:06:21.345853 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.345862 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:21.345870 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:21.345940 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:21.386205 1131323 cri.go:89] found id: ""
	I0328 01:06:21.386248 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.386261 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:21.386269 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:21.386335 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:21.427424 1131323 cri.go:89] found id: ""
	I0328 01:06:21.427457 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.427470 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:21.427478 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:21.427546 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:21.465054 1131323 cri.go:89] found id: ""
	I0328 01:06:21.465087 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.465099 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:21.465107 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:21.465177 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:21.507197 1131323 cri.go:89] found id: ""
	I0328 01:06:21.507229 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.507238 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:21.507248 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:21.507263 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:21.586657 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:21.586709 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:21.633702 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:21.633739 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:21.688960 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:21.688999 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:21.704675 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:21.704714 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:21.781612 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:24.282521 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:24.297096 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:24.297185 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:24.338745 1131323 cri.go:89] found id: ""
	I0328 01:06:24.338780 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.338793 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:24.338802 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:24.338872 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:24.375499 1131323 cri.go:89] found id: ""
	I0328 01:06:24.375528 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.375540 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:24.375548 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:24.375616 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:24.410939 1131323 cri.go:89] found id: ""
	I0328 01:06:24.410966 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.410978 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:24.410986 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:24.411042 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:24.455316 1131323 cri.go:89] found id: ""
	I0328 01:06:24.455345 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.455354 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:24.455360 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:24.455427 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:24.493177 1131323 cri.go:89] found id: ""
	I0328 01:06:24.493206 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.493219 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:24.493228 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:24.493300 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:24.533612 1131323 cri.go:89] found id: ""
	I0328 01:06:24.533648 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.533659 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:24.533668 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:24.533743 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:24.573960 1131323 cri.go:89] found id: ""
	I0328 01:06:24.573998 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.574014 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:24.574020 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:24.574074 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:24.617282 1131323 cri.go:89] found id: ""
	I0328 01:06:24.617319 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.617333 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:24.617346 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:24.617364 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:24.691660 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:24.691688 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:24.691707 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:24.773138 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:24.773180 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:24.820408 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:24.820440 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:24.875901 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:24.875940 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:23.041030 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:25.041064 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:22.874513 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:25.378939 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:24.413732 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:26.912433 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:27.392663 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:27.407958 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:27.408046 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:27.446750 1131323 cri.go:89] found id: ""
	I0328 01:06:27.446782 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.446792 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:27.446799 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:27.446872 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:27.489199 1131323 cri.go:89] found id: ""
	I0328 01:06:27.489236 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.489249 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:27.489258 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:27.489316 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:27.525754 1131323 cri.go:89] found id: ""
	I0328 01:06:27.525787 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.525796 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:27.525803 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:27.525861 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:27.560817 1131323 cri.go:89] found id: ""
	I0328 01:06:27.560849 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.560858 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:27.560866 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:27.560930 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:27.597706 1131323 cri.go:89] found id: ""
	I0328 01:06:27.597736 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.597744 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:27.597750 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:27.597821 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:27.635170 1131323 cri.go:89] found id: ""
	I0328 01:06:27.635211 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.635223 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:27.635232 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:27.635299 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:27.672043 1131323 cri.go:89] found id: ""
	I0328 01:06:27.672079 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.672091 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:27.672099 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:27.672166 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:27.711401 1131323 cri.go:89] found id: ""
	I0328 01:06:27.711435 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.711448 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:27.711468 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:27.711488 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:27.755172 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:27.755211 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:27.807588 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:27.807632 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:27.823557 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:27.823589 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:27.905292 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:27.905316 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:27.905329 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:27.041105 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:29.041205 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:27.873797 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:30.374214 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:29.412378 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:31.413211 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:30.491565 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:30.505601 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:30.505667 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:30.541894 1131323 cri.go:89] found id: ""
	I0328 01:06:30.541929 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.541940 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:30.541949 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:30.542029 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:30.581484 1131323 cri.go:89] found id: ""
	I0328 01:06:30.581514 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.581532 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:30.581538 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:30.581613 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:30.624788 1131323 cri.go:89] found id: ""
	I0328 01:06:30.624830 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.624842 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:30.624850 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:30.624922 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:30.664373 1131323 cri.go:89] found id: ""
	I0328 01:06:30.664403 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.664413 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:30.664420 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:30.664489 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:30.702885 1131323 cri.go:89] found id: ""
	I0328 01:06:30.702917 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.702928 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:30.702934 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:30.703006 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:30.748170 1131323 cri.go:89] found id: ""
	I0328 01:06:30.748205 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.748217 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:30.748226 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:30.748316 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:30.785218 1131323 cri.go:89] found id: ""
	I0328 01:06:30.785255 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.785268 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:30.785276 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:30.785343 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:30.825529 1131323 cri.go:89] found id: ""
	I0328 01:06:30.825555 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.825565 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:30.825575 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:30.825589 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:30.881353 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:30.881391 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:30.896682 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:30.896718 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:30.973356 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:30.973386 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:30.973402 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:31.049014 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:31.049047 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:33.594365 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:33.609372 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:33.609460 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:33.648699 1131323 cri.go:89] found id: ""
	I0328 01:06:33.648728 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.648749 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:33.648757 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:33.648829 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:33.686707 1131323 cri.go:89] found id: ""
	I0328 01:06:33.686744 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.686758 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:33.686767 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:33.686832 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:33.723091 1131323 cri.go:89] found id: ""
	I0328 01:06:33.723121 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.723130 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:33.723136 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:33.723187 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:33.763439 1131323 cri.go:89] found id: ""
	I0328 01:06:33.763471 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.763481 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:33.763488 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:33.763544 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:33.812236 1131323 cri.go:89] found id: ""
	I0328 01:06:33.812271 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.812285 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:33.812294 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:33.812365 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:33.849421 1131323 cri.go:89] found id: ""
	I0328 01:06:33.849454 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.849465 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:33.849473 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:33.849528 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:33.888020 1131323 cri.go:89] found id: ""
	I0328 01:06:33.888051 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.888065 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:33.888078 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:33.888145 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:33.925952 1131323 cri.go:89] found id: ""
	I0328 01:06:33.925990 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.926003 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:33.926016 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:33.926034 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:33.976695 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:33.976734 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:33.991708 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:33.991752 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:34.068244 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:34.068276 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:34.068293 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:34.155843 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:34.155885 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:31.041375 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:33.041526 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:35.541169 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:32.872009 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:34.873043 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:33.913191 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:36.413213 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:36.697480 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:36.712322 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:36.712420 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:36.749541 1131323 cri.go:89] found id: ""
	I0328 01:06:36.749570 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.749579 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:36.749587 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:36.749655 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:36.788226 1131323 cri.go:89] found id: ""
	I0328 01:06:36.788255 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.788264 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:36.788270 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:36.788323 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:36.823824 1131323 cri.go:89] found id: ""
	I0328 01:06:36.823856 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.823866 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:36.823872 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:36.823927 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:36.869331 1131323 cri.go:89] found id: ""
	I0328 01:06:36.869362 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.869371 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:36.869378 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:36.869473 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:36.907918 1131323 cri.go:89] found id: ""
	I0328 01:06:36.907950 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.907960 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:36.907966 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:36.908028 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:36.947708 1131323 cri.go:89] found id: ""
	I0328 01:06:36.947738 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.947749 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:36.947757 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:36.947824 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:36.986200 1131323 cri.go:89] found id: ""
	I0328 01:06:36.986251 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.986266 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:36.986275 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:36.986350 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:37.026670 1131323 cri.go:89] found id: ""
	I0328 01:06:37.026698 1131323 logs.go:276] 0 containers: []
	W0328 01:06:37.026708 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:37.026718 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:37.026732 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:37.079891 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:37.079933 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:37.094347 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:37.094378 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:37.168653 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:37.168681 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:37.168695 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:37.247909 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:37.247949 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:39.791285 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:39.807921 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:39.808000 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:39.851460 1131323 cri.go:89] found id: ""
	I0328 01:06:39.851499 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.851512 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:39.851520 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:39.851593 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:39.889506 1131323 cri.go:89] found id: ""
	I0328 01:06:39.889541 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.889554 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:39.889564 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:39.889632 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:39.930291 1131323 cri.go:89] found id: ""
	I0328 01:06:39.930321 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.930331 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:39.930337 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:39.930400 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:39.965121 1131323 cri.go:89] found id: ""
	I0328 01:06:39.965160 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.965174 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:39.965183 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:39.965252 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:40.003217 1131323 cri.go:89] found id: ""
	I0328 01:06:40.003248 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.003258 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:40.003264 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:40.003319 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:40.042702 1131323 cri.go:89] found id: ""
	I0328 01:06:40.042737 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.042749 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:40.042759 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:40.042826 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:40.079733 1131323 cri.go:89] found id: ""
	I0328 01:06:40.079769 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.079780 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:40.079788 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:40.079852 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:40.117066 1131323 cri.go:89] found id: ""
	I0328 01:06:40.117098 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.117107 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:40.117117 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:40.117130 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:40.158589 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:40.158623 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:40.210997 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:40.211049 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:40.225419 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:40.225453 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:40.305356 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:40.305385 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:40.305401 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:37.541534 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:39.541905 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:36.874220 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:39.373763 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:38.413719 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:40.912939 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:42.913528 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:42.896394 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:42.912285 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:42.912355 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:42.949381 1131323 cri.go:89] found id: ""
	I0328 01:06:42.949411 1131323 logs.go:276] 0 containers: []
	W0328 01:06:42.949420 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:42.949427 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:42.949496 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:42.985325 1131323 cri.go:89] found id: ""
	I0328 01:06:42.985358 1131323 logs.go:276] 0 containers: []
	W0328 01:06:42.985371 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:42.985388 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:42.985456 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:43.023570 1131323 cri.go:89] found id: ""
	I0328 01:06:43.023616 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.023630 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:43.023638 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:43.023714 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:43.062995 1131323 cri.go:89] found id: ""
	I0328 01:06:43.063025 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.063036 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:43.063042 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:43.063111 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:43.101666 1131323 cri.go:89] found id: ""
	I0328 01:06:43.101704 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.101713 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:43.101720 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:43.101789 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:43.150713 1131323 cri.go:89] found id: ""
	I0328 01:06:43.150745 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.150757 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:43.150765 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:43.150830 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:43.193449 1131323 cri.go:89] found id: ""
	I0328 01:06:43.193479 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.193487 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:43.193495 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:43.193559 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:43.237641 1131323 cri.go:89] found id: ""
	I0328 01:06:43.237673 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.237682 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:43.237698 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:43.237714 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:43.287282 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:43.287320 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:43.303307 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:43.303343 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:43.383597 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:43.383619 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:43.383632 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:43.467874 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:43.467914 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:42.041406 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:44.540550 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:41.874286 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:44.372393 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:45.410973 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:47.412852 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:46.011081 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:46.025731 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:46.025824 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:46.064336 1131323 cri.go:89] found id: ""
	I0328 01:06:46.064371 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.064385 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:46.064394 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:46.064451 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:46.104493 1131323 cri.go:89] found id: ""
	I0328 01:06:46.104530 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.104550 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:46.104559 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:46.104636 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:46.147546 1131323 cri.go:89] found id: ""
	I0328 01:06:46.147582 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.147594 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:46.147602 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:46.147656 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:46.186162 1131323 cri.go:89] found id: ""
	I0328 01:06:46.186197 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.186207 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:46.186213 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:46.186296 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:46.230412 1131323 cri.go:89] found id: ""
	I0328 01:06:46.230450 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.230464 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:46.230473 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:46.230552 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:46.266000 1131323 cri.go:89] found id: ""
	I0328 01:06:46.266037 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.266050 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:46.266059 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:46.266126 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:46.301031 1131323 cri.go:89] found id: ""
	I0328 01:06:46.301065 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.301077 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:46.301084 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:46.301155 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:46.339222 1131323 cri.go:89] found id: ""
	I0328 01:06:46.339248 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.339258 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:46.339271 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:46.339290 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:46.352558 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:46.352595 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:46.427283 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:46.427308 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:46.427325 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:46.512134 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:46.512178 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:46.558276 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:46.558307 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:49.113455 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:49.127554 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:49.127645 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:49.169380 1131323 cri.go:89] found id: ""
	I0328 01:06:49.169421 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.169435 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:49.169444 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:49.169511 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:49.204540 1131323 cri.go:89] found id: ""
	I0328 01:06:49.204568 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.204579 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:49.204596 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:49.204664 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:49.243074 1131323 cri.go:89] found id: ""
	I0328 01:06:49.243102 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.243112 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:49.243119 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:49.243170 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:49.281264 1131323 cri.go:89] found id: ""
	I0328 01:06:49.281301 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.281314 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:49.281322 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:49.281391 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:49.320473 1131323 cri.go:89] found id: ""
	I0328 01:06:49.320505 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.320514 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:49.320521 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:49.320592 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:49.357715 1131323 cri.go:89] found id: ""
	I0328 01:06:49.357749 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.357759 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:49.357766 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:49.357823 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:49.398427 1131323 cri.go:89] found id: ""
	I0328 01:06:49.398464 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.398477 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:49.398498 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:49.398576 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:49.439921 1131323 cri.go:89] found id: ""
	I0328 01:06:49.439956 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.439969 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:49.439982 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:49.440003 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:49.557260 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:49.557289 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:49.557312 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:49.640105 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:49.640169 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:49.683153 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:49.683185 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:49.737420 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:49.737463 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:46.541377 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:49.041761 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:46.374869 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:48.875897 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:49.912535 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:51.912893 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:52.253208 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:52.268572 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:52.268649 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:52.305136 1131323 cri.go:89] found id: ""
	I0328 01:06:52.305180 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.305193 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:52.305202 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:52.305273 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:52.344774 1131323 cri.go:89] found id: ""
	I0328 01:06:52.344806 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.344816 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:52.344823 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:52.344885 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:52.382127 1131323 cri.go:89] found id: ""
	I0328 01:06:52.382174 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.382185 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:52.382200 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:52.382280 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:52.421340 1131323 cri.go:89] found id: ""
	I0328 01:06:52.421368 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.421377 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:52.421383 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:52.421433 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:52.460046 1131323 cri.go:89] found id: ""
	I0328 01:06:52.460084 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.460100 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:52.460107 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:52.460164 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:52.500067 1131323 cri.go:89] found id: ""
	I0328 01:06:52.500094 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.500102 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:52.500109 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:52.500171 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:52.537614 1131323 cri.go:89] found id: ""
	I0328 01:06:52.537646 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.537671 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:52.537680 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:52.537745 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:52.577362 1131323 cri.go:89] found id: ""
	I0328 01:06:52.577392 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.577402 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:52.577417 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:52.577434 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:52.633638 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:52.633689 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:52.650762 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:52.650796 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:52.729436 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:52.729470 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:52.729484 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:52.818193 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:52.818248 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:51.540541 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:53.541340 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:55.542165 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:51.376916 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:53.872313 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:55.873335 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:54.411986 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:56.412892 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:55.362950 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:55.378461 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:55.378577 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:55.419968 1131323 cri.go:89] found id: ""
	I0328 01:06:55.419995 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.420005 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:55.420010 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:55.420072 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:55.464308 1131323 cri.go:89] found id: ""
	I0328 01:06:55.464341 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.464350 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:55.464357 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:55.464421 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:55.523059 1131323 cri.go:89] found id: ""
	I0328 01:06:55.523092 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.523106 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:55.523114 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:55.523186 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:55.570957 1131323 cri.go:89] found id: ""
	I0328 01:06:55.570990 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.571004 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:55.571013 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:55.571077 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:55.606712 1131323 cri.go:89] found id: ""
	I0328 01:06:55.606739 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.606749 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:55.606755 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:55.606817 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:55.646445 1131323 cri.go:89] found id: ""
	I0328 01:06:55.646477 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.646486 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:55.646493 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:55.646548 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:55.685176 1131323 cri.go:89] found id: ""
	I0328 01:06:55.685208 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.685217 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:55.685225 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:55.685289 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:55.722948 1131323 cri.go:89] found id: ""
	I0328 01:06:55.722984 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.722995 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:55.723006 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:55.723022 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:55.797332 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:55.797368 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:55.797385 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:55.877648 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:55.877688 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:55.918966 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:55.918997 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:55.971226 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:55.971272 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:58.488464 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:58.504999 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:58.505088 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:58.549290 1131323 cri.go:89] found id: ""
	I0328 01:06:58.549325 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.549338 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:58.549347 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:58.549414 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:58.589222 1131323 cri.go:89] found id: ""
	I0328 01:06:58.589252 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.589261 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:58.589271 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:58.589337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:58.626470 1131323 cri.go:89] found id: ""
	I0328 01:06:58.626499 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.626508 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:58.626514 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:58.626578 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:58.671634 1131323 cri.go:89] found id: ""
	I0328 01:06:58.671663 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.671674 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:58.671683 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:58.671744 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:58.707335 1131323 cri.go:89] found id: ""
	I0328 01:06:58.707370 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.707381 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:58.707390 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:58.707459 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:58.745635 1131323 cri.go:89] found id: ""
	I0328 01:06:58.745666 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.745679 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:58.745687 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:58.745752 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:58.792172 1131323 cri.go:89] found id: ""
	I0328 01:06:58.792205 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.792216 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:58.792225 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:58.792287 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:58.840027 1131323 cri.go:89] found id: ""
	I0328 01:06:58.840063 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.840075 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:58.840089 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:58.840108 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:58.921964 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:58.921988 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:58.922003 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:59.016935 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:59.016980 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:59.065747 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:59.065788 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:59.119189 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:59.119231 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:58.042362 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:00.544351 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:57.875649 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:00.371953 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:58.406154 1130949 pod_ready.go:81] duration metric: took 4m0.000981669s for pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace to be "Ready" ...
	E0328 01:06:58.406192 1130949 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0328 01:06:58.406218 1130949 pod_ready.go:38] duration metric: took 4m11.713667334s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
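The pod_ready wait that expires here keeps re-checking whether the pod's Ready condition is True until its 4m0s budget runs out. A rough client-go sketch of that kind of readiness poll (an illustration under assumptions, not minikube's actual code; the namespace, pod name and timeout are taken from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(4 * time.Minute) // same 4m0s budget as the log
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.TODO(), "metrics-server-57f55c9bc5-swsxp", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}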
	I0328 01:06:58.406275 1130949 kubeadm.go:591] duration metric: took 4m19.018883002s to restartPrimaryControlPlane
	W0328 01:06:58.406372 1130949 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:06:58.406432 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:07:01.637081 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:07:01.652557 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:07:01.652634 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:07:01.691795 1131323 cri.go:89] found id: ""
	I0328 01:07:01.691832 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.691846 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:07:01.691854 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:07:01.691927 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:07:01.732815 1131323 cri.go:89] found id: ""
	I0328 01:07:01.732850 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.732861 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:07:01.732868 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:07:01.732938 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:07:01.776370 1131323 cri.go:89] found id: ""
	I0328 01:07:01.776408 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.776422 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:07:01.776431 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:07:01.776501 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:07:01.821260 1131323 cri.go:89] found id: ""
	I0328 01:07:01.821290 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.821301 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:07:01.821308 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:07:01.821377 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:07:01.860666 1131323 cri.go:89] found id: ""
	I0328 01:07:01.860696 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.860708 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:07:01.860719 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:07:01.860787 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:07:01.898255 1131323 cri.go:89] found id: ""
	I0328 01:07:01.898291 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.898304 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:07:01.898314 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:07:01.898383 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:07:01.937770 1131323 cri.go:89] found id: ""
	I0328 01:07:01.937809 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.937822 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:07:01.937830 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:07:01.937901 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:07:01.976946 1131323 cri.go:89] found id: ""
	I0328 01:07:01.976981 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.976994 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:07:01.977008 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:07:01.977027 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:07:02.062804 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:07:02.062845 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:07:02.110750 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:07:02.110783 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:07:02.179633 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:07:02.179677 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:07:02.203131 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:07:02.203181 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:07:02.303281 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:07:04.804238 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:07:04.819654 1131323 kubeadm.go:591] duration metric: took 4m2.527630194s to restartPrimaryControlPlane
	W0328 01:07:04.819747 1131323 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:07:04.819787 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:07:03.041692 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:05.540478 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:02.372472 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:04.376413 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:07.322821 1131323 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.50300166s)
	I0328 01:07:07.322918 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:07:07.338692 1131323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:07:07.349812 1131323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:07:07.361566 1131323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:07:07.361597 1131323 kubeadm.go:156] found existing configuration files:
	
	I0328 01:07:07.361667 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:07:07.372926 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:07:07.373008 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:07:07.383770 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:07:07.394260 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:07:07.394332 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:07:07.405874 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:07:07.417177 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:07:07.417254 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:07:07.428589 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:07:07.438788 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:07:07.438845 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
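The sequence above applies a simple rule: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, and is otherwise removed so the following kubeadm init can regenerate it. A minimal local sketch of that rule (illustrative helper name; the real grep and rm commands run over SSH inside the VM):

package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// cleanStaleKubeconfig removes path unless it references the expected
// control-plane endpoint, mirroring the grep-then-rm sequence in the log.
func cleanStaleKubeconfig(path string) error {
	data, err := os.ReadFile(path)
	if err != nil || !strings.Contains(string(data), endpoint) {
		// Missing or stale: remove it so kubeadm can write a fresh one.
		return os.Remove(path)
	}
	return nil
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := cleanStaleKubeconfig(f); err != nil && !os.IsNotExist(err) {
			fmt.Printf("cleanup of %s failed: %v\n", f, err)
		}
	}
}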
	I0328 01:07:07.449649 1131323 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:07:07.533886 1131323 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0328 01:07:07.533989 1131323 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:07:07.693599 1131323 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:07:07.693736 1131323 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:07:07.693852 1131323 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:07:07.910557 1131323 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:07:07.912634 1131323 out.go:204]   - Generating certificates and keys ...
	I0328 01:07:07.912743 1131323 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:07:07.912855 1131323 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:07:07.912984 1131323 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:07:07.913098 1131323 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:07:07.913212 1131323 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:07:07.913298 1131323 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:07:07.913384 1131323 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:07:07.913569 1131323 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:07:07.913947 1131323 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:07:07.914429 1131323 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:07:07.914649 1131323 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:07:07.914728 1131323 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:07:08.225778 1131323 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:07:08.353927 1131323 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:07:08.631240 1131323 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:07:08.824445 1131323 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:07:08.840240 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:07:08.841200 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:07:08.841315 1131323 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:07:08.997129 1131323 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:07:08.999073 1131323 out.go:204]   - Booting up control plane ...
	I0328 01:07:08.999224 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:07:09.014811 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:07:09.015898 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:07:09.016727 1131323 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:07:09.019426 1131323 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:07:07.541363 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:10.041094 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:06.874606 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:09.372537 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:12.540137 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:14.541608 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:11.372643 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:13.873029 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:16.541814 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:19.047225 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:16.372556 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:18.871954 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:20.872047 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:21.542880 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:24.041786 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:22.872845 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:24.873747 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:26.042186 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:28.541303 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:30.540610 1130949 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.134147754s)
	I0328 01:07:30.540688 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:07:30.558971 1130949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:07:30.570331 1130949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:07:30.581192 1130949 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:07:30.581246 1130949 kubeadm.go:156] found existing configuration files:
	
	I0328 01:07:30.581306 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:07:30.592337 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:07:30.592410 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:07:30.603288 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:07:30.613714 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:07:30.613776 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:07:30.624281 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:07:30.634569 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:07:30.634644 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:07:30.647279 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:07:30.658554 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:07:30.658646 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:07:30.670364 1130949 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:07:30.730349 1130949 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0328 01:07:30.730414 1130949 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:07:30.887056 1130949 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:07:30.887234 1130949 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:07:30.887385 1130949 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:07:31.104288 1130949 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:07:27.373135 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:29.373436 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:31.106496 1130949 out.go:204]   - Generating certificates and keys ...
	I0328 01:07:31.106628 1130949 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:07:31.106697 1130949 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:07:31.106765 1130949 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:07:31.106826 1130949 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:07:31.106892 1130949 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:07:31.107528 1130949 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:07:31.108302 1130949 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:07:31.112246 1130949 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:07:31.112762 1130949 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:07:31.113711 1130949 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:07:31.115230 1130949 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:07:31.115284 1130949 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:07:31.297632 1130949 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:07:32.446275 1130949 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:07:32.565869 1130949 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:07:32.641288 1130949 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:07:32.817229 1130949 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:07:32.817814 1130949 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:07:32.820366 1130949 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:07:32.822328 1130949 out.go:204]   - Booting up control plane ...
	I0328 01:07:32.822467 1130949 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:07:32.822550 1130949 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:07:32.822990 1130949 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:07:32.846800 1130949 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:07:32.847829 1130949 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:07:32.847902 1130949 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:07:31.044103 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:33.542106 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:35.542875 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:31.873591 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:33.875737 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:35.881819 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:32.992001 1130949 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:07:38.997010 1130949 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003888 seconds
	I0328 01:07:39.012971 1130949 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 01:07:39.036328 1130949 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 01:07:39.569806 1130949 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 01:07:39.570135 1130949 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-808809 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 01:07:40.085165 1130949 kubeadm.go:309] [bootstrap-token] Using token: 4zk5zi.uttj4zihedk5oj6k
	I0328 01:07:40.086719 1130949 out.go:204]   - Configuring RBAC rules ...
	I0328 01:07:40.086873 1130949 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 01:07:40.096373 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 01:07:40.106484 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 01:07:40.110525 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 01:07:40.120015 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 01:07:40.129060 1130949 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 01:07:40.141167 1130949 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 01:07:40.415429 1130949 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 01:07:40.507275 1130949 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 01:07:40.507333 1130949 kubeadm.go:309] 
	I0328 01:07:40.507551 1130949 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 01:07:40.507617 1130949 kubeadm.go:309] 
	I0328 01:07:40.507860 1130949 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 01:07:40.507891 1130949 kubeadm.go:309] 
	I0328 01:07:40.507947 1130949 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 01:07:40.508057 1130949 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 01:07:40.508140 1130949 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 01:07:40.508157 1130949 kubeadm.go:309] 
	I0328 01:07:40.508250 1130949 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 01:07:40.508264 1130949 kubeadm.go:309] 
	I0328 01:07:40.508329 1130949 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 01:07:40.508344 1130949 kubeadm.go:309] 
	I0328 01:07:40.508421 1130949 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 01:07:40.508539 1130949 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 01:07:40.508626 1130949 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 01:07:40.508632 1130949 kubeadm.go:309] 
	I0328 01:07:40.508804 1130949 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 01:07:40.508970 1130949 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 01:07:40.508990 1130949 kubeadm.go:309] 
	I0328 01:07:40.509155 1130949 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4zk5zi.uttj4zihedk5oj6k \
	I0328 01:07:40.509474 1130949 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 \
	I0328 01:07:40.509514 1130949 kubeadm.go:309] 	--control-plane 
	I0328 01:07:40.509524 1130949 kubeadm.go:309] 
	I0328 01:07:40.509641 1130949 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 01:07:40.509655 1130949 kubeadm.go:309] 
	I0328 01:07:40.509767 1130949 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4zk5zi.uttj4zihedk5oj6k \
	I0328 01:07:40.509932 1130949 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 
	I0328 01:07:40.510139 1130949 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:07:40.510157 1130949 cni.go:84] Creating CNI manager for ""
	I0328 01:07:40.510166 1130949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:07:40.512099 1130949 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:07:38.041290 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:40.041569 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:38.373789 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:40.374369 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:40.513314 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:07:40.563257 1130949 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:07:40.627024 1130949 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 01:07:40.627097 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:40.627137 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-808809 minikube.k8s.io/updated_at=2024_03_28T01_07_40_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=embed-certs-808809 minikube.k8s.io/primary=true
	I0328 01:07:40.928916 1130949 ops.go:34] apiserver oom_adj: -16
	I0328 01:07:40.929138 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:41.429797 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:41.930103 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:42.429366 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:42.540932 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:44.035055 1131600 pod_ready.go:81] duration metric: took 4m0.000860608s for pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace to be "Ready" ...
	E0328 01:07:44.035094 1131600 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0328 01:07:44.035124 1131600 pod_ready.go:38] duration metric: took 4m14.608998431s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:07:44.035180 1131600 kubeadm.go:591] duration metric: took 4m23.470228903s to restartPrimaryControlPlane
	W0328 01:07:44.035292 1131600 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:07:44.035344 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:07:42.375179 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:44.876120 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:42.929464 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:43.429369 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:43.929241 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:44.429904 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:44.930251 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:45.429816 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:45.930177 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:46.429416 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:46.929152 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:47.429708 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:49.021732 1131323 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0328 01:07:49.021890 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:07:49.022195 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:07:47.373358 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:49.872482 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:47.929139 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:48.429732 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:48.930207 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:49.429230 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:49.929298 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:50.429919 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:50.929364 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:51.429403 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:51.929356 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:52.429410 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:52.929894 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:53.043365 1130949 kubeadm.go:1107] duration metric: took 12.416334145s to wait for elevateKubeSystemPrivileges
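The repeated "kubectl get sa default" calls above act as a readiness gate: kubeadm has finished, but the cluster is only treated as usable once the default service account exists, so the command is retried roughly every 500ms until it succeeds. A minimal sketch of that wait (local kubectl rather than minikube's SSH runner; the kubeconfig path and retry cadence match the log, the timeout is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount polls `kubectl get sa default` until it
// exits successfully or the timeout elapses.
func waitForDefaultServiceAccount(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the service account exists, bootstrap is usable
		}
		time.Sleep(500 * time.Millisecond) // same retry cadence as the log
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultServiceAccount("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("default service account is available")
}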
	W0328 01:07:53.043410 1130949 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 01:07:53.043419 1130949 kubeadm.go:393] duration metric: took 5m13.709259014s to StartCluster
	I0328 01:07:53.043445 1130949 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:07:53.043560 1130949 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:07:53.045798 1130949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:07:53.046158 1130949 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 01:07:53.047867 1130949 out.go:177] * Verifying Kubernetes components...
	I0328 01:07:53.046201 1130949 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 01:07:53.046412 1130949 config.go:182] Loaded profile config "embed-certs-808809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:07:53.049163 1130949 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-808809"
	I0328 01:07:53.049175 1130949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:07:53.049195 1130949 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-808809"
	W0328 01:07:53.049204 1130949 addons.go:243] addon storage-provisioner should already be in state true
	I0328 01:07:53.049230 1130949 host.go:66] Checking if "embed-certs-808809" exists ...
	I0328 01:07:53.049205 1130949 addons.go:69] Setting default-storageclass=true in profile "embed-certs-808809"
	I0328 01:07:53.049250 1130949 addons.go:69] Setting metrics-server=true in profile "embed-certs-808809"
	I0328 01:07:53.049271 1130949 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-808809"
	I0328 01:07:53.049309 1130949 addons.go:234] Setting addon metrics-server=true in "embed-certs-808809"
	W0328 01:07:53.049327 1130949 addons.go:243] addon metrics-server should already be in state true
	I0328 01:07:53.049371 1130949 host.go:66] Checking if "embed-certs-808809" exists ...
	I0328 01:07:53.049530 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.049569 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.049696 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.049729 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.049795 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.049838 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.067042 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36143
	I0328 01:07:53.067078 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33653
	I0328 01:07:53.067536 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.067599 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.068156 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.068184 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.068289 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.068315 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.068583 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.068669 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.069095 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.069121 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.069245 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.069276 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.069991 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43737
	I0328 01:07:53.070509 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.071078 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.071103 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.071480 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.071705 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.075617 1130949 addons.go:234] Setting addon default-storageclass=true in "embed-certs-808809"
	W0328 01:07:53.075659 1130949 addons.go:243] addon default-storageclass should already be in state true
	I0328 01:07:53.075703 1130949 host.go:66] Checking if "embed-certs-808809" exists ...
	I0328 01:07:53.075982 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.076011 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.085991 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I0328 01:07:53.086508 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.086724 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36033
	I0328 01:07:53.087105 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.087122 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.087158 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.087646 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.087667 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.087706 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.087922 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.088031 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.088225 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.089941 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:07:53.090168 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:07:53.091945 1130949 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:07:53.093023 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41613
	I0328 01:07:53.093537 1130949 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:07:53.093553 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 01:07:53.093563 1130949 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 01:07:53.095147 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 01:07:53.095165 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 01:07:53.093574 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:07:53.095185 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:07:53.093939 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.096301 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.096322 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.096662 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.097251 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.097306 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.098907 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.099014 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.099513 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:07:53.099546 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.099996 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:07:53.100126 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:07:53.100177 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:07:53.100187 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.100287 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:07:53.100392 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:07:53.100470 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:07:53.100576 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:07:53.100709 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:07:53.100796 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:07:53.114056 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41121
	I0328 01:07:53.114680 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.115279 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.115313 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.115721 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.116061 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.118022 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:07:53.118348 1130949 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 01:07:53.118370 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 01:07:53.118391 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:07:53.121337 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.121699 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:07:53.121728 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.121906 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:07:53.122084 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:07:53.122266 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:07:53.122414 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
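The "scp memory --> /etc/kubernetes/addons/..." and "new ssh client" lines above describe minikube pushing addon manifests that it holds in memory to the guest VM over SSH using the host's id_rsa key. The sketch below illustrates that idea with plain golang.org/x/crypto/ssh; it is not minikube's sshutil/ssh_runner code, and the key path and remote file name (example.yaml) are placeholders.

package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder key path; the log shows .minikube/machines/<profile>/id_rsa being used.
	key, err := os.ReadFile("/path/to/machines/embed-certs-808809/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.72.210:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	manifest := []byte("# addon manifest bytes held in memory\n")
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	// Stream the in-memory bytes straight into a remote file, the way "scp memory" is logged.
	sess.Stdin = bytes.NewReader(manifest)
	if err := sess.Run("sudo tee /etc/kubernetes/addons/example.yaml >/dev/null"); err != nil {
		panic(err)
	}
	fmt.Println("copied", len(manifest), "bytes")
}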
	I0328 01:07:53.242121 1130949 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:07:53.267118 1130949 node_ready.go:35] waiting up to 6m0s for node "embed-certs-808809" to be "Ready" ...
	I0328 01:07:53.276640 1130949 node_ready.go:49] node "embed-certs-808809" has status "Ready":"True"
	I0328 01:07:53.276670 1130949 node_ready.go:38] duration metric: took 9.513599ms for node "embed-certs-808809" to be "Ready" ...
	I0328 01:07:53.276683 1130949 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:07:53.283091 1130949 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-2rn6k" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:53.325201 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 01:07:53.325234 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 01:07:53.341335 1130949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 01:07:53.361084 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 01:07:53.361109 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 01:07:53.393089 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:07:53.393116 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 01:07:53.419245 1130949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:07:53.445663 1130949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:07:53.515515 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:53.515555 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:53.515871 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:53.515891 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:53.515901 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:53.515910 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:53.516173 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:53.516253 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:53.516212 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:53.527854 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:53.527882 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:53.528152 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:53.528173 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:53.528220 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.159164 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159192 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159264 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159292 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159523 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.159597 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.159619 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.159637 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.159648 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159658 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159660 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.159667 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.159688 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159696 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159981 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.160037 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.160056 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.160062 1130949 addons.go:470] Verifying addon metrics-server=true in "embed-certs-808809"
	I0328 01:07:54.160088 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.160090 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.160106 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.162879 1130949 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0328 01:07:54.022449 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:07:54.022704 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:07:52.372314 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:54.372913 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:54.164263 1130949 addons.go:505] duration metric: took 1.11806212s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0328 01:07:55.294728 1130949 pod_ready.go:102] pod "coredns-76f75df574-2rn6k" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:55.790690 1130949 pod_ready.go:92] pod "coredns-76f75df574-2rn6k" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.790717 1130949 pod_ready.go:81] duration metric: took 2.50759161s for pod "coredns-76f75df574-2rn6k" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.790726 1130949 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-pgcdh" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.796249 1130949 pod_ready.go:92] pod "coredns-76f75df574-pgcdh" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.796279 1130949 pod_ready.go:81] duration metric: took 5.54233ms for pod "coredns-76f75df574-pgcdh" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.796291 1130949 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.801226 1130949 pod_ready.go:92] pod "etcd-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.801254 1130949 pod_ready.go:81] duration metric: took 4.956106ms for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.801263 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.814571 1130949 pod_ready.go:92] pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.814599 1130949 pod_ready.go:81] duration metric: took 13.328662ms for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.814613 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.825995 1130949 pod_ready.go:92] pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.826022 1130949 pod_ready.go:81] duration metric: took 11.401096ms for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.826035 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tjbhs" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.188116 1130949 pod_ready.go:92] pod "kube-proxy-tjbhs" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:56.188147 1130949 pod_ready.go:81] duration metric: took 362.103962ms for pod "kube-proxy-tjbhs" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.188161 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.588294 1130949 pod_ready.go:92] pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:56.588334 1130949 pod_ready.go:81] duration metric: took 400.16517ms for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.588347 1130949 pod_ready.go:38] duration metric: took 3.311651338s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
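The pod_ready waits above repeatedly check each system pod until its Ready condition reports True (or the 6m0s budget runs out). A minimal client-go sketch of that style of poll follows; the kubeconfig path is a placeholder, and this is an illustration rather than minikube's own pod_ready helper.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the named pod reports the PodReady condition as True.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep retrying on transient errors
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = waitPodReady(context.Background(), cs, "kube-system", "coredns-76f75df574-2rn6k", 6*time.Minute)
	fmt.Println("ready:", err == nil)
}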
	I0328 01:07:56.588369 1130949 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:07:56.588445 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:07:56.606404 1130949 api_server.go:72] duration metric: took 3.560197315s to wait for apiserver process to appear ...
	I0328 01:07:56.606435 1130949 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:07:56.606460 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:07:56.612218 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 200:
	ok
	I0328 01:07:56.613459 1130949 api_server.go:141] control plane version: v1.29.3
	I0328 01:07:56.613481 1130949 api_server.go:131] duration metric: took 7.039378ms to wait for apiserver health ...
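The healthz check above is a plain HTTPS GET against the apiserver that is expected to return 200 with the body "ok". A small sketch of an equivalent probe, assuming the address from the log; it skips TLS verification for brevity, whereas minikube trusts the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.72.210:8443/healthz") // address taken from the log above
	if err != nil {
		fmt.Println("healthz error:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, body) // expect 200 and "ok"
}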
	I0328 01:07:56.613490 1130949 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:07:56.793192 1130949 system_pods.go:59] 9 kube-system pods found
	I0328 01:07:56.793227 1130949 system_pods.go:61] "coredns-76f75df574-2rn6k" [2a77c778-dd83-4e2e-b45a-ca16e3922b45] Running
	I0328 01:07:56.793232 1130949 system_pods.go:61] "coredns-76f75df574-pgcdh" [52452b24-490e-4999-b700-198c6f9b2fa1] Running
	I0328 01:07:56.793236 1130949 system_pods.go:61] "etcd-embed-certs-808809" [cba526ab-c8ca-4b90-abb8-533d1bf6cb4f] Running
	I0328 01:07:56.793239 1130949 system_pods.go:61] "kube-apiserver-embed-certs-808809" [02934ac6-1e07-4cbe-ba2f-4149e59d6044] Running
	I0328 01:07:56.793243 1130949 system_pods.go:61] "kube-controller-manager-embed-certs-808809" [b3350ff9-7abb-407e-939c-fcde61186c17] Running
	I0328 01:07:56.793246 1130949 system_pods.go:61] "kube-proxy-tjbhs" [cdb30ca1-5165-4e24-888a-df79af7987d0] Running
	I0328 01:07:56.793249 1130949 system_pods.go:61] "kube-scheduler-embed-certs-808809" [5b52a22d-6ffb-468b-b7b9-8f89b0dee3b8] Running
	I0328 01:07:56.793255 1130949 system_pods.go:61] "metrics-server-57f55c9bc5-bqbfl" [8434fd7d-838b-4cf2-96a3-e4d613633871] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:07:56.793260 1130949 system_pods.go:61] "storage-provisioner" [20c1951e-7da8-4025-bbcf-2da60f87f3ab] Running
	I0328 01:07:56.793268 1130949 system_pods.go:74] duration metric: took 179.77213ms to wait for pod list to return data ...
	I0328 01:07:56.793275 1130949 default_sa.go:34] waiting for default service account to be created ...
	I0328 01:07:56.988234 1130949 default_sa.go:45] found service account: "default"
	I0328 01:07:56.988274 1130949 default_sa.go:55] duration metric: took 194.984089ms for default service account to be created ...
	I0328 01:07:56.988288 1130949 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 01:07:57.192153 1130949 system_pods.go:86] 9 kube-system pods found
	I0328 01:07:57.192188 1130949 system_pods.go:89] "coredns-76f75df574-2rn6k" [2a77c778-dd83-4e2e-b45a-ca16e3922b45] Running
	I0328 01:07:57.192194 1130949 system_pods.go:89] "coredns-76f75df574-pgcdh" [52452b24-490e-4999-b700-198c6f9b2fa1] Running
	I0328 01:07:57.192200 1130949 system_pods.go:89] "etcd-embed-certs-808809" [cba526ab-c8ca-4b90-abb8-533d1bf6cb4f] Running
	I0328 01:07:57.192205 1130949 system_pods.go:89] "kube-apiserver-embed-certs-808809" [02934ac6-1e07-4cbe-ba2f-4149e59d6044] Running
	I0328 01:07:57.192210 1130949 system_pods.go:89] "kube-controller-manager-embed-certs-808809" [b3350ff9-7abb-407e-939c-fcde61186c17] Running
	I0328 01:07:57.192214 1130949 system_pods.go:89] "kube-proxy-tjbhs" [cdb30ca1-5165-4e24-888a-df79af7987d0] Running
	I0328 01:07:57.192218 1130949 system_pods.go:89] "kube-scheduler-embed-certs-808809" [5b52a22d-6ffb-468b-b7b9-8f89b0dee3b8] Running
	I0328 01:07:57.192225 1130949 system_pods.go:89] "metrics-server-57f55c9bc5-bqbfl" [8434fd7d-838b-4cf2-96a3-e4d613633871] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:07:57.192230 1130949 system_pods.go:89] "storage-provisioner" [20c1951e-7da8-4025-bbcf-2da60f87f3ab] Running
	I0328 01:07:57.192239 1130949 system_pods.go:126] duration metric: took 203.942878ms to wait for k8s-apps to be running ...
	I0328 01:07:57.192249 1130949 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:07:57.192301 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:07:57.209840 1130949 system_svc.go:56] duration metric: took 17.576605ms WaitForService to wait for kubelet
	I0328 01:07:57.209883 1130949 kubeadm.go:576] duration metric: took 4.163683877s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:07:57.209918 1130949 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:07:57.388321 1130949 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:07:57.388347 1130949 node_conditions.go:123] node cpu capacity is 2
	I0328 01:07:57.388357 1130949 node_conditions.go:105] duration metric: took 178.433633ms to run NodePressure ...
	I0328 01:07:57.388370 1130949 start.go:240] waiting for startup goroutines ...
	I0328 01:07:57.388377 1130949 start.go:245] waiting for cluster config update ...
	I0328 01:07:57.388387 1130949 start.go:254] writing updated cluster config ...
	I0328 01:07:57.388784 1130949 ssh_runner.go:195] Run: rm -f paused
	I0328 01:07:57.446699 1130949 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0328 01:07:57.448951 1130949 out.go:177] * Done! kubectl is now configured to use "embed-certs-808809" cluster and "default" namespace by default
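The "minor skew: 0" note just above compares the kubectl client and cluster minor versions. A trivial sketch of that comparison follows; this is assumed logic for illustration, not minikube's exact code.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorOf returns the minor component of a version like "1.29.3" or "v1.29.3".
func minorOf(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0
	}
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	client, cluster := "1.29.3", "1.29.3" // versions reported in the log
	skew := minorOf(client) - minorOf(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
}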
	I0328 01:07:56.373123 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:58.872454 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:04.023273 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:08:04.023535 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:08:01.372711 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:03.877734 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:06.374031 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:07.366164 1130827 pod_ready.go:81] duration metric: took 4m0.000887668s for pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace to be "Ready" ...
	E0328 01:08:07.366245 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0328 01:08:07.366271 1130827 pod_ready.go:38] duration metric: took 4m7.906522585s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:08:07.366301 1130827 kubeadm.go:591] duration metric: took 4m15.27169704s to restartPrimaryControlPlane
	W0328 01:08:07.366368 1130827 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:08:07.366406 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:08:16.281280 1131600 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.245904746s)
	I0328 01:08:16.281365 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:08:16.298463 1131600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:08:16.310406 1131600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:08:16.321387 1131600 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:08:16.321415 1131600 kubeadm.go:156] found existing configuration files:
	
	I0328 01:08:16.321475 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0328 01:08:16.331965 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:08:16.332033 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:08:16.343030 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0328 01:08:16.353193 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:08:16.353254 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:08:16.363865 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0328 01:08:16.374276 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:08:16.374346 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:08:16.385300 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0328 01:08:16.396118 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:08:16.396181 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
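The cleanup above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it (here none of the files exist after the reset, so every grep exits with status 2 and the rm is a no-op). A hedged local sketch of the same check; minikube performs it over SSH via ssh_runner rather than on the local filesystem.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: drop it so kubeadm regenerates it on init.
			os.Remove(f)
			fmt.Println("removed stale config:", f)
		}
	}
}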
	I0328 01:08:16.406896 1131600 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:08:16.626615 1131600 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:08:24.024091 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:08:24.024388 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:08:25.420974 1131600 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0328 01:08:25.421059 1131600 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:08:25.421154 1131600 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:08:25.421300 1131600 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:08:25.421547 1131600 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:08:25.421649 1131600 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:08:25.423435 1131600 out.go:204]   - Generating certificates and keys ...
	I0328 01:08:25.423549 1131600 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:08:25.423630 1131600 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:08:25.423749 1131600 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:08:25.423844 1131600 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:08:25.423956 1131600 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:08:25.424058 1131600 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:08:25.424166 1131600 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:08:25.424260 1131600 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:08:25.424375 1131600 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:08:25.424489 1131600 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:08:25.424552 1131600 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:08:25.424642 1131600 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:08:25.424700 1131600 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:08:25.424765 1131600 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:08:25.424832 1131600 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:08:25.424920 1131600 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:08:25.424982 1131600 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:08:25.425106 1131600 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:08:25.425207 1131600 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:08:25.426863 1131600 out.go:204]   - Booting up control plane ...
	I0328 01:08:25.427001 1131600 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:08:25.427108 1131600 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:08:25.427205 1131600 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:08:25.427327 1131600 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:08:25.427431 1131600 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:08:25.427491 1131600 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:08:25.427686 1131600 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:08:25.427784 1131600 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003000 seconds
	I0328 01:08:25.427897 1131600 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 01:08:25.428032 1131600 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 01:08:25.428109 1131600 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 01:08:25.428325 1131600 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-283961 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 01:08:25.428408 1131600 kubeadm.go:309] [bootstrap-token] Using token: g6jusr.8nbqw788gjbu8fwz
	I0328 01:08:25.430595 1131600 out.go:204]   - Configuring RBAC rules ...
	I0328 01:08:25.430734 1131600 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 01:08:25.430837 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 01:08:25.430981 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 01:08:25.431163 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 01:08:25.431357 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 01:08:25.431481 1131600 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 01:08:25.431670 1131600 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 01:08:25.431726 1131600 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 01:08:25.431767 1131600 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 01:08:25.431774 1131600 kubeadm.go:309] 
	I0328 01:08:25.431819 1131600 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 01:08:25.431829 1131600 kubeadm.go:309] 
	I0328 01:08:25.431893 1131600 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 01:08:25.431900 1131600 kubeadm.go:309] 
	I0328 01:08:25.431934 1131600 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 01:08:25.432028 1131600 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 01:08:25.432089 1131600 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 01:08:25.432114 1131600 kubeadm.go:309] 
	I0328 01:08:25.432178 1131600 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 01:08:25.432186 1131600 kubeadm.go:309] 
	I0328 01:08:25.432245 1131600 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 01:08:25.432255 1131600 kubeadm.go:309] 
	I0328 01:08:25.432342 1131600 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 01:08:25.432454 1131600 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 01:08:25.432566 1131600 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 01:08:25.432576 1131600 kubeadm.go:309] 
	I0328 01:08:25.432719 1131600 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 01:08:25.432812 1131600 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 01:08:25.432825 1131600 kubeadm.go:309] 
	I0328 01:08:25.432914 1131600 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token g6jusr.8nbqw788gjbu8fwz \
	I0328 01:08:25.433018 1131600 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 \
	I0328 01:08:25.433052 1131600 kubeadm.go:309] 	--control-plane 
	I0328 01:08:25.433058 1131600 kubeadm.go:309] 
	I0328 01:08:25.433135 1131600 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 01:08:25.433143 1131600 kubeadm.go:309] 
	I0328 01:08:25.433222 1131600 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token g6jusr.8nbqw788gjbu8fwz \
	I0328 01:08:25.433318 1131600 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 
	I0328 01:08:25.433337 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:08:25.433346 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:08:25.434943 1131600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:08:25.436103 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:08:25.483149 1131600 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
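The bridge CNI step above writes a single 457-byte conflist into /etc/cni/net.d. The log does not include the file's contents; the sketch below writes an illustrative bridge-plus-portmap configuration of the usual shape (the subnet and flag values are assumptions, not the actual 1-k8s.conflist).

package main

import "os"

// Illustrative bridge CNI config of the kind minikube drops into /etc/cni/net.d;
// the real 1-k8s.conflist differs in detail.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}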
	I0328 01:08:25.508422 1131600 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 01:08:25.508514 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:25.508518 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-283961 minikube.k8s.io/updated_at=2024_03_28T01_08_25_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=default-k8s-diff-port-283961 minikube.k8s.io/primary=true
	I0328 01:08:25.537955 1131600 ops.go:34] apiserver oom_adj: -16
	I0328 01:08:25.738462 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:26.239473 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:26.739478 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:27.238883 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:27.738830 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:28.239281 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:28.738643 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:29.238703 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:29.739025 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:30.239127 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:30.739473 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:31.239461 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:31.739480 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:32.239525 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:32.738543 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:33.239468 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:33.739475 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:34.238558 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:34.739550 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:35.239400 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:35.738766 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:36.239384 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:36.738797 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:37.238736 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:37.739543 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:37.850963 1131600 kubeadm.go:1107] duration metric: took 12.342521507s to wait for elevateKubeSystemPrivileges
	W0328 01:08:37.851011 1131600 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 01:08:37.851024 1131600 kubeadm.go:393] duration metric: took 5m17.339661641s to StartCluster
	I0328 01:08:37.851048 1131600 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:08:37.851164 1131600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:08:37.853862 1131600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:08:37.854264 1131600 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 01:08:37.856170 1131600 out.go:177] * Verifying Kubernetes components...
	I0328 01:08:37.854341 1131600 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 01:08:37.854447 1131600 config.go:182] Loaded profile config "default-k8s-diff-port-283961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:08:37.857860 1131600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:08:37.857864 1131600 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-283961"
	I0328 01:08:37.857878 1131600 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-283961"
	I0328 01:08:37.857885 1131600 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-283961"
	I0328 01:08:37.857909 1131600 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-283961"
	I0328 01:08:37.857912 1131600 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-283961"
	W0328 01:08:37.857923 1131600 addons.go:243] addon storage-provisioner should already be in state true
	I0328 01:08:37.857928 1131600 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-283961"
	W0328 01:08:37.857941 1131600 addons.go:243] addon metrics-server should already be in state true
	I0328 01:08:37.857970 1131600 host.go:66] Checking if "default-k8s-diff-port-283961" exists ...
	I0328 01:08:37.857983 1131600 host.go:66] Checking if "default-k8s-diff-port-283961" exists ...
	I0328 01:08:37.858330 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.858363 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.858403 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.858436 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.858335 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.858509 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.881197 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32799
	I0328 01:08:37.881230 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35101
	I0328 01:08:37.881244 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0328 01:08:37.881857 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.881882 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.882021 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.882460 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.882482 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.882523 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.882540 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.882585 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.882601 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.882934 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.882992 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.883007 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.883239 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.883592 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.883620 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.883625 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.883644 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.887335 1131600 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-283961"
	W0328 01:08:37.887359 1131600 addons.go:243] addon default-storageclass should already be in state true
	I0328 01:08:37.887390 1131600 host.go:66] Checking if "default-k8s-diff-port-283961" exists ...
	I0328 01:08:37.887745 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.887779 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.901416 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37383
	I0328 01:08:37.901909 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.902530 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.902559 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.902967 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.903211 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.904529 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36633
	I0328 01:08:37.905034 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.905268 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:08:37.907486 1131600 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:08:37.905802 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.909062 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.909180 1131600 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:08:37.909196 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 01:08:37.909218 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:08:37.909555 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.909794 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.911251 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33539
	I0328 01:08:37.911845 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.911995 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:08:37.913838 1131600 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 01:08:37.912457 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.913039 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.913804 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:08:37.915256 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.915268 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:08:37.915288 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 01:08:37.915297 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.915303 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 01:08:37.915321 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:08:37.915492 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:08:37.915674 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:08:37.915894 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:08:37.916689 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.917364 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.917410 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.918302 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.918651 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:08:37.918678 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.918944 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:08:37.919117 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:08:37.919267 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:08:37.919386 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:08:37.935233 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45659
	I0328 01:08:37.935750 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.936283 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.936301 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.936691 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.936872 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.938736 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:08:37.939016 1131600 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 01:08:37.939042 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 01:08:37.939065 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:08:37.941653 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.941967 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:08:37.941991 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.942199 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:08:37.942405 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:08:37.942575 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:08:37.942761 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:08:38.109817 1131600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:08:38.134996 1131600 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-283961" to be "Ready" ...
	I0328 01:08:38.158252 1131600 node_ready.go:49] node "default-k8s-diff-port-283961" has status "Ready":"True"
	I0328 01:08:38.158286 1131600 node_ready.go:38] duration metric: took 23.249221ms for node "default-k8s-diff-port-283961" to be "Ready" ...
	I0328 01:08:38.158305 1131600 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:08:38.170391 1131600 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-gdv5x" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:38.277223 1131600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:08:38.299923 1131600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 01:08:38.300686 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 01:08:38.300707 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 01:08:38.355800 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 01:08:38.355837 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 01:08:38.464742 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:08:38.464769 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 01:08:38.542696 1131600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:08:39.644116 1131600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.344141889s)
	I0328 01:08:39.644184 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644189 1131600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.366934481s)
	I0328 01:08:39.644197 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644210 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644219 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644620 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.644644 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.644654 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644664 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644846 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.644865 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.644890 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644905 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644987 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.645004 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.645154 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.645171 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.708104 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.708143 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.708543 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.708567 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.739487 1131600 pod_ready.go:92] pod "coredns-76f75df574-gdv5x" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.739515 1131600 pod_ready.go:81] duration metric: took 1.569088177s for pod "coredns-76f75df574-gdv5x" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.739526 1131600 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-qzcfp" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.797314 1131600 pod_ready.go:92] pod "coredns-76f75df574-qzcfp" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.797347 1131600 pod_ready.go:81] duration metric: took 57.813218ms for pod "coredns-76f75df574-qzcfp" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.797366 1131600 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.830784 1131600 pod_ready.go:92] pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.830865 1131600 pod_ready.go:81] duration metric: took 33.488753ms for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.830886 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.852459 1131600 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.852489 1131600 pod_ready.go:81] duration metric: took 21.594748ms for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.852501 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.862630 1131600 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.862658 1131600 pod_ready.go:81] duration metric: took 10.149867ms for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.862674 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-js7j2" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.893124 1131600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.350363727s)
	I0328 01:08:39.893191 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.893216 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.893559 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Closing plugin on server side
	I0328 01:08:39.893568 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.893617 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.893634 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.893646 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.893985 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Closing plugin on server side
	I0328 01:08:39.893985 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.894013 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.894031 1131600 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-283961"
	I0328 01:08:39.896978 1131600 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0328 01:08:39.898636 1131600 addons.go:505] duration metric: took 2.044292782s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0328 01:08:40.138962 1131600 pod_ready.go:92] pod "kube-proxy-js7j2" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:40.138994 1131600 pod_ready.go:81] duration metric: took 276.313147ms for pod "kube-proxy-js7j2" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:40.139006 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:40.538892 1131600 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:40.538917 1131600 pod_ready.go:81] duration metric: took 399.903327ms for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:40.538925 1131600 pod_ready.go:38] duration metric: took 2.380606168s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:08:40.538943 1131600 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:08:40.539009 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:08:40.561639 1131600 api_server.go:72] duration metric: took 2.707321816s to wait for apiserver process to appear ...
	I0328 01:08:40.561681 1131600 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:08:40.561709 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:08:40.568521 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 200:
	ok
	I0328 01:08:40.570016 1131600 api_server.go:141] control plane version: v1.29.3
	I0328 01:08:40.570060 1131600 api_server.go:131] duration metric: took 8.369036ms to wait for apiserver health ...
	I0328 01:08:40.570071 1131600 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:08:39.696094 1130827 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.32965227s)
	I0328 01:08:39.696193 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:08:39.717556 1130827 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:08:39.730434 1130827 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:08:39.746521 1130827 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:08:39.746567 1130827 kubeadm.go:156] found existing configuration files:
	
	I0328 01:08:39.746644 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:08:39.758252 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:08:39.758352 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:08:39.771929 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:08:39.785312 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:08:39.785400 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:08:39.800685 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:08:39.814982 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:08:39.815073 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:08:39.828804 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:08:39.841984 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:08:39.842074 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:08:39.854502 1130827 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:08:40.089742 1130827 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:08:40.742900 1131600 system_pods.go:59] 9 kube-system pods found
	I0328 01:08:40.742938 1131600 system_pods.go:61] "coredns-76f75df574-gdv5x" [5b4b835c-ae9d-4eff-ab37-6ccb7e36a748] Running
	I0328 01:08:40.742945 1131600 system_pods.go:61] "coredns-76f75df574-qzcfp" [8e7bfa94-f249-4f7a-be7b-9a615810c956] Running
	I0328 01:08:40.742951 1131600 system_pods.go:61] "etcd-default-k8s-diff-port-283961" [eb26a2e2-882b-4bc0-9ca1-7fe88b1c6c7e] Running
	I0328 01:08:40.742958 1131600 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-283961" [1c31e7a3-507e-43ea-96cf-1d182a1e0875] Running
	I0328 01:08:40.742964 1131600 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-283961" [5bbc6650-b37c-4b10-bc6e-cceb214f8881] Running
	I0328 01:08:40.742968 1131600 system_pods.go:61] "kube-proxy-js7j2" [1f31a6ee-9417-4f4a-ba0b-fdbab6a9169d] Running
	I0328 01:08:40.742972 1131600 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-283961" [3a5691f7-d6d4-45e7-aea7-94fe1cfa541a] Running
	I0328 01:08:40.742980 1131600 system_pods.go:61] "metrics-server-57f55c9bc5-gkv67" [7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:08:40.742986 1131600 system_pods.go:61] "storage-provisioner" [cb80efe2-521f-45d5-84e7-f6dc216b4c6d] Running
	I0328 01:08:40.742998 1131600 system_pods.go:74] duration metric: took 172.918886ms to wait for pod list to return data ...
	I0328 01:08:40.743010 1131600 default_sa.go:34] waiting for default service account to be created ...
	I0328 01:08:40.939208 1131600 default_sa.go:45] found service account: "default"
	I0328 01:08:40.939255 1131600 default_sa.go:55] duration metric: took 196.220048ms for default service account to be created ...
	I0328 01:08:40.939266 1131600 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 01:08:41.144986 1131600 system_pods.go:86] 9 kube-system pods found
	I0328 01:08:41.145023 1131600 system_pods.go:89] "coredns-76f75df574-gdv5x" [5b4b835c-ae9d-4eff-ab37-6ccb7e36a748] Running
	I0328 01:08:41.145030 1131600 system_pods.go:89] "coredns-76f75df574-qzcfp" [8e7bfa94-f249-4f7a-be7b-9a615810c956] Running
	I0328 01:08:41.145034 1131600 system_pods.go:89] "etcd-default-k8s-diff-port-283961" [eb26a2e2-882b-4bc0-9ca1-7fe88b1c6c7e] Running
	I0328 01:08:41.145039 1131600 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-283961" [1c31e7a3-507e-43ea-96cf-1d182a1e0875] Running
	I0328 01:08:41.145043 1131600 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-283961" [5bbc6650-b37c-4b10-bc6e-cceb214f8881] Running
	I0328 01:08:41.145047 1131600 system_pods.go:89] "kube-proxy-js7j2" [1f31a6ee-9417-4f4a-ba0b-fdbab6a9169d] Running
	I0328 01:08:41.145051 1131600 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-283961" [3a5691f7-d6d4-45e7-aea7-94fe1cfa541a] Running
	I0328 01:08:41.145058 1131600 system_pods.go:89] "metrics-server-57f55c9bc5-gkv67" [7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:08:41.145062 1131600 system_pods.go:89] "storage-provisioner" [cb80efe2-521f-45d5-84e7-f6dc216b4c6d] Running
	I0328 01:08:41.145072 1131600 system_pods.go:126] duration metric: took 205.800485ms to wait for k8s-apps to be running ...
	I0328 01:08:41.145083 1131600 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:08:41.145131 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:08:41.163220 1131600 system_svc.go:56] duration metric: took 18.120266ms WaitForService to wait for kubelet
	I0328 01:08:41.163255 1131600 kubeadm.go:576] duration metric: took 3.308947131s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:08:41.163280 1131600 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:08:41.339219 1131600 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:08:41.339247 1131600 node_conditions.go:123] node cpu capacity is 2
	I0328 01:08:41.339292 1131600 node_conditions.go:105] duration metric: took 176.004328ms to run NodePressure ...
	I0328 01:08:41.339306 1131600 start.go:240] waiting for startup goroutines ...
	I0328 01:08:41.339317 1131600 start.go:245] waiting for cluster config update ...
	I0328 01:08:41.339334 1131600 start.go:254] writing updated cluster config ...
	I0328 01:08:41.339656 1131600 ssh_runner.go:195] Run: rm -f paused
	I0328 01:08:41.399111 1131600 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0328 01:08:41.401360 1131600 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-283961" cluster and "default" namespace by default
	I0328 01:08:49.653091 1130827 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-beta.0
	I0328 01:08:49.653205 1130827 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:08:49.653327 1130827 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:08:49.653468 1130827 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:08:49.653576 1130827 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:08:49.653666 1130827 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:08:49.656419 1130827 out.go:204]   - Generating certificates and keys ...
	I0328 01:08:49.656503 1130827 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:08:49.656583 1130827 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:08:49.656669 1130827 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:08:49.656775 1130827 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:08:49.656903 1130827 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:08:49.656973 1130827 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:08:49.657057 1130827 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:08:49.657138 1130827 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:08:49.657246 1130827 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:08:49.657362 1130827 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:08:49.657415 1130827 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:08:49.657510 1130827 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:08:49.657601 1130827 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:08:49.657713 1130827 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:08:49.657811 1130827 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:08:49.657900 1130827 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:08:49.657980 1130827 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:08:49.658074 1130827 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:08:49.658160 1130827 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:08:49.659588 1130827 out.go:204]   - Booting up control plane ...
	I0328 01:08:49.659669 1130827 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:08:49.659771 1130827 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:08:49.659855 1130827 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:08:49.659962 1130827 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:08:49.660075 1130827 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:08:49.660139 1130827 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:08:49.660309 1130827 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0328 01:08:49.660426 1130827 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0328 01:08:49.660518 1130827 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.594495ms
	I0328 01:08:49.660610 1130827 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0328 01:08:49.660691 1130827 kubeadm.go:309] [api-check] The API server is healthy after 5.502996727s
	I0328 01:08:49.660830 1130827 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 01:08:49.660975 1130827 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 01:08:49.661028 1130827 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 01:08:49.661198 1130827 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-248059 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 01:08:49.661283 1130827 kubeadm.go:309] [bootstrap-token] Using token: 4jnfa0.q3dre6ogqbxtw8j0
	I0328 01:08:49.662907 1130827 out.go:204]   - Configuring RBAC rules ...
	I0328 01:08:49.663014 1130827 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 01:08:49.663090 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 01:08:49.663239 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 01:08:49.663379 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 01:08:49.663484 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 01:08:49.663576 1130827 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 01:08:49.663688 1130827 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 01:08:49.663750 1130827 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 01:08:49.663811 1130827 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 01:08:49.663820 1130827 kubeadm.go:309] 
	I0328 01:08:49.663871 1130827 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 01:08:49.663877 1130827 kubeadm.go:309] 
	I0328 01:08:49.663976 1130827 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 01:08:49.663984 1130827 kubeadm.go:309] 
	I0328 01:08:49.664004 1130827 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 01:08:49.664080 1130827 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 01:08:49.664144 1130827 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 01:08:49.664151 1130827 kubeadm.go:309] 
	I0328 01:08:49.664202 1130827 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 01:08:49.664209 1130827 kubeadm.go:309] 
	I0328 01:08:49.664246 1130827 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 01:08:49.664252 1130827 kubeadm.go:309] 
	I0328 01:08:49.664301 1130827 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 01:08:49.664370 1130827 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 01:08:49.664436 1130827 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 01:08:49.664444 1130827 kubeadm.go:309] 
	I0328 01:08:49.664515 1130827 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 01:08:49.664600 1130827 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 01:08:49.664607 1130827 kubeadm.go:309] 
	I0328 01:08:49.664678 1130827 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4jnfa0.q3dre6ogqbxtw8j0 \
	I0328 01:08:49.664764 1130827 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 \
	I0328 01:08:49.664783 1130827 kubeadm.go:309] 	--control-plane 
	I0328 01:08:49.664789 1130827 kubeadm.go:309] 
	I0328 01:08:49.664856 1130827 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 01:08:49.664863 1130827 kubeadm.go:309] 
	I0328 01:08:49.664938 1130827 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4jnfa0.q3dre6ogqbxtw8j0 \
	I0328 01:08:49.665073 1130827 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 
	I0328 01:08:49.665117 1130827 cni.go:84] Creating CNI manager for ""
	I0328 01:08:49.665130 1130827 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:08:49.667556 1130827 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:08:49.668776 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:08:49.680262 1130827 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:08:49.701490 1130827 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 01:08:49.701557 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:49.701606 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-248059 minikube.k8s.io/updated_at=2024_03_28T01_08_49_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=no-preload-248059 minikube.k8s.io/primary=true
	I0328 01:08:49.734009 1130827 ops.go:34] apiserver oom_adj: -16
	I0328 01:08:49.901866 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:50.402635 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:50.902480 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:51.402417 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:51.902253 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:52.402411 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:52.901926 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:53.402394 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:53.902738 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:54.401878 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:54.901920 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:55.401878 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:55.902140 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:56.402863 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:56.901970 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:57.402088 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:57.901869 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:58.402056 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:58.902333 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:59.402753 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:59.902930 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:00.402623 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:00.901863 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:01.402264 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:01.902054 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:02.402212 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:02.503310 1130827 kubeadm.go:1107] duration metric: took 12.80181586s to wait for elevateKubeSystemPrivileges
	W0328 01:09:02.503352 1130827 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 01:09:02.503362 1130827 kubeadm.go:393] duration metric: took 5m10.46697508s to StartCluster
	I0328 01:09:02.503380 1130827 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:09:02.503482 1130827 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:09:02.505909 1130827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:09:02.506302 1130827 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 01:09:02.508103 1130827 out.go:177] * Verifying Kubernetes components...
	I0328 01:09:02.506385 1130827 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 01:09:02.506502 1130827 config.go:182] Loaded profile config "no-preload-248059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0328 01:09:02.509509 1130827 addons.go:69] Setting default-storageclass=true in profile "no-preload-248059"
	I0328 01:09:02.509519 1130827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:09:02.509517 1130827 addons.go:69] Setting metrics-server=true in profile "no-preload-248059"
	I0328 01:09:02.509542 1130827 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-248059"
	I0328 01:09:02.509559 1130827 addons.go:234] Setting addon metrics-server=true in "no-preload-248059"
	W0328 01:09:02.509580 1130827 addons.go:243] addon metrics-server should already be in state true
	I0328 01:09:02.509509 1130827 addons.go:69] Setting storage-provisioner=true in profile "no-preload-248059"
	I0328 01:09:02.509623 1130827 host.go:66] Checking if "no-preload-248059" exists ...
	I0328 01:09:02.509636 1130827 addons.go:234] Setting addon storage-provisioner=true in "no-preload-248059"
	W0328 01:09:02.509690 1130827 addons.go:243] addon storage-provisioner should already be in state true
	I0328 01:09:02.509729 1130827 host.go:66] Checking if "no-preload-248059" exists ...
	I0328 01:09:02.510005 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.510009 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.510049 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.510050 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.510053 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.510085 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.528082 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42381
	I0328 01:09:02.528124 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43929
	I0328 01:09:02.528714 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.528738 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.529081 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34573
	I0328 01:09:02.529378 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.529397 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.529444 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.529464 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.529465 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.529791 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.529849 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.529948 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.529965 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.529950 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.530389 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.530437 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.530472 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.531004 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.531058 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.534108 1130827 addons.go:234] Setting addon default-storageclass=true in "no-preload-248059"
	W0328 01:09:02.534134 1130827 addons.go:243] addon default-storageclass should already be in state true
	I0328 01:09:02.534173 1130827 host.go:66] Checking if "no-preload-248059" exists ...
	I0328 01:09:02.534563 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.534592 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.546812 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0328 01:09:02.547478 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.547999 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.548031 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.548370 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.548616 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.549185 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37367
	I0328 01:09:02.549663 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.550365 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.550390 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.550772 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.550787 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:09:02.550977 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.553075 1130827 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 01:09:02.554750 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 01:09:02.554769 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 01:09:02.552577 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:09:02.554788 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:09:02.553550 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45153
	I0328 01:09:02.556534 1130827 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:09:02.555339 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.558480 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.563734 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:09:02.563773 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.563823 1130827 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:09:02.563846 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 01:09:02.563876 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:09:02.564584 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.564604 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.564633 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:09:02.564933 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:09:02.565025 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.565458 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:09:02.565593 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.565617 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.565745 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:09:02.569766 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.570083 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:09:02.570104 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.570413 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:09:02.570778 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:09:02.570975 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:09:02.571142 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:09:02.589503 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41085
	I0328 01:09:02.590061 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.590641 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.590661 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.591065 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.591310 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.593270 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:09:02.593665 1130827 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 01:09:02.593696 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 01:09:02.593717 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:09:02.596796 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.597270 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:09:02.597298 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.597460 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:09:02.597637 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:09:02.597807 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:09:02.597937 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:09:02.705837 1130827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:09:02.727955 1130827 node_ready.go:35] waiting up to 6m0s for node "no-preload-248059" to be "Ready" ...
	I0328 01:09:02.737291 1130827 node_ready.go:49] node "no-preload-248059" has status "Ready":"True"
	I0328 01:09:02.737325 1130827 node_ready.go:38] duration metric: took 9.337953ms for node "no-preload-248059" to be "Ready" ...
	I0328 01:09:02.737338 1130827 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:09:02.741939 1130827 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.749157 1130827 pod_ready.go:92] pod "etcd-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.749192 1130827 pod_ready.go:81] duration metric: took 7.224004ms for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.749205 1130827 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.755106 1130827 pod_ready.go:92] pod "kube-apiserver-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.755132 1130827 pod_ready.go:81] duration metric: took 5.919446ms for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.755144 1130827 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.761123 1130827 pod_ready.go:92] pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.761171 1130827 pod_ready.go:81] duration metric: took 6.017877ms for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.761187 1130827 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.773958 1130827 pod_ready.go:92] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.773983 1130827 pod_ready.go:81] duration metric: took 12.787671ms for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.773991 1130827 pod_ready.go:38] duration metric: took 36.637128ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:09:02.774008 1130827 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:09:02.774068 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:09:02.794342 1130827 api_server.go:72] duration metric: took 287.989042ms to wait for apiserver process to appear ...
	I0328 01:09:02.794376 1130827 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:09:02.794408 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:09:02.826957 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0328 01:09:02.830377 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 01:09:02.830399 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 01:09:02.837250 1130827 api_server.go:141] control plane version: v1.30.0-beta.0
	I0328 01:09:02.837284 1130827 api_server.go:131] duration metric: took 42.898933ms to wait for apiserver health ...
	I0328 01:09:02.837295 1130827 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:09:02.838515 1130827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:09:02.865482 1130827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 01:09:02.880510 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 01:09:02.880544 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 01:09:02.933895 1130827 system_pods.go:59] 4 kube-system pods found
	I0328 01:09:02.933958 1130827 system_pods.go:61] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:02.933967 1130827 system_pods.go:61] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:02.933973 1130827 system_pods.go:61] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:02.933977 1130827 system_pods.go:61] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:02.933984 1130827 system_pods.go:74] duration metric: took 96.68223ms to wait for pod list to return data ...
	I0328 01:09:02.933994 1130827 default_sa.go:34] waiting for default service account to be created ...
	I0328 01:09:02.939507 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:09:02.939538 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 01:09:02.994042 1130827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:09:03.160934 1130827 default_sa.go:45] found service account: "default"
	I0328 01:09:03.160971 1130827 default_sa.go:55] duration metric: took 226.968222ms for default service account to be created ...
	I0328 01:09:03.160982 1130827 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 01:09:03.396511 1130827 system_pods.go:86] 7 kube-system pods found
	I0328 01:09:03.396549 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending
	I0328 01:09:03.396554 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending
	I0328 01:09:03.396558 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:03.396562 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:03.396567 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:03.396575 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:09:03.396580 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:03.396601 1130827 retry.go:31] will retry after 288.008379ms: missing components: kube-dns, kube-proxy
	I0328 01:09:03.697645 1130827 system_pods.go:86] 7 kube-system pods found
	I0328 01:09:03.697688 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:03.697697 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:03.697704 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:03.697710 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:03.697720 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:03.697726 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:09:03.697730 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:03.697750 1130827 retry.go:31] will retry after 356.016468ms: missing components: kube-dns, kube-proxy
	I0328 01:09:03.962535 1130827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.097008499s)
	I0328 01:09:03.962614 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.962633 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.963093 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.963119 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.963129 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.963139 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.963406 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.963424 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.964335 1130827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.125788348s)
	I0328 01:09:03.964375 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.964384 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.964712 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:03.964740 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.964763 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.964776 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.964785 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.965054 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.965125 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.965142 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:04.002303 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:04.002340 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:04.002744 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:04.002766 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:04.062017 1130827 system_pods.go:86] 8 kube-system pods found
	I0328 01:09:04.062096 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.062111 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.062121 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:04.062132 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:04.062158 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:04.062172 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:09:04.062180 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:04.062192 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:09:04.062220 1130827 retry.go:31] will retry after 477.684804ms: missing components: kube-dns, kube-proxy
	I0328 01:09:04.574661 1130827 system_pods.go:86] 9 kube-system pods found
	I0328 01:09:04.574716 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.574728 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.574740 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:04.574748 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:04.574754 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:04.574761 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Running
	I0328 01:09:04.574768 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:04.574778 1130827 system_pods.go:89] "metrics-server-569cc877fc-frc5k" [d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:09:04.574799 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:09:04.574821 1130827 retry.go:31] will retry after 460.13955ms: missing components: kube-dns
	I0328 01:09:04.692708 1130827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.69861394s)
	I0328 01:09:04.692782 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:04.692798 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:04.693323 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:04.693366 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:04.693376 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:04.693384 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:04.693320 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:04.693818 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:04.693865 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:04.693879 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:04.693895 1130827 addons.go:470] Verifying addon metrics-server=true in "no-preload-248059"
	I0328 01:09:04.696310 1130827 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0328 01:09:04.025791 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:09:04.026055 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:09:04.026065 1131323 kubeadm.go:309] 
	I0328 01:09:04.026124 1131323 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0328 01:09:04.026172 1131323 kubeadm.go:309] 		timed out waiting for the condition
	I0328 01:09:04.026181 1131323 kubeadm.go:309] 
	I0328 01:09:04.026221 1131323 kubeadm.go:309] 	This error is likely caused by:
	I0328 01:09:04.026279 1131323 kubeadm.go:309] 		- The kubelet is not running
	I0328 01:09:04.026401 1131323 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0328 01:09:04.026411 1131323 kubeadm.go:309] 
	I0328 01:09:04.026529 1131323 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0328 01:09:04.026586 1131323 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0328 01:09:04.026632 1131323 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0328 01:09:04.026640 1131323 kubeadm.go:309] 
	I0328 01:09:04.026758 1131323 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0328 01:09:04.026884 1131323 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0328 01:09:04.026902 1131323 kubeadm.go:309] 
	I0328 01:09:04.027061 1131323 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0328 01:09:04.027222 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0328 01:09:04.027335 1131323 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0328 01:09:04.027429 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0328 01:09:04.027537 1131323 kubeadm.go:309] 
	I0328 01:09:04.029027 1131323 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:09:04.029164 1131323 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0328 01:09:04.029284 1131323 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0328 01:09:04.029477 1131323 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0328 01:09:04.029545 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:09:04.543275 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:09:04.562572 1131323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:09:04.577013 1131323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:09:04.577040 1131323 kubeadm.go:156] found existing configuration files:
	
	I0328 01:09:04.577102 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:09:04.590795 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:09:04.590885 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:09:04.604227 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:09:04.616720 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:09:04.616818 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:09:04.630095 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:09:04.643166 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:09:04.643259 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:09:04.658084 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:09:04.671786 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:09:04.671874 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:09:04.685852 1131323 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:09:04.779013 1131323 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0328 01:09:04.779113 1131323 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:09:04.964178 1131323 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:09:04.964317 1131323 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:09:04.964463 1131323 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:09:05.181712 1131323 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:09:05.183644 1131323 out.go:204]   - Generating certificates and keys ...
	I0328 01:09:05.183759 1131323 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:09:05.183851 1131323 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:09:05.183962 1131323 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:09:05.184042 1131323 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:09:05.184156 1131323 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:09:05.184244 1131323 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:09:05.184337 1131323 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:09:05.184424 1131323 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:09:05.184535 1131323 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:09:05.184633 1131323 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:09:05.184683 1131323 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:09:05.184758 1131323 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:09:04.698039 1130827 addons.go:505] duration metric: took 2.191652421s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0328 01:09:05.044303 1130827 system_pods.go:86] 9 kube-system pods found
	I0328 01:09:05.044340 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:05.044348 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:05.044354 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:05.044360 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:05.044366 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:05.044369 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Running
	I0328 01:09:05.044373 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:05.044378 1130827 system_pods.go:89] "metrics-server-569cc877fc-frc5k" [d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:09:05.044387 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:09:05.044406 1130827 retry.go:31] will retry after 486.01075ms: missing components: kube-dns
	I0328 01:09:05.539158 1130827 system_pods.go:86] 9 kube-system pods found
	I0328 01:09:05.539204 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Running
	I0328 01:09:05.539213 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Running
	I0328 01:09:05.539219 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:05.539226 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:05.539232 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:05.539238 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Running
	I0328 01:09:05.539244 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:05.539255 1130827 system_pods.go:89] "metrics-server-569cc877fc-frc5k" [d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:09:05.539260 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Running
	I0328 01:09:05.539274 1130827 system_pods.go:126] duration metric: took 2.37828469s to wait for k8s-apps to be running ...
	I0328 01:09:05.539292 1130827 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:09:05.539362 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:09:05.560593 1130827 system_svc.go:56] duration metric: took 21.288819ms WaitForService to wait for kubelet
	I0328 01:09:05.560628 1130827 kubeadm.go:576] duration metric: took 3.054281955s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:09:05.560657 1130827 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:09:05.564453 1130827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:09:05.564489 1130827 node_conditions.go:123] node cpu capacity is 2
	I0328 01:09:05.564502 1130827 node_conditions.go:105] duration metric: took 3.837449ms to run NodePressure ...
	I0328 01:09:05.564517 1130827 start.go:240] waiting for startup goroutines ...
	I0328 01:09:05.564527 1130827 start.go:245] waiting for cluster config update ...
	I0328 01:09:05.564542 1130827 start.go:254] writing updated cluster config ...
	I0328 01:09:05.564843 1130827 ssh_runner.go:195] Run: rm -f paused
	I0328 01:09:05.623218 1130827 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-beta.0 (minor skew: 1)
	I0328 01:09:05.625408 1130827 out.go:177] * Done! kubectl is now configured to use "no-preload-248059" cluster and "default" namespace by default
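	A minimal sketch of re-checking the state this run waited for, assuming the "no-preload-248059" kubectl context reported above and a kubectl binary on the host (not part of the captured run):
	
		kubectl --context no-preload-248059 get pods -n kube-system   # coredns, kube-proxy, metrics-server and storage-provisioner pods as listed in the waits above
		kubectl --context no-preload-248059 get --raw /healthz        # same healthz endpoint the apiserver check hit on 192.168.61.107:8443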
	I0328 01:09:05.587190 1131323 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:09:05.923219 1131323 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:09:06.087945 1131323 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:09:06.245638 1131323 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:09:06.266195 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:09:06.267461 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:09:06.267551 1131323 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:09:06.434155 1131323 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:09:06.436300 1131323 out.go:204]   - Booting up control plane ...
	I0328 01:09:06.436447 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:09:06.446573 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:09:06.447461 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:09:06.448313 1131323 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:09:06.450917 1131323 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:09:46.453199 1131323 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0328 01:09:46.453386 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:09:46.453643 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:09:51.454402 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:09:51.454665 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:10:01.455189 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:10:01.455417 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:10:21.456491 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:10:21.456726 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:11:01.456972 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:11:01.457256 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:11:01.457269 1131323 kubeadm.go:309] 
	I0328 01:11:01.457310 1131323 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0328 01:11:01.457404 1131323 kubeadm.go:309] 		timed out waiting for the condition
	I0328 01:11:01.457441 1131323 kubeadm.go:309] 
	I0328 01:11:01.457492 1131323 kubeadm.go:309] 	This error is likely caused by:
	I0328 01:11:01.457550 1131323 kubeadm.go:309] 		- The kubelet is not running
	I0328 01:11:01.457696 1131323 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0328 01:11:01.457708 1131323 kubeadm.go:309] 
	I0328 01:11:01.457856 1131323 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0328 01:11:01.457906 1131323 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0328 01:11:01.457935 1131323 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0328 01:11:01.457943 1131323 kubeadm.go:309] 
	I0328 01:11:01.458033 1131323 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0328 01:11:01.458139 1131323 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0328 01:11:01.458155 1131323 kubeadm.go:309] 
	I0328 01:11:01.458331 1131323 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0328 01:11:01.458483 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0328 01:11:01.458594 1131323 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0328 01:11:01.458707 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0328 01:11:01.458718 1131323 kubeadm.go:309] 
	I0328 01:11:01.459597 1131323 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:11:01.459737 1131323 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0328 01:11:01.459822 1131323 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0328 01:11:01.459962 1131323 kubeadm.go:393] duration metric: took 7m59.227261729s to StartCluster
	I0328 01:11:01.460023 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:11:01.460167 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:11:01.522644 1131323 cri.go:89] found id: ""
	I0328 01:11:01.522687 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.522700 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:11:01.522710 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:11:01.522782 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:11:01.567898 1131323 cri.go:89] found id: ""
	I0328 01:11:01.567928 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.567937 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:11:01.567945 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:11:01.568005 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:11:01.604782 1131323 cri.go:89] found id: ""
	I0328 01:11:01.604810 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.604819 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:11:01.604825 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:11:01.604935 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:11:01.642875 1131323 cri.go:89] found id: ""
	I0328 01:11:01.642908 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.642920 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:11:01.642929 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:11:01.642993 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:11:01.682186 1131323 cri.go:89] found id: ""
	I0328 01:11:01.682216 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.682223 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:11:01.682241 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:11:01.682312 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:11:01.720654 1131323 cri.go:89] found id: ""
	I0328 01:11:01.720689 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.720697 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:11:01.720704 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:11:01.720759 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:11:01.757340 1131323 cri.go:89] found id: ""
	I0328 01:11:01.757372 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.757383 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:11:01.757392 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:11:01.757462 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:11:01.797426 1131323 cri.go:89] found id: ""
	I0328 01:11:01.797462 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.797473 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:11:01.797488 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:11:01.797506 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:11:01.859582 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:11:01.859623 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:11:01.876027 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:11:01.876073 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:11:01.966513 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:11:01.966539 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:11:01.966557 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:11:02.084853 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:11:02.084894 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0328 01:11:02.127221 1131323 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0328 01:11:02.127288 1131323 out.go:239] * 
	W0328 01:11:02.127417 1131323 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0328 01:11:02.127456 1131323 out.go:239] * 
	W0328 01:11:02.128313 1131323 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 01:11:02.131916 1131323 out.go:177] 
	W0328 01:11:02.133288 1131323 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0328 01:11:02.133351 1131323 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0328 01:11:02.133381 1131323 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0328 01:11:02.134991 1131323 out.go:177] 
	
	
	==> CRI-O <==
	Mar 28 01:11:04 old-k8s-version-986088 crio[655]: time="2024-03-28 01:11:04.057518153Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711588264057494963,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf7d3033-d594-4123-a9fa-43733c302abb name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:11:04 old-k8s-version-986088 crio[655]: time="2024-03-28 01:11:04.058121367Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dcb835a7-291c-43f9-b966-833d6e2485c6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:11:04 old-k8s-version-986088 crio[655]: time="2024-03-28 01:11:04.058193344Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dcb835a7-291c-43f9-b966-833d6e2485c6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:11:04 old-k8s-version-986088 crio[655]: time="2024-03-28 01:11:04.058226524Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=dcb835a7-291c-43f9-b966-833d6e2485c6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:11:04 old-k8s-version-986088 crio[655]: time="2024-03-28 01:11:04.095706524Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ee31806f-ac30-441e-9cc1-c9d77834c43f name=/runtime.v1.RuntimeService/Version
	Mar 28 01:11:04 old-k8s-version-986088 crio[655]: time="2024-03-28 01:11:04.095816549Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ee31806f-ac30-441e-9cc1-c9d77834c43f name=/runtime.v1.RuntimeService/Version
	Mar 28 01:11:04 old-k8s-version-986088 crio[655]: time="2024-03-28 01:11:04.097292038Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e599893b-4bd4-4540-b49a-b0f2e752c724 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:11:04 old-k8s-version-986088 crio[655]: time="2024-03-28 01:11:04.097799692Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711588264097777074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e599893b-4bd4-4540-b49a-b0f2e752c724 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:11:04 old-k8s-version-986088 crio[655]: time="2024-03-28 01:11:04.098237746Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d166c4d3-a504-4a5d-a6c7-bb101bb479cb name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:11:04 old-k8s-version-986088 crio[655]: time="2024-03-28 01:11:04.098312951Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d166c4d3-a504-4a5d-a6c7-bb101bb479cb name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:11:04 old-k8s-version-986088 crio[655]: time="2024-03-28 01:11:04.098347511Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d166c4d3-a504-4a5d-a6c7-bb101bb479cb name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:11:04 old-k8s-version-986088 crio[655]: time="2024-03-28 01:11:04.134144491Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a5606c67-5a37-4555-bb67-f16a042b57d6 name=/runtime.v1.RuntimeService/Version
	Mar 28 01:11:04 old-k8s-version-986088 crio[655]: time="2024-03-28 01:11:04.134255664Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a5606c67-5a37-4555-bb67-f16a042b57d6 name=/runtime.v1.RuntimeService/Version
	Mar 28 01:11:04 old-k8s-version-986088 crio[655]: time="2024-03-28 01:11:04.135476646Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7e9f281f-f866-40a3-b979-36422d009968 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:11:04 old-k8s-version-986088 crio[655]: time="2024-03-28 01:11:04.135864633Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711588264135841505,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e9f281f-f866-40a3-b979-36422d009968 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:11:04 old-k8s-version-986088 crio[655]: time="2024-03-28 01:11:04.136419543Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b03d8811-aead-45d1-827f-7679e5c388d5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:11:04 old-k8s-version-986088 crio[655]: time="2024-03-28 01:11:04.136516972Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b03d8811-aead-45d1-827f-7679e5c388d5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:11:04 old-k8s-version-986088 crio[655]: time="2024-03-28 01:11:04.136559581Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b03d8811-aead-45d1-827f-7679e5c388d5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:11:04 old-k8s-version-986088 crio[655]: time="2024-03-28 01:11:04.177239575Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e93bfaef-dae2-4d01-ac9a-cc59a2baa695 name=/runtime.v1.RuntimeService/Version
	Mar 28 01:11:04 old-k8s-version-986088 crio[655]: time="2024-03-28 01:11:04.177335974Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e93bfaef-dae2-4d01-ac9a-cc59a2baa695 name=/runtime.v1.RuntimeService/Version
	Mar 28 01:11:04 old-k8s-version-986088 crio[655]: time="2024-03-28 01:11:04.178694224Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7099be5a-f920-4ead-88c3-1ac270cd47bb name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:11:04 old-k8s-version-986088 crio[655]: time="2024-03-28 01:11:04.179071715Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711588264179049471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7099be5a-f920-4ead-88c3-1ac270cd47bb name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:11:04 old-k8s-version-986088 crio[655]: time="2024-03-28 01:11:04.179562171Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37d10319-4261-4e70-bffa-e3b2cc4a404a name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:11:04 old-k8s-version-986088 crio[655]: time="2024-03-28 01:11:04.179647692Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37d10319-4261-4e70-bffa-e3b2cc4a404a name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:11:04 old-k8s-version-986088 crio[655]: time="2024-03-28 01:11:04.179727742Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=37d10319-4261-4e70-bffa-e3b2cc4a404a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar28 01:02] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054089] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043023] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.677467] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.716356] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.626498] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.938962] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +0.065252] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.078257] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.191570] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.159223] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.285028] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[Mar28 01:03] systemd-fstab-generator[845]: Ignoring "noauto" option for root device
	[  +0.069643] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.129611] systemd-fstab-generator[969]: Ignoring "noauto" option for root device
	[ +11.468422] kauditd_printk_skb: 46 callbacks suppressed
	[Mar28 01:07] systemd-fstab-generator[4979]: Ignoring "noauto" option for root device
	[Mar28 01:09] systemd-fstab-generator[5264]: Ignoring "noauto" option for root device
	[  +0.093089] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:11:04 up 8 min,  0 users,  load average: 0.18, 0.12, 0.06
	Linux old-k8s-version-986088 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 28 01:11:01 old-k8s-version-986088 kubelet[5440]: goroutine 149 [runnable]:
	Mar 28 01:11:01 old-k8s-version-986088 kubelet[5440]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000cd71c0, 0xc000cf5b80)
	Mar 28 01:11:01 old-k8s-version-986088 kubelet[5440]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71
	Mar 28 01:11:01 old-k8s-version-986088 kubelet[5440]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Mar 28 01:11:01 old-k8s-version-986088 kubelet[5440]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Mar 28 01:11:01 old-k8s-version-986088 kubelet[5440]: goroutine 150 [runnable]:
	Mar 28 01:11:01 old-k8s-version-986088 kubelet[5440]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000cd71c0, 0xc000cf5ba0)
	Mar 28 01:11:01 old-k8s-version-986088 kubelet[5440]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71
	Mar 28 01:11:01 old-k8s-version-986088 kubelet[5440]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Mar 28 01:11:01 old-k8s-version-986088 kubelet[5440]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Mar 28 01:11:01 old-k8s-version-986088 kubelet[5440]: goroutine 151 [runnable]:
	Mar 28 01:11:01 old-k8s-version-986088 kubelet[5440]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc000769a70)
	Mar 28 01:11:01 old-k8s-version-986088 kubelet[5440]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129
	Mar 28 01:11:01 old-k8s-version-986088 kubelet[5440]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Mar 28 01:11:01 old-k8s-version-986088 kubelet[5440]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Mar 28 01:11:01 old-k8s-version-986088 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 28 01:11:01 old-k8s-version-986088 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 28 01:11:02 old-k8s-version-986088 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Mar 28 01:11:02 old-k8s-version-986088 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 28 01:11:02 old-k8s-version-986088 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 28 01:11:02 old-k8s-version-986088 kubelet[5508]: I0328 01:11:02.265616    5508 server.go:416] Version: v1.20.0
	Mar 28 01:11:02 old-k8s-version-986088 kubelet[5508]: I0328 01:11:02.265983    5508 server.go:837] Client rotation is on, will bootstrap in background
	Mar 28 01:11:02 old-k8s-version-986088 kubelet[5508]: I0328 01:11:02.268104    5508 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 28 01:11:02 old-k8s-version-986088 kubelet[5508]: W0328 01:11:02.269125    5508 manager.go:159] Cannot detect current cgroup on cgroup v2
	Mar 28 01:11:02 old-k8s-version-986088 kubelet[5508]: I0328 01:11:02.269443    5508 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-986088 -n old-k8s-version-986088
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-986088 -n old-k8s-version-986088: exit status 2 (262.039672ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-986088" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (720.50s)
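The kubeadm output captured above ends with its own troubleshooting guidance (check the kubelet with systemctl/journalctl, list control-plane containers with crictl) and minikube's suggestion to retry with the kubelet cgroup driver set to systemd. Below is a minimal bash sketch of that advice; the profile name, driver, runtime and Kubernetes version are copied from the start command recorded in this report's Audit table, and whether the cgroup-driver override actually resolves this particular failure is not verified here.

    # Inspect the kubelet and any crashed control-plane containers inside the node
    # (the same commands kubeadm suggests in the output above).
    minikube -p old-k8s-version-986088 ssh -- 'sudo systemctl status kubelet'
    minikube -p old-k8s-version-986088 ssh -- 'sudo journalctl -xeu kubelet | tail -n 100'
    minikube -p old-k8s-version-986088 ssh -- 'sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'

    # Retry the start with the kubelet cgroup driver pinned to systemd, as the
    # K8S_KUBELET_NOT_RUNNING suggestion recommends.
    minikube start -p old-k8s-version-986088 \
      --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd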

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-283961 -n default-k8s-diff-port-283961
E0328 00:59:54.664995 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/enable-default-cni-443419/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-283961 -n default-k8s-diff-port-283961: exit status 3 (3.19994993s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0328 00:59:56.394608 1131501 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host
	E0328 00:59:56.394633 1131501 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-283961 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0328 00:59:58.409209 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/bridge-443419/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-283961 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.155566985s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-283961 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-283961 -n default-k8s-diff-port-283961
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-283961 -n default-k8s-diff-port-283961: exit status 3 (3.060164067s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0328 01:00:05.610695 1131570 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host
	E0328 01:00:05.610720 1131570 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-283961" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)
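The assertion that fails above is the post-stop check: after the profile is stopped, the host status is expected to read "Stopped" before the dashboard addon is re-enabled, but SSH to 192.168.39.224:22 is unreachable, so both probes error out. The sketch below runs the same two steps by hand, using only commands that appear in this section; `minikube` here stands in for the out/minikube-linux-amd64 binary the harness invokes.

    # Post-stop, the Host field of `minikube status` should report "Stopped".
    minikube status --format={{.Host}} -p default-k8s-diff-port-283961

    # Enable the dashboard addon with the same MetricsScraper image override the
    # test uses; this requires the profile's VM to be reachable over SSH again.
    minikube addons enable dashboard -p default-k8s-diff-port-283961 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4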

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0328 01:08:24.182902 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/client.crt: no such file or directory
E0328 01:08:36.488175 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/bridge-443419/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-808809 -n embed-certs-808809
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-28 01:16:58.066120452 +0000 UTC m=+6249.317599381
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
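The wait above polls for a pod labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace for up to 9 minutes. Roughly the same check can be made by hand with kubectl; this sketch assumes the kubeconfig context carries the profile name embed-certs-808809, which is minikube's default behaviour.

    # List the dashboard pods the test is waiting for.
    kubectl --context embed-certs-808809 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

    # Block until they are Ready, with the same 9-minute budget as the test.
    kubectl --context embed-certs-808809 -n kubernetes-dashboard wait pod \
      -l k8s-app=kubernetes-dashboard --for=condition=ready --timeout=9m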
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-808809 -n embed-certs-808809
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-808809 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-808809 logs -n 25: (2.159939431s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| stop    | -p no-preload-248059                                   | no-preload-248059            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-808809            | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-808809                                  | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p newest-cni-013642             | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p newest-cni-013642                  | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p newest-cni-013642 --memory=2200 --alsologtostderr   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:56 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |                |                     |                     |
	| image   | newest-cni-013642 image list                           | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	|         | --format=json                                          |                              |         |                |                     |                     |
	| pause   | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| unpause | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| delete  | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	| delete  | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	| start   | -p                                                     | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:57 UTC |
	|         | default-k8s-diff-port-283961                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-986088        | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-248059                  | no-preload-248059            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-283961  | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC | 28 Mar 24 00:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | default-k8s-diff-port-283961                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| start   | -p no-preload-248059 --memory=2200                     | no-preload-248059            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC | 28 Mar 24 01:09 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-808809                 | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-808809                                  | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC | 28 Mar 24 01:07 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-986088                              | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC | 28 Mar 24 00:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-986088             | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC | 28 Mar 24 00:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-986088                              | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-283961       | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 01:00 UTC | 28 Mar 24 01:08 UTC |
	|         | default-k8s-diff-port-283961                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 01:00:05
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 01:00:05.675380 1131600 out.go:291] Setting OutFile to fd 1 ...
	I0328 01:00:05.675675 1131600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 01:00:05.675710 1131600 out.go:304] Setting ErrFile to fd 2...
	I0328 01:00:05.675718 1131600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 01:00:05.676017 1131600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 01:00:05.676919 1131600 out.go:298] Setting JSON to false
	I0328 01:00:05.678046 1131600 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":31303,"bootTime":1711556303,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0328 01:00:05.678129 1131600 start.go:139] virtualization: kvm guest
	I0328 01:00:05.681128 1131600 out.go:177] * [default-k8s-diff-port-283961] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0328 01:00:05.683139 1131600 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 01:00:05.683129 1131600 notify.go:220] Checking for updates...
	I0328 01:00:05.685082 1131600 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 01:00:05.686765 1131600 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:00:05.688389 1131600 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	I0328 01:00:05.690187 1131600 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0328 01:00:05.691887 1131600 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 01:00:05.693775 1131600 config.go:182] Loaded profile config "default-k8s-diff-port-283961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:00:05.694270 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:00:05.694323 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:00:05.709757 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35333
	I0328 01:00:05.710275 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:00:05.710875 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:00:05.710900 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:00:05.711323 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:00:05.711531 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:00:05.711893 1131600 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 01:00:05.712342 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:00:05.712392 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:00:05.727583 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44249
	I0328 01:00:05.728107 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:00:05.728595 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:00:05.728625 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:00:05.728945 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:00:05.729170 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:00:05.763895 1131600 out.go:177] * Using the kvm2 driver based on existing profile
	I0328 01:00:05.765397 1131600 start.go:297] selected driver: kvm2
	I0328 01:00:05.765431 1131600 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:00:05.765564 1131600 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 01:00:05.766282 1131600 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 01:00:05.766391 1131600 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18485-1069254/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0328 01:00:05.783130 1131600 install.go:137] /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0328 01:00:05.783602 1131600 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:00:05.783724 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:00:05.783745 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:00:05.783795 1131600 start.go:340] cluster config:
	{Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:00:05.783949 1131600 iso.go:125] acquiring lock: {Name:mk3da1fa7d63a581c817f327d86a827697458cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 01:00:05.785871 1131600 out.go:177] * Starting "default-k8s-diff-port-283961" primary control-plane node in "default-k8s-diff-port-283961" cluster
	I0328 01:00:02.570474 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:05.787210 1131600 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 01:00:05.787259 1131600 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0328 01:00:05.787272 1131600 cache.go:56] Caching tarball of preloaded images
	I0328 01:00:05.787364 1131600 preload.go:173] Found /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0328 01:00:05.787376 1131600 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0328 01:00:05.787509 1131600 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/config.json ...
	I0328 01:00:05.787742 1131600 start.go:360] acquireMachinesLock for default-k8s-diff-port-283961: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 01:00:08.650481 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:11.722571 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:17.802536 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:20.874568 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:26.954473 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:30.026674 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:36.106489 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:39.178555 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:45.258539 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:48.330581 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:54.410577 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:57.482545 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:03.562558 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:06.634602 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:12.714559 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:15.786597 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:21.866544 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:24.938619 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:31.018631 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:34.090562 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:40.170864 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:43.242565 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:49.322492 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:52.394572 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:58.474562 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:01.546621 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:07.626510 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:10.698534 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:13.703348 1130949 start.go:364] duration metric: took 4m25.677777198s to acquireMachinesLock for "embed-certs-808809"
	I0328 01:02:13.703416 1130949 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:02:13.703429 1130949 fix.go:54] fixHost starting: 
	I0328 01:02:13.703888 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:02:13.703923 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:02:13.719480 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44835
	I0328 01:02:13.719968 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:02:13.720450 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:02:13.720475 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:02:13.720774 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:02:13.721011 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:13.721182 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:02:13.722796 1130949 fix.go:112] recreateIfNeeded on embed-certs-808809: state=Stopped err=<nil>
	I0328 01:02:13.722828 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	W0328 01:02:13.722972 1130949 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:02:13.724895 1130949 out.go:177] * Restarting existing kvm2 VM for "embed-certs-808809" ...
	I0328 01:02:13.700647 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:02:13.700689 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:02:13.701054 1130827 buildroot.go:166] provisioning hostname "no-preload-248059"
	I0328 01:02:13.701085 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:02:13.701344 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:02:13.703200 1130827 machine.go:97] duration metric: took 4m37.399616994s to provisionDockerMachine
	I0328 01:02:13.703243 1130827 fix.go:56] duration metric: took 4m37.42352766s for fixHost
	I0328 01:02:13.703249 1130827 start.go:83] releasing machines lock for "no-preload-248059", held for 4m37.423563163s
	W0328 01:02:13.703274 1130827 start.go:713] error starting host: provision: host is not running
	W0328 01:02:13.703400 1130827 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0328 01:02:13.703411 1130827 start.go:728] Will try again in 5 seconds ...
	I0328 01:02:13.726437 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Start
	I0328 01:02:13.726574 1130949 main.go:141] libmachine: (embed-certs-808809) Ensuring networks are active...
	I0328 01:02:13.727407 1130949 main.go:141] libmachine: (embed-certs-808809) Ensuring network default is active
	I0328 01:02:13.727667 1130949 main.go:141] libmachine: (embed-certs-808809) Ensuring network mk-embed-certs-808809 is active
	I0328 01:02:13.728050 1130949 main.go:141] libmachine: (embed-certs-808809) Getting domain xml...
	I0328 01:02:13.728836 1130949 main.go:141] libmachine: (embed-certs-808809) Creating domain...
	I0328 01:02:14.931757 1130949 main.go:141] libmachine: (embed-certs-808809) Waiting to get IP...
	I0328 01:02:14.932921 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:14.933298 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:14.933396 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:14.933294 1131950 retry.go:31] will retry after 279.257708ms: waiting for machine to come up
	I0328 01:02:15.213830 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:15.214439 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:15.214472 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:15.214415 1131950 retry.go:31] will retry after 387.406107ms: waiting for machine to come up
	I0328 01:02:15.603078 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:15.603464 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:15.603497 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:15.603431 1131950 retry.go:31] will retry after 466.553599ms: waiting for machine to come up
	I0328 01:02:16.072165 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:16.072702 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:16.072732 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:16.072643 1131950 retry.go:31] will retry after 375.428381ms: waiting for machine to come up
	I0328 01:02:16.449155 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:16.449614 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:16.449652 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:16.449553 1131950 retry.go:31] will retry after 466.238903ms: waiting for machine to come up
	I0328 01:02:16.917246 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:16.917697 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:16.917723 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:16.917633 1131950 retry.go:31] will retry after 772.819544ms: waiting for machine to come up
	I0328 01:02:17.691645 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:17.692121 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:17.692151 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:17.692071 1131950 retry.go:31] will retry after 1.19065976s: waiting for machine to come up
	I0328 01:02:18.704949 1130827 start.go:360] acquireMachinesLock for no-preload-248059: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 01:02:18.884525 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:18.885019 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:18.885044 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:18.884980 1131950 retry.go:31] will retry after 1.434726863s: waiting for machine to come up
	I0328 01:02:20.321473 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:20.322009 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:20.322035 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:20.321951 1131950 retry.go:31] will retry after 1.275277555s: waiting for machine to come up
	I0328 01:02:21.599454 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:21.600049 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:21.600074 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:21.599982 1131950 retry.go:31] will retry after 1.852516502s: waiting for machine to come up
	I0328 01:02:23.455282 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:23.455760 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:23.455830 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:23.455746 1131950 retry.go:31] will retry after 2.056736141s: waiting for machine to come up
	I0328 01:02:25.514112 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:25.514538 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:25.514569 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:25.514492 1131950 retry.go:31] will retry after 2.711520437s: waiting for machine to come up
	I0328 01:02:32.751719 1131323 start.go:364] duration metric: took 3m27.302408957s to acquireMachinesLock for "old-k8s-version-986088"
	I0328 01:02:32.751823 1131323 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:02:32.751833 1131323 fix.go:54] fixHost starting: 
	I0328 01:02:32.752289 1131323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:02:32.752326 1131323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:02:32.770119 1131323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45807
	I0328 01:02:32.770723 1131323 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:02:32.771352 1131323 main.go:141] libmachine: Using API Version  1
	I0328 01:02:32.771380 1131323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:02:32.771790 1131323 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:02:32.772020 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:32.772206 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetState
	I0328 01:02:32.773947 1131323 fix.go:112] recreateIfNeeded on old-k8s-version-986088: state=Stopped err=<nil>
	I0328 01:02:32.773980 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	W0328 01:02:32.774166 1131323 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:02:32.776416 1131323 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-986088" ...
	I0328 01:02:28.229576 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:28.229970 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:28.230000 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:28.229920 1131950 retry.go:31] will retry after 3.231405371s: waiting for machine to come up
	I0328 01:02:31.463477 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.463884 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has current primary IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.463902 1130949 main.go:141] libmachine: (embed-certs-808809) Found IP for machine: 192.168.72.210
	I0328 01:02:31.463915 1130949 main.go:141] libmachine: (embed-certs-808809) Reserving static IP address...
	I0328 01:02:31.464394 1130949 main.go:141] libmachine: (embed-certs-808809) Reserved static IP address: 192.168.72.210
	I0328 01:02:31.464413 1130949 main.go:141] libmachine: (embed-certs-808809) Waiting for SSH to be available...
	I0328 01:02:31.464439 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "embed-certs-808809", mac: "52:54:00:60:d4:d2", ip: "192.168.72.210"} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.464464 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | skip adding static IP to network mk-embed-certs-808809 - found existing host DHCP lease matching {name: "embed-certs-808809", mac: "52:54:00:60:d4:d2", ip: "192.168.72.210"}
	I0328 01:02:31.464480 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Getting to WaitForSSH function...
	I0328 01:02:31.466488 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.466876 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.466916 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.467054 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Using SSH client type: external
	I0328 01:02:31.467085 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa (-rw-------)
	I0328 01:02:31.467124 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:02:31.467138 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | About to run SSH command:
	I0328 01:02:31.467155 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | exit 0
	I0328 01:02:31.590708 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | SSH cmd err, output: <nil>: 
	I0328 01:02:31.591111 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetConfigRaw
	I0328 01:02:31.591959 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:31.594592 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.595075 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.595114 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.595364 1130949 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/config.json ...
	I0328 01:02:31.595634 1130949 machine.go:94] provisionDockerMachine start ...
	I0328 01:02:31.595656 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:31.595901 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.598184 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.598529 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.598556 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.598681 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:31.598851 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.599012 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.599163 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:31.599333 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:31.599604 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:31.599619 1130949 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:02:31.703241 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:02:31.703272 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetMachineName
	I0328 01:02:31.703575 1130949 buildroot.go:166] provisioning hostname "embed-certs-808809"
	I0328 01:02:31.703602 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetMachineName
	I0328 01:02:31.703779 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.706495 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.706777 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.706799 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.706978 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:31.707146 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.707334 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.707580 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:31.707765 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:31.707985 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:31.708004 1130949 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-808809 && echo "embed-certs-808809" | sudo tee /etc/hostname
	I0328 01:02:31.821578 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-808809
	
	I0328 01:02:31.821608 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.824412 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.824791 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.824825 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.825030 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:31.825253 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.825432 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.825589 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:31.825758 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:31.825950 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:31.825976 1130949 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-808809' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-808809/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-808809' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:02:31.937655 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:02:31.937701 1130949 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:02:31.937728 1130949 buildroot.go:174] setting up certificates
	I0328 01:02:31.937742 1130949 provision.go:84] configureAuth start
	I0328 01:02:31.937754 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetMachineName
	I0328 01:02:31.938093 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:31.940874 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.941328 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.941360 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.941580 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.944250 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.944580 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.944610 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.944828 1130949 provision.go:143] copyHostCerts
	I0328 01:02:31.944910 1130949 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:02:31.944926 1130949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:02:31.945006 1130949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:02:31.945151 1130949 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:02:31.945162 1130949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:02:31.945205 1130949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:02:31.945285 1130949 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:02:31.945294 1130949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:02:31.945330 1130949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:02:31.945400 1130949 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.embed-certs-808809 san=[127.0.0.1 192.168.72.210 embed-certs-808809 localhost minikube]
	I0328 01:02:32.070925 1130949 provision.go:177] copyRemoteCerts
	I0328 01:02:32.071007 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:02:32.071067 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.073876 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.074295 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.074339 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.074541 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.074754 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.074931 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.075091 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.158945 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:02:32.184903 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0328 01:02:32.210411 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:02:32.235788 1130949 provision.go:87] duration metric: took 298.03126ms to configureAuth
	I0328 01:02:32.235827 1130949 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:02:32.236116 1130949 config.go:182] Loaded profile config "embed-certs-808809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:02:32.236336 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.239186 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.239520 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.239555 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.239782 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.240036 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.240257 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.240431 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.240633 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:32.240836 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:32.240862 1130949 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:02:32.513263 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:02:32.513298 1130949 machine.go:97] duration metric: took 917.647337ms to provisionDockerMachine
	I0328 01:02:32.513314 1130949 start.go:293] postStartSetup for "embed-certs-808809" (driver="kvm2")
	I0328 01:02:32.513326 1130949 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:02:32.513365 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.513727 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:02:32.513770 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.516906 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.517382 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.517425 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.517603 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.517831 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.517989 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.518115 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.600013 1130949 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:02:32.604953 1130949 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:02:32.604983 1130949 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:02:32.605057 1130949 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:02:32.605148 1130949 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:02:32.605265 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:02:32.617685 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:02:32.646415 1130949 start.go:296] duration metric: took 133.084551ms for postStartSetup
	I0328 01:02:32.646462 1130949 fix.go:56] duration metric: took 18.943034019s for fixHost
	I0328 01:02:32.646490 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.649346 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.649686 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.649717 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.649864 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.650191 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.650444 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.650637 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.650844 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:32.651036 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:32.651069 1130949 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 01:02:32.751522 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587752.718800758
	
	I0328 01:02:32.751547 1130949 fix.go:216] guest clock: 1711587752.718800758
	I0328 01:02:32.751556 1130949 fix.go:229] Guest: 2024-03-28 01:02:32.718800758 +0000 UTC Remote: 2024-03-28 01:02:32.646466137 +0000 UTC m=+284.780134501 (delta=72.334621ms)
	I0328 01:02:32.751598 1130949 fix.go:200] guest clock delta is within tolerance: 72.334621ms
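(Not part of the captured log.) The fix.go lines above compare the guest clock against the host-side timestamp and accept the drift because the roughly 72ms delta is within tolerance. A small illustrative Go sketch of that comparison, using the two timestamps from the log; the helper name and the 2s tolerance are assumptions, not minikube's constants.

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance reports the absolute guest-vs-remote clock skew
// and whether it is small enough to skip resetting the guest clock.
func clockDeltaWithinTolerance(guest, remote time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Timestamps taken from the log lines above.
	remote := time.Date(2024, 3, 28, 1, 2, 32, 646466137, time.UTC)
	guest := time.Date(2024, 3, 28, 1, 2, 32, 718800758, time.UTC)
	delta, ok := clockDeltaWithinTolerance(guest, remote, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // delta≈72.334621ms, true
}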
	I0328 01:02:32.751610 1130949 start.go:83] releasing machines lock for "embed-certs-808809", held for 19.048217918s
	I0328 01:02:32.751638 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.751953 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:32.754795 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.755205 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.755240 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.755454 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.756111 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.756320 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.756412 1130949 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:02:32.756475 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.756612 1130949 ssh_runner.go:195] Run: cat /version.json
	I0328 01:02:32.756646 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.759337 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.759468 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.759788 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.759808 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.759845 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.759866 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.760009 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.760018 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.760214 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.760222 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.760364 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.760532 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.760639 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.760698 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.840137 1130949 ssh_runner.go:195] Run: systemctl --version
	I0328 01:02:32.874039 1130949 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:02:33.020534 1130949 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:02:33.027141 1130949 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:02:33.027213 1130949 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:02:33.043738 1130949 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:02:33.043767 1130949 start.go:494] detecting cgroup driver to use...
	I0328 01:02:33.043840 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:02:33.064332 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:02:33.081926 1130949 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:02:33.082016 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:02:33.097179 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:02:33.113157 1130949 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:02:33.233183 1130949 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:02:33.374061 1130949 docker.go:233] disabling docker service ...
	I0328 01:02:33.374145 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:02:33.389813 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:02:33.403439 1130949 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:02:33.546146 1130949 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:02:33.706968 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:02:33.722279 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:02:33.742578 1130949 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 01:02:33.742652 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.754966 1130949 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:02:33.755027 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.767170 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.779960 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.792448 1130949 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:02:33.804912 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.818038 1130949 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.838794 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.852157 1130949 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:02:33.862921 1130949 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:02:33.862981 1130949 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:02:33.880973 1130949 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:02:33.892698 1130949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:02:34.029903 1130949 ssh_runner.go:195] Run: sudo systemctl restart crio
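(Not part of the captured log.) The sequence of sed edits above rewrites the CRI-O drop-in before the restart: it pins the pause image, switches the cgroup manager to cgroupfs, moves conmon into the pod cgroup, and opens unprivileged ports via default_sysctls. Purely as an illustration, the resulting /etc/crio/crio.conf.d/02-crio.conf might look roughly like the following; the exact section layout is an assumption, not taken from this run.

# hypothetical 02-crio.conf after the edits logged above
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]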
	I0328 01:02:34.170977 1130949 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:02:34.171074 1130949 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:02:34.176652 1130949 start.go:562] Will wait 60s for crictl version
	I0328 01:02:34.176736 1130949 ssh_runner.go:195] Run: which crictl
	I0328 01:02:34.180993 1130949 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:02:34.224564 1130949 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:02:34.224675 1130949 ssh_runner.go:195] Run: crio --version
	I0328 01:02:34.254457 1130949 ssh_runner.go:195] Run: crio --version
	I0328 01:02:34.287281 1130949 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0328 01:02:32.778280 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .Start
	I0328 01:02:32.778470 1131323 main.go:141] libmachine: (old-k8s-version-986088) Ensuring networks are active...
	I0328 01:02:32.779179 1131323 main.go:141] libmachine: (old-k8s-version-986088) Ensuring network default is active
	I0328 01:02:32.779577 1131323 main.go:141] libmachine: (old-k8s-version-986088) Ensuring network mk-old-k8s-version-986088 is active
	I0328 01:02:32.779982 1131323 main.go:141] libmachine: (old-k8s-version-986088) Getting domain xml...
	I0328 01:02:32.780732 1131323 main.go:141] libmachine: (old-k8s-version-986088) Creating domain...
	I0328 01:02:34.066287 1131323 main.go:141] libmachine: (old-k8s-version-986088) Waiting to get IP...
	I0328 01:02:34.067193 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.067618 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.067684 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.067586 1132067 retry.go:31] will retry after 291.270379ms: waiting for machine to come up
	I0328 01:02:34.360203 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.360690 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.360721 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.360638 1132067 retry.go:31] will retry after 234.968456ms: waiting for machine to come up
	I0328 01:02:34.597291 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.597818 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.597849 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.597750 1132067 retry.go:31] will retry after 382.522593ms: waiting for machine to come up
	I0328 01:02:34.982502 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.983176 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.983205 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.983133 1132067 retry.go:31] will retry after 436.332635ms: waiting for machine to come up
	I0328 01:02:34.288748 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:34.292122 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:34.292516 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:34.292556 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:34.292869 1130949 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0328 01:02:34.298738 1130949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:02:34.313529 1130949 kubeadm.go:877] updating cluster {Name:embed-certs-808809 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-808809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:02:34.313698 1130949 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 01:02:34.313762 1130949 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:02:34.356518 1130949 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0328 01:02:34.356614 1130949 ssh_runner.go:195] Run: which lz4
	I0328 01:02:34.361492 1130949 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0328 01:02:34.366053 1130949 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 01:02:34.366090 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0328 01:02:36.024197 1130949 crio.go:462] duration metric: took 1.662731937s to copy over tarball
	I0328 01:02:36.024287 1130949 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 01:02:35.421623 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:35.422164 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:35.422198 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:35.422135 1132067 retry.go:31] will retry after 700.861268ms: waiting for machine to come up
	I0328 01:02:36.124589 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:36.125001 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:36.125031 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:36.124948 1132067 retry.go:31] will retry after 932.342478ms: waiting for machine to come up
	I0328 01:02:37.058954 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:37.059390 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:37.059424 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:37.059332 1132067 retry.go:31] will retry after 1.163248691s: waiting for machine to come up
	I0328 01:02:38.224574 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:38.225019 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:38.225053 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:38.224959 1132067 retry.go:31] will retry after 1.13372539s: waiting for machine to come up
	I0328 01:02:39.360393 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:39.360953 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:39.360984 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:39.360906 1132067 retry.go:31] will retry after 1.793272671s: waiting for machine to come up
	I0328 01:02:38.420741 1130949 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.396415089s)
	I0328 01:02:38.420788 1130949 crio.go:469] duration metric: took 2.39655808s to extract the tarball
	I0328 01:02:38.420797 1130949 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0328 01:02:38.459869 1130949 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:02:38.505999 1130949 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 01:02:38.506030 1130949 cache_images.go:84] Images are preloaded, skipping loading
	I0328 01:02:38.506039 1130949 kubeadm.go:928] updating node { 192.168.72.210 8443 v1.29.3 crio true true} ...
	I0328 01:02:38.506185 1130949 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-808809 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-808809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:02:38.506301 1130949 ssh_runner.go:195] Run: crio config
	I0328 01:02:38.551608 1130949 cni.go:84] Creating CNI manager for ""
	I0328 01:02:38.551633 1130949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:02:38.551646 1130949 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:02:38.551673 1130949 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.210 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-808809 NodeName:embed-certs-808809 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 01:02:38.551813 1130949 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-808809"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 01:02:38.551881 1130949 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 01:02:38.562640 1130949 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:02:38.562732 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:02:38.572870 1130949 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0328 01:02:38.590866 1130949 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:02:38.608302 1130949 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0328 01:02:38.626925 1130949 ssh_runner.go:195] Run: grep 192.168.72.210	control-plane.minikube.internal$ /etc/hosts
	I0328 01:02:38.631111 1130949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.210	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:02:38.644528 1130949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:02:38.785485 1130949 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:02:38.804087 1130949 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809 for IP: 192.168.72.210
	I0328 01:02:38.804113 1130949 certs.go:194] generating shared ca certs ...
	I0328 01:02:38.804132 1130949 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:02:38.804285 1130949 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:02:38.804326 1130949 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:02:38.804363 1130949 certs.go:256] generating profile certs ...
	I0328 01:02:38.804505 1130949 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/client.key
	I0328 01:02:38.804588 1130949 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/apiserver.key.bdc16448
	I0328 01:02:38.804638 1130949 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/proxy-client.key
	I0328 01:02:38.804798 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:02:38.804829 1130949 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:02:38.804836 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:02:38.804860 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:02:38.804882 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:02:38.804902 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:02:38.804943 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:02:38.805829 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:02:38.864847 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:02:38.899197 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:02:38.926734 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:02:38.958277 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0328 01:02:38.997201 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:02:39.023136 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:02:39.048459 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:02:39.074052 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:02:39.099326 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:02:39.124775 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:02:39.149638 1130949 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:02:39.169169 1130949 ssh_runner.go:195] Run: openssl version
	I0328 01:02:39.175948 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:02:39.188255 1130949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:02:39.194296 1130949 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:02:39.194374 1130949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:02:39.201138 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:02:39.213554 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:02:39.226474 1130949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:02:39.232074 1130949 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:02:39.232149 1130949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:02:39.238733 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:02:39.250983 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:02:39.263746 1130949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:02:39.268967 1130949 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:02:39.269038 1130949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:02:39.275589 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 01:02:39.287731 1130949 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:02:39.292985 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:02:39.300366 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:02:39.307241 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:02:39.314522 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:02:39.321070 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:02:39.327777 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0328 01:02:39.334174 1130949 kubeadm.go:391] StartCluster: {Name:embed-certs-808809 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-808809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:02:39.334310 1130949 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:02:39.334367 1130949 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:02:39.376035 1130949 cri.go:89] found id: ""
	I0328 01:02:39.376145 1130949 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:02:39.387349 1130949 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:02:39.387377 1130949 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:02:39.387385 1130949 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:02:39.387469 1130949 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:02:39.397918 1130949 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:02:39.399122 1130949 kubeconfig.go:125] found "embed-certs-808809" server: "https://192.168.72.210:8443"
	I0328 01:02:39.401219 1130949 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:02:39.411475 1130949 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.210
	I0328 01:02:39.411562 1130949 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:02:39.411583 1130949 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:02:39.411650 1130949 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:02:39.449529 1130949 cri.go:89] found id: ""
	I0328 01:02:39.449638 1130949 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:02:39.468553 1130949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:02:39.479489 1130949 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:02:39.479522 1130949 kubeadm.go:156] found existing configuration files:
	
	I0328 01:02:39.479589 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:02:39.489619 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:02:39.489689 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:02:39.499726 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:02:39.509362 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:02:39.509447 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:02:39.519262 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:02:39.528858 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:02:39.528920 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:02:39.538784 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:02:39.548517 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:02:39.548593 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:02:39.559931 1130949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:02:39.574178 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:39.706243 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.342144 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.559108 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.636713 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.743171 1130949 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:02:40.743269 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:02:41.243401 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:02:41.743363 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:02:41.776504 1130949 api_server.go:72] duration metric: took 1.033329844s to wait for apiserver process to appear ...
	I0328 01:02:41.776547 1130949 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:02:41.776574 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:41.777140 1130949 api_server.go:269] stopped: https://192.168.72.210:8443/healthz: Get "https://192.168.72.210:8443/healthz": dial tcp 192.168.72.210:8443: connect: connection refused
	I0328 01:02:42.276690 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:41.156898 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:41.157309 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:41.157336 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:41.157263 1132067 retry.go:31] will retry after 1.863775673s: waiting for machine to come up
	I0328 01:02:43.023074 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:43.023470 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:43.023507 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:43.023419 1132067 retry.go:31] will retry after 2.73600503s: waiting for machine to come up
	I0328 01:02:44.743286 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:02:44.743383 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:02:44.743412 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:44.822370 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:02:44.822416 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:02:44.822436 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:44.847406 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:02:44.847462 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:02:45.276899 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:45.281884 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:02:45.281919 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:02:45.777495 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:45.783673 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:02:45.783704 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:02:46.277372 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:46.282281 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 200:
	ok
	I0328 01:02:46.291242 1130949 api_server.go:141] control plane version: v1.29.3
	I0328 01:02:46.291287 1130949 api_server.go:131] duration metric: took 4.514730698s to wait for apiserver health ...
	I0328 01:02:46.291301 1130949 cni.go:84] Creating CNI manager for ""
	I0328 01:02:46.291310 1130949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:02:46.293461 1130949 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:02:46.294971 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:02:46.312955 1130949 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:02:46.345653 1130949 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:02:46.355470 1130949 system_pods.go:59] 8 kube-system pods found
	I0328 01:02:46.355506 1130949 system_pods.go:61] "coredns-76f75df574-pr5d8" [90a6f3d5-6f33-4c41-804b-4b20c518aa23] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:02:46.355512 1130949 system_pods.go:61] "etcd-embed-certs-808809" [93b6b8ee-f83f-4848-b2c5-912ec07acd52] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 01:02:46.355519 1130949 system_pods.go:61] "kube-apiserver-embed-certs-808809" [22eb788f-4647-4a07-b5bf-ecdd54c28fcd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 01:02:46.355530 1130949 system_pods.go:61] "kube-controller-manager-embed-certs-808809" [83fecd9f-c0de-4afe-b5b5-7c04bd3adc20] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 01:02:46.355545 1130949 system_pods.go:61] "kube-proxy-qwzpg" [57a814c6-54c8-4fa7-b7d7-bcdd4bbc91d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:02:46.355553 1130949 system_pods.go:61] "kube-scheduler-embed-certs-808809" [0b229d84-43fb-45ee-8d49-39204812d490] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 01:02:46.355568 1130949 system_pods.go:61] "metrics-server-57f55c9bc5-swsxp" [4b20e133-3054-4806-9b7f-44d8c8c35a4d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:02:46.355580 1130949 system_pods.go:61] "storage-provisioner" [59303061-19e3-4aed-8753-804988a2a44e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:02:46.355590 1130949 system_pods.go:74] duration metric: took 9.908316ms to wait for pod list to return data ...
	I0328 01:02:46.355603 1130949 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:02:46.358936 1130949 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:02:46.358987 1130949 node_conditions.go:123] node cpu capacity is 2
	I0328 01:02:46.359006 1130949 node_conditions.go:105] duration metric: took 3.394695ms to run NodePressure ...
	I0328 01:02:46.359054 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:46.686479 1130949 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 01:02:46.692502 1130949 kubeadm.go:733] kubelet initialised
	I0328 01:02:46.692526 1130949 kubeadm.go:734] duration metric: took 6.022393ms waiting for restarted kubelet to initialise ...
	I0328 01:02:46.692534 1130949 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:02:46.699146 1130949 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-pr5d8" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:45.762440 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:45.762891 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:45.762915 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:45.762845 1132067 retry.go:31] will retry after 2.201941476s: waiting for machine to come up
	I0328 01:02:47.966601 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:47.967196 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:47.967237 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:47.967144 1132067 retry.go:31] will retry after 4.122216816s: waiting for machine to come up
	I0328 01:02:48.709890 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:51.207697 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:53.391471 1131600 start.go:364] duration metric: took 2m47.603687739s to acquireMachinesLock for "default-k8s-diff-port-283961"
	I0328 01:02:53.391553 1131600 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:02:53.391565 1131600 fix.go:54] fixHost starting: 
	I0328 01:02:53.391980 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:02:53.392031 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:02:53.409035 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34073
	I0328 01:02:53.409556 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:02:53.410105 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:02:53.410136 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:02:53.410492 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:02:53.410734 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:02:53.410903 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:02:53.412710 1131600 fix.go:112] recreateIfNeeded on default-k8s-diff-port-283961: state=Stopped err=<nil>
	I0328 01:02:53.412739 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	W0328 01:02:53.412927 1131600 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:02:53.414773 1131600 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-283961" ...
	I0328 01:02:52.091210 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.091759 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has current primary IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.091794 1131323 main.go:141] libmachine: (old-k8s-version-986088) Found IP for machine: 192.168.50.174
	I0328 01:02:52.091841 1131323 main.go:141] libmachine: (old-k8s-version-986088) Reserving static IP address...
	I0328 01:02:52.092295 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "old-k8s-version-986088", mac: "52:54:00:f6:94:40", ip: "192.168.50.174"} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.092321 1131323 main.go:141] libmachine: (old-k8s-version-986088) Reserved static IP address: 192.168.50.174
	I0328 01:02:52.092343 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | skip adding static IP to network mk-old-k8s-version-986088 - found existing host DHCP lease matching {name: "old-k8s-version-986088", mac: "52:54:00:f6:94:40", ip: "192.168.50.174"}
	I0328 01:02:52.092356 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | Getting to WaitForSSH function...
	I0328 01:02:52.092373 1131323 main.go:141] libmachine: (old-k8s-version-986088) Waiting for SSH to be available...
	I0328 01:02:52.094682 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.095012 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.095033 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.095158 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | Using SSH client type: external
	I0328 01:02:52.095180 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa (-rw-------)
	I0328 01:02:52.095208 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:02:52.095218 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | About to run SSH command:
	I0328 01:02:52.095232 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | exit 0
	I0328 01:02:52.218494 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | SSH cmd err, output: <nil>: 
	I0328 01:02:52.218983 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetConfigRaw
	I0328 01:02:52.219663 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:52.222349 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.222791 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.222823 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.223191 1131323 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/config.json ...
	I0328 01:02:52.223388 1131323 machine.go:94] provisionDockerMachine start ...
	I0328 01:02:52.223409 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:52.223605 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.225686 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.225999 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.226038 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.226131 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.226341 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.226507 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.226633 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.226802 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.227078 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.227095 1131323 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:02:52.327218 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:02:52.327249 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 01:02:52.327515 1131323 buildroot.go:166] provisioning hostname "old-k8s-version-986088"
	I0328 01:02:52.327542 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 01:02:52.327754 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.330253 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.330661 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.330691 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.330827 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.331048 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.331258 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.331406 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.331593 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.331772 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.331783 1131323 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-986088 && echo "old-k8s-version-986088" | sudo tee /etc/hostname
	I0328 01:02:52.445910 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-986088
	
	I0328 01:02:52.445943 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.449023 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.449320 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.449358 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.449595 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.449810 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.449970 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.450116 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.450310 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.450572 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.450640 1131323 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-986088' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-986088/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-986088' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:02:52.567493 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:02:52.567529 1131323 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:02:52.567559 1131323 buildroot.go:174] setting up certificates
	I0328 01:02:52.567573 1131323 provision.go:84] configureAuth start
	I0328 01:02:52.567587 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 01:02:52.567944 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:52.570860 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.571363 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.571400 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.571547 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.574052 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.574483 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.574517 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.574619 1131323 provision.go:143] copyHostCerts
	I0328 01:02:52.574698 1131323 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:02:52.574710 1131323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:02:52.574778 1131323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:02:52.574894 1131323 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:02:52.574908 1131323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:02:52.574985 1131323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:02:52.575086 1131323 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:02:52.575095 1131323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:02:52.575117 1131323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:02:52.575194 1131323 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-986088 san=[127.0.0.1 192.168.50.174 localhost minikube old-k8s-version-986088]
	I0328 01:02:52.688709 1131323 provision.go:177] copyRemoteCerts
	I0328 01:02:52.688776 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:02:52.688809 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.691529 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.691977 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.692024 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.692188 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.692425 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.692620 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.692774 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:52.777200 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0328 01:02:52.808740 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:02:52.836646 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:02:52.862627 1131323 provision.go:87] duration metric: took 295.032419ms to configureAuth
	I0328 01:02:52.862668 1131323 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:02:52.862908 1131323 config.go:182] Loaded profile config "old-k8s-version-986088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0328 01:02:52.863019 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.865838 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.866585 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.866630 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.867271 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.867521 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.867687 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.867826 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.867961 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.868176 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.868194 1131323 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:02:53.154903 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:02:53.154936 1131323 machine.go:97] duration metric: took 931.534047ms to provisionDockerMachine
	I0328 01:02:53.154949 1131323 start.go:293] postStartSetup for "old-k8s-version-986088" (driver="kvm2")
	I0328 01:02:53.154961 1131323 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:02:53.154997 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.155353 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:02:53.155386 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.158072 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.158448 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.158482 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.158612 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.158825 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.158974 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.159102 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:53.243411 1131323 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:02:53.247745 1131323 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:02:53.247769 1131323 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:02:53.247830 1131323 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:02:53.247903 1131323 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:02:53.247990 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:02:53.258574 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:02:53.284249 1131323 start.go:296] duration metric: took 129.2844ms for postStartSetup
	I0328 01:02:53.284300 1131323 fix.go:56] duration metric: took 20.532468979s for fixHost
	I0328 01:02:53.284324 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.287097 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.287505 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.287534 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.287642 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.287874 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.288039 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.288225 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.288439 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:53.288601 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:53.288612 1131323 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 01:02:53.391262 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587773.373998758
	
	I0328 01:02:53.391292 1131323 fix.go:216] guest clock: 1711587773.373998758
	I0328 01:02:53.391299 1131323 fix.go:229] Guest: 2024-03-28 01:02:53.373998758 +0000 UTC Remote: 2024-03-28 01:02:53.284304642 +0000 UTC m=+227.998260980 (delta=89.694116ms)
	I0328 01:02:53.391341 1131323 fix.go:200] guest clock delta is within tolerance: 89.694116ms
	I0328 01:02:53.391346 1131323 start.go:83] releasing machines lock for "old-k8s-version-986088", held for 20.639550927s
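The fixHost step above ends with a guest-clock check: minikube reads "date +%s.%N" on the guest and accepts the 89.694116ms delta against the host before releasing the machines lock. A minimal Go sketch of such a tolerance check follows; the 1s threshold and the reuse of the logged sample value are illustrative assumptions, not minikube's actual implementation.

// Editorial sketch (assumed threshold and sample value), not minikube's code:
// parse the guest's "seconds.nanoseconds" clock reading and compare it to the host.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch parses the output of `date +%s.%N`, e.g. "1711587773.373998758".
func parseEpoch(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = 1 * time.Second // assumed tolerance, for illustration only

	guest, err := parseEpoch("1711587773.373998758") // sample value taken from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}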
	I0328 01:02:53.391377 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.391728 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:53.394421 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.394780 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.394811 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.394932 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.395449 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.395729 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.395828 1131323 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:02:53.395883 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.395985 1131323 ssh_runner.go:195] Run: cat /version.json
	I0328 01:02:53.396014 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.398819 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399010 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399281 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.399320 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399451 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.399550 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.399620 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399640 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.399880 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.399902 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.400065 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.400081 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:53.400245 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.400445 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:53.514453 1131323 ssh_runner.go:195] Run: systemctl --version
	I0328 01:02:53.521123 1131323 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:02:53.678366 1131323 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:02:53.685402 1131323 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:02:53.685473 1131323 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:02:53.702781 1131323 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:02:53.702816 1131323 start.go:494] detecting cgroup driver to use...
	I0328 01:02:53.702900 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:02:53.720343 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:02:53.736749 1131323 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:02:53.736824 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:02:53.761087 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:02:53.779008 1131323 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:02:53.895064 1131323 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:02:54.060741 1131323 docker.go:233] disabling docker service ...
	I0328 01:02:54.060825 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:02:54.079139 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:02:54.093523 1131323 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:02:54.247544 1131323 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:02:54.396392 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:02:54.422612 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:02:54.443759 1131323 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0328 01:02:54.443817 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.459794 1131323 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:02:54.459875 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.472784 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.484963 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.496654 1131323 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:02:54.508382 1131323 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:02:54.518607 1131323 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:02:54.518687 1131323 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:02:54.532356 1131323 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:02:54.544424 1131323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:02:54.685782 1131323 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 01:02:54.847233 1131323 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:02:54.847314 1131323 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:02:54.853148 1131323 start.go:562] Will wait 60s for crictl version
	I0328 01:02:54.853248 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:02:54.857536 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:02:54.901937 1131323 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:02:54.902082 1131323 ssh_runner.go:195] Run: crio --version
	I0328 01:02:54.935571 1131323 ssh_runner.go:195] Run: crio --version
	I0328 01:02:54.971452 1131323 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0328 01:02:54.972964 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:54.976523 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:54.976985 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:54.977017 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:54.977369 1131323 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0328 01:02:54.982326 1131323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:02:54.996239 1131323 kubeadm.go:877] updating cluster {Name:old-k8s-version-986088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:02:54.996371 1131323 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0328 01:02:54.996433 1131323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:02:55.045404 1131323 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0328 01:02:55.045483 1131323 ssh_runner.go:195] Run: which lz4
	I0328 01:02:55.050226 1131323 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0328 01:02:55.055182 1131323 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 01:02:55.055221 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
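The preload handling above follows a check-then-copy pattern: stat the tarball on the guest, and only when the stat fails transfer the ~473 MB preload archive. A small Go sketch of that pattern over plain ssh/scp follows; the host alias and local file name are placeholders chosen for illustration (minikube performs the same steps through its own ssh_runner rather than the ssh/scp binaries).

// Editorial sketch (assumed host alias "minikube-vm" and local tarball path):
// check whether the preload archive already exists on the guest, copy it only if missing.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const remote = "minikube-vm"               // assumed SSH host alias
	const tarball = "preloaded-images.tar.lz4" // assumed local file
	const target = "/preloaded.tar.lz4"

	// Existence check mirrors the logged command: stat -c "%s %y" /preloaded.tar.lz4
	if err := exec.Command("ssh", remote, "stat", "-c", "%s %y", target).Run(); err == nil {
		fmt.Println("preload already present on the guest, skipping transfer")
		return
	}

	// stat exited non-zero, so the file is missing: copy it over, like the scp step in the log.
	if out, err := exec.Command("scp", tarball, remote+":"+target).CombinedOutput(); err != nil {
		fmt.Printf("scp failed: %v\n%s", err, out)
		return
	}
	fmt.Println("preload tarball copied to the guest")
}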
	I0328 01:02:53.416101 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Start
	I0328 01:02:53.416332 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Ensuring networks are active...
	I0328 01:02:53.417021 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Ensuring network default is active
	I0328 01:02:53.417446 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Ensuring network mk-default-k8s-diff-port-283961 is active
	I0328 01:02:53.417857 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Getting domain xml...
	I0328 01:02:53.418555 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Creating domain...
	I0328 01:02:54.777201 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting to get IP...
	I0328 01:02:54.778055 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:54.778563 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:54.778705 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:54.778537 1132240 retry.go:31] will retry after 259.031702ms: waiting for machine to come up
	I0328 01:02:55.039365 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.039926 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.039963 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:55.039860 1132240 retry.go:31] will retry after 254.124553ms: waiting for machine to come up
	I0328 01:02:55.295658 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.296218 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.296265 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:55.296174 1132240 retry.go:31] will retry after 349.637234ms: waiting for machine to come up
	I0328 01:02:55.647590 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.648356 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.648392 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:55.648298 1132240 retry.go:31] will retry after 446.471208ms: waiting for machine to come up
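The DBG lines above show libmachine polling for the new domain's DHCP lease, sleeping a growing, jittered interval between attempts ("will retry after ...ms: waiting for machine to come up"). A self-contained Go sketch of that retry-with-backoff shape follows; the probe function, delays, and the address it eventually returns are invented for illustration and are not minikube's retry.go.

// Editorial sketch of retry-with-growing-backoff; the probe is a stand-in, not a libvirt query.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls probe until it succeeds or the deadline passes, sleeping a
// jittered, slowly growing interval between attempts (similar to the log above).
func retry(timeout time.Duration, probe func() (string, error)) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := probe()
		if err == nil {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		backoff += backoff / 3 // grow gradually rather than doubling
	}
}

func main() {
	// Stand-in probe: pretend the DHCP lease appears on the fourth attempt.
	calls := 0
	ip, err := retry(10*time.Second, func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("unable to find current IP address of domain")
		}
		return "192.168.72.210", nil // hypothetical address, for illustration only
	})
	fmt.Println(ip, err)
}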
	I0328 01:02:53.707811 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:55.708380 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:57.213059 1130949 pod_ready.go:92] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:57.213097 1130949 pod_ready.go:81] duration metric: took 10.513921238s for pod "coredns-76f75df574-pr5d8" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.213113 1130949 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.222308 1130949 pod_ready.go:92] pod "etcd-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:57.222344 1130949 pod_ready.go:81] duration metric: took 9.214056ms for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.222357 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.231530 1130949 pod_ready.go:92] pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:57.231558 1130949 pod_ready.go:81] duration metric: took 9.192864ms for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.231568 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
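The pod_ready lines above poll each control-plane pod until its Ready condition reports True, giving up after 4m0s. A rough Go equivalent that shells out to kubectl and reads the Ready condition via jsonpath follows; the context and pod names are taken from the log, while the 2s poll interval and the use of the kubectl CLI (rather than minikube's in-process client) are assumptions.

// Editorial sketch (assumes kubectl on PATH and a reachable cluster context):
// poll a pod's Ready condition until it is True or a timeout expires.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podReady(context, namespace, name string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", context,
		"-n", namespace, "get", "pod", name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	const (
		context   = "embed-certs-808809" // profile/context name taken from the log
		namespace = "kube-system"
		pod       = "etcd-embed-certs-808809"
		timeout   = 4 * time.Minute
	)
	deadline := time.Now().Add(timeout)
	for {
		ready, err := podReady(context, namespace, pod)
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("timed out waiting for pod to become Ready")
			return
		}
		time.Sleep(2 * time.Second) // assumed poll interval
	}
}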
	I0328 01:02:56.994163 1131323 crio.go:462] duration metric: took 1.943992561s to copy over tarball
	I0328 01:02:56.994252 1131323 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 01:03:00.215115 1131323 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.220825311s)
	I0328 01:03:00.215159 1131323 crio.go:469] duration metric: took 3.22095583s to extract the tarball
	I0328 01:03:00.215171 1131323 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0328 01:03:00.259151 1131323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:00.298446 1131323 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0328 01:03:00.298492 1131323 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0328 01:03:00.298585 1131323 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:00.298601 1131323 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.298613 1131323 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.298644 1131323 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.298662 1131323 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.298698 1131323 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0328 01:03:00.298593 1131323 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.298585 1131323 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.300347 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.300424 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.300470 1131323 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.300474 1131323 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.300637 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.300652 1131323 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0328 01:03:00.300723 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.300793 1131323 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:02:56.095939 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.096463 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.096501 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:56.096412 1132240 retry.go:31] will retry after 490.029649ms: waiting for machine to come up
	I0328 01:02:56.588298 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.588835 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.588868 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:56.588796 1132240 retry.go:31] will retry after 831.356628ms: waiting for machine to come up
	I0328 01:02:57.421917 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:57.422410 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:57.422443 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:57.422353 1132240 retry.go:31] will retry after 1.164764985s: waiting for machine to come up
	I0328 01:02:58.588827 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:58.589183 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:58.589225 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:58.589119 1132240 retry.go:31] will retry after 1.307248783s: waiting for machine to come up
	I0328 01:02:59.897607 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:59.897976 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:59.898008 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:59.897926 1132240 retry.go:31] will retry after 1.560958271s: waiting for machine to come up
	I0328 01:02:58.241179 1130949 pod_ready.go:92] pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:58.241216 1130949 pod_ready.go:81] duration metric: took 1.00963904s for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.241245 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qwzpg" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.249787 1130949 pod_ready.go:92] pod "kube-proxy-qwzpg" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:58.249826 1130949 pod_ready.go:81] duration metric: took 8.571225ms for pod "kube-proxy-qwzpg" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.249840 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.405101 1130949 pod_ready.go:92] pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:58.405130 1130949 pod_ready.go:81] duration metric: took 155.281142ms for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.405141 1130949 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:00.412202 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:02.412688 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:00.499788 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0328 01:03:00.539135 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.541462 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.544184 1131323 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0328 01:03:00.544227 1131323 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0328 01:03:00.544261 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.555720 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.560189 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.562639 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.574105 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.681717 1131323 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0328 01:03:00.681742 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0328 01:03:00.681765 1131323 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.681803 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.682033 1131323 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0328 01:03:00.682076 1131323 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.682115 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.732868 1131323 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0328 01:03:00.732922 1131323 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.732988 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.742680 1131323 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0328 01:03:00.742730 1131323 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0328 01:03:00.742762 1131323 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.742777 1131323 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0328 01:03:00.742805 1131323 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.742770 1131323 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.742817 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.742851 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.742865 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.770435 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.770472 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0328 01:03:00.770567 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.770588 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.770727 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.770760 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.770728 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.882338 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0328 01:03:00.896602 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0328 01:03:00.918814 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0328 01:03:00.918869 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0328 01:03:00.918919 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0328 01:03:00.918968 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0328 01:03:01.186124 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:01.334547 1131323 cache_images.go:92] duration metric: took 1.036031169s to LoadCachedImages
	W0328 01:03:01.334676 1131323 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0328 01:03:01.334694 1131323 kubeadm.go:928] updating node { 192.168.50.174 8443 v1.20.0 crio true true} ...
	I0328 01:03:01.334827 1131323 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-986088 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:03:01.334926 1131323 ssh_runner.go:195] Run: crio config
	I0328 01:03:01.391004 1131323 cni.go:84] Creating CNI manager for ""
	I0328 01:03:01.391034 1131323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:01.391054 1131323 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:03:01.391081 1131323 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.174 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-986088 NodeName:old-k8s-version-986088 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0328 01:03:01.391265 1131323 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-986088"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 01:03:01.391347 1131323 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0328 01:03:01.403684 1131323 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:03:01.403779 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:03:01.415168 1131323 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0328 01:03:01.434329 1131323 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:03:01.456280 1131323 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0328 01:03:01.476625 1131323 ssh_runner.go:195] Run: grep 192.168.50.174	control-plane.minikube.internal$ /etc/hosts
	I0328 01:03:01.480867 1131323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:01.493833 1131323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:01.642273 1131323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:03:01.661857 1131323 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088 for IP: 192.168.50.174
	I0328 01:03:01.661887 1131323 certs.go:194] generating shared ca certs ...
	I0328 01:03:01.661909 1131323 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:01.662115 1131323 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:03:01.662174 1131323 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:03:01.662188 1131323 certs.go:256] generating profile certs ...
	I0328 01:03:01.662324 1131323 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/client.key
	I0328 01:03:01.662399 1131323 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.key.b88fbc7e
	I0328 01:03:01.662447 1131323 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.key
	I0328 01:03:01.662600 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:03:01.662656 1131323 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:03:01.662672 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:03:01.662703 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:03:01.662738 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:03:01.662774 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:03:01.662826 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:01.663831 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:03:01.697171 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:03:01.742118 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:03:01.783263 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:03:01.831682 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0328 01:03:01.878051 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:03:01.915626 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:03:01.942247 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:03:01.969054 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:03:01.998651 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:03:02.024881 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:03:02.051284 1131323 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:03:02.070414 1131323 ssh_runner.go:195] Run: openssl version
	I0328 01:03:02.076635 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:03:02.089288 1131323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:03:02.094260 1131323 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:03:02.094322 1131323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:03:02.100846 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:03:02.114474 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:03:02.126467 1131323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:02.131240 1131323 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:02.131293 1131323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:02.137496 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 01:03:02.150863 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:03:02.163536 1131323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:03:02.168767 1131323 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:03:02.168850 1131323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:03:02.175218 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
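Editor's note: the openssl/ln sequence above installs each CA into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA), which is how OpenSSL-based clients discover trusted CAs. A hedged Go sketch that shells out to openssl the same way; it collapses the two symlinks in the log into one and uses illustrative paths:

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCACert links certPath into /etc/ssl/certs as "<subject-hash>.0",
    // the layout the log builds with `openssl x509 -hash` plus `ln -fs`.
    func installCACert(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // replace any stale link, like `ln -fs`
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		log.Fatal(err)
    	}
    }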
	I0328 01:03:02.188272 1131323 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:03:02.193348 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:03:02.199969 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:03:02.206424 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:03:02.213530 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:03:02.220136 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:03:02.226502 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
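Editor's note: the `-checkend 86400` calls above ask openssl whether each control-plane certificate remains valid for at least another 24 hours; a failure would trigger regeneration. A sketch of the equivalent check with Go's crypto/x509 (file path illustrative, not minikube's code):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"errors"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    // validFor reports whether the first certificate in the PEM file is still
    // valid for at least d from now: the analogue of `openssl x509 -checkend`.
    func validFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, errors.New("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("valid for another 24h:", ok)
    }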
	I0328 01:03:02.232708 1131323 kubeadm.go:391] StartCluster: {Name:old-k8s-version-986088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:03:02.232831 1131323 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:03:02.232890 1131323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:02.280062 1131323 cri.go:89] found id: ""
	I0328 01:03:02.280160 1131323 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:03:02.291968 1131323 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:03:02.292003 1131323 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:03:02.292011 1131323 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:03:02.292072 1131323 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:03:02.304006 1131323 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:03:02.305105 1131323 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-986088" does not appear in /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:03:02.305785 1131323 kubeconfig.go:62] /home/jenkins/minikube-integration/18485-1069254/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-986088" cluster setting kubeconfig missing "old-k8s-version-986088" context setting]
	I0328 01:03:02.306728 1131323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:02.308610 1131323 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:03:02.320212 1131323 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.174
	I0328 01:03:02.320265 1131323 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:03:02.320283 1131323 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:03:02.320356 1131323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:02.366411 1131323 cri.go:89] found id: ""
	I0328 01:03:02.366500 1131323 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:03:02.388351 1131323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:03:02.402621 1131323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:03:02.402652 1131323 kubeadm.go:156] found existing configuration files:
	
	I0328 01:03:02.402718 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:03:02.415559 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:03:02.415633 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:03:02.426666 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:03:02.439497 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:03:02.439558 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:03:02.451040 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:03:02.461780 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:03:02.461876 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:03:02.473295 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:03:02.484762 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:03:02.484841 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:03:02.496304 1131323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:03:02.507634 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:02.641980 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:03.598106 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:03.840026 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:03.970336 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:04.067774 1131323 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:03:04.067911 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:04.568260 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:05.068794 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
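Editor's note: the repeated pgrep runs here are a poll loop: after kubeadm restarts the static pods, the apiserver process is re-checked roughly every 500ms (visible in the timestamps) until it appears. A minimal Go sketch of that wait pattern; the command and cadence are taken from the log, the helper name and timeout are assumptions:

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls `pgrep -xnf kube-apiserver.*minikube.*` until it
    // succeeds or the context expires.
    func waitForAPIServer(ctx context.Context) error {
    	ticker := time.NewTicker(500 * time.Millisecond)
    	defer ticker.Stop()
    	for {
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			return nil // a matching process exists
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()
    	if err := waitForAPIServer(ctx); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("kube-apiserver process is up")
    }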
	I0328 01:03:01.460535 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:01.461008 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:01.461039 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:01.460962 1132240 retry.go:31] will retry after 1.839531745s: waiting for machine to come up
	I0328 01:03:03.302965 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:03.303445 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:03.303479 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:03.303387 1132240 retry.go:31] will retry after 2.461748315s: waiting for machine to come up
	I0328 01:03:04.413898 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:06.913608 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:05.568716 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:06.068362 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:06.568235 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:07.068696 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:07.567976 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:08.068032 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:08.568586 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:09.068046 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:09.568699 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:10.067967 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:05.767795 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:05.768329 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:05.768360 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:05.768279 1132240 retry.go:31] will retry after 2.321291255s: waiting for machine to come up
	I0328 01:03:08.092644 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:08.093094 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:08.093131 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:08.093046 1132240 retry.go:31] will retry after 4.151205276s: waiting for machine to come up
	I0328 01:03:09.413199 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:11.912234 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:13.671756 1130827 start.go:364] duration metric: took 54.966750689s to acquireMachinesLock for "no-preload-248059"
	I0328 01:03:13.671815 1130827 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:03:13.671823 1130827 fix.go:54] fixHost starting: 
	I0328 01:03:13.672255 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:03:13.672292 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:03:13.689811 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42017
	I0328 01:03:13.690364 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:03:13.690817 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:03:13.690843 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:03:13.691213 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:03:13.691395 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:13.691523 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:03:13.693093 1130827 fix.go:112] recreateIfNeeded on no-preload-248059: state=Stopped err=<nil>
	I0328 01:03:13.693123 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	W0328 01:03:13.693280 1130827 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:03:13.695158 1130827 out.go:177] * Restarting existing kvm2 VM for "no-preload-248059" ...
	I0328 01:03:10.568240 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:11.068028 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:11.568146 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:12.068467 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:12.568820 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:13.068031 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:13.568977 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:14.068050 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:14.567938 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:15.068711 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:12.248769 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.249440 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Found IP for machine: 192.168.39.224
	I0328 01:03:12.249467 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Reserving static IP address...
	I0328 01:03:12.249498 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has current primary IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.249832 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-283961", mac: "52:54:00:c4:df:6f", ip: "192.168.39.224"} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.249872 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | skip adding static IP to network mk-default-k8s-diff-port-283961 - found existing host DHCP lease matching {name: "default-k8s-diff-port-283961", mac: "52:54:00:c4:df:6f", ip: "192.168.39.224"}
	I0328 01:03:12.249888 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Reserved static IP address: 192.168.39.224
	I0328 01:03:12.249908 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for SSH to be available...
	I0328 01:03:12.249921 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Getting to WaitForSSH function...
	I0328 01:03:12.252053 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.252487 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.252521 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.252646 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Using SSH client type: external
	I0328 01:03:12.252677 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa (-rw-------)
	I0328 01:03:12.252709 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.224 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:03:12.252731 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | About to run SSH command:
	I0328 01:03:12.252750 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | exit 0
	I0328 01:03:12.378419 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | SSH cmd err, output: <nil>: 
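Editor's note: the WaitForSSH step above runs `exit 0` through the external ssh client and retries with growing delays until the restarted VM answers. A simplified, hedged Go sketch that only probes the SSH port with net.DialTimeout instead of running ssh; address and timeouts are assumptions:

    package main

    import (
    	"fmt"
    	"log"
    	"net"
    	"time"
    )

    // waitForSSH dials the guest's SSH port until a TCP connection succeeds,
    // a stand-in for libmachine's "run `exit 0` over ssh" probe.
    func waitForSSH(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for delay := time.Second; time.Now().Before(deadline); delay *= 2 {
    		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		time.Sleep(delay) // back off, like the retry.go waits in the log
    	}
    	return fmt.Errorf("ssh on %s did not come up within %s", addr, timeout)
    }

    func main() {
    	if err := waitForSSH("192.168.39.224:22", 2*time.Minute); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("SSH is reachable")
    }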
	I0328 01:03:12.378866 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetConfigRaw
	I0328 01:03:12.379659 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:12.382631 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.382997 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.383023 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.383276 1131600 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/config.json ...
	I0328 01:03:12.383534 1131600 machine.go:94] provisionDockerMachine start ...
	I0328 01:03:12.383567 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:12.383805 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.386472 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.386839 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.386870 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.387035 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.387240 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.387399 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.387577 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.387729 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:12.387931 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:12.387943 1131600 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:03:12.499608 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:03:12.499644 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetMachineName
	I0328 01:03:12.499930 1131600 buildroot.go:166] provisioning hostname "default-k8s-diff-port-283961"
	I0328 01:03:12.499962 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetMachineName
	I0328 01:03:12.500154 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.502737 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.503089 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.503120 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.503295 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.503516 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.503725 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.503892 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.504093 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:12.504271 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:12.504285 1131600 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-283961 && echo "default-k8s-diff-port-283961" | sudo tee /etc/hostname
	I0328 01:03:12.625590 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-283961
	
	I0328 01:03:12.625624 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.628570 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.628883 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.628968 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.629143 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.629397 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.629627 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.629825 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.630008 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:12.630191 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:12.630210 1131600 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-283961' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-283961/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-283961' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:03:12.744240 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:03:12.744280 1131600 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:03:12.744327 1131600 buildroot.go:174] setting up certificates
	I0328 01:03:12.744342 1131600 provision.go:84] configureAuth start
	I0328 01:03:12.744361 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetMachineName
	I0328 01:03:12.744722 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:12.747139 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.747448 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.747478 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.747582 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.749705 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.749964 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.749995 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.750125 1131600 provision.go:143] copyHostCerts
	I0328 01:03:12.750203 1131600 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:03:12.750217 1131600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:03:12.750323 1131600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:03:12.750435 1131600 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:03:12.750446 1131600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:03:12.750479 1131600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:03:12.750557 1131600 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:03:12.750567 1131600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:03:12.750599 1131600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:03:12.750670 1131600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-283961 san=[127.0.0.1 192.168.39.224 default-k8s-diff-port-283961 localhost minikube]
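Editor's note: the provision step above regenerates the docker-machine server certificate with SANs for 127.0.0.1, the VM IP, the machine name, localhost and minikube, signed by the profile CA. A compact Go sketch of issuing such a SAN certificate with crypto/x509; the throwaway CA, key type and validity are assumptions (minikube signs with its existing ca.pem/ca-key.pem):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA for the sketch only.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "sketchCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate with the SAN list shown in the log.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "default-k8s-diff-port-283961"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"default-k8s-diff-port-283961", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.224")},
    	}
    	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }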
	I0328 01:03:12.963182 1131600 provision.go:177] copyRemoteCerts
	I0328 01:03:12.963265 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:03:12.963313 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.965946 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.966177 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.966207 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.966347 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.966573 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.966773 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.966934 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.057477 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:03:13.083706 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0328 01:03:13.109167 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:03:13.136835 1131600 provision.go:87] duration metric: took 392.475069ms to configureAuth
	I0328 01:03:13.136867 1131600 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:03:13.137048 1131600 config.go:182] Loaded profile config "default-k8s-diff-port-283961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:03:13.137131 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.139508 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.139761 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.139792 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.139959 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.140170 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.140343 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.140502 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.140685 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:13.140873 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:13.140897 1131600 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:03:13.422372 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:03:13.422405 1131600 machine.go:97] duration metric: took 1.038857021s to provisionDockerMachine
	I0328 01:03:13.422418 1131600 start.go:293] postStartSetup for "default-k8s-diff-port-283961" (driver="kvm2")
	I0328 01:03:13.422428 1131600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:03:13.422456 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.422788 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:03:13.422819 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.425539 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.425865 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.425894 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.426023 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.426225 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.426407 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.426577 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.511874 1131600 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:03:13.516643 1131600 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:03:13.516673 1131600 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:03:13.516749 1131600 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:03:13.516846 1131600 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:03:13.516969 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:03:13.529004 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:13.557244 1131600 start.go:296] duration metric: took 134.810243ms for postStartSetup
	I0328 01:03:13.557289 1131600 fix.go:56] duration metric: took 20.165726422s for fixHost
	I0328 01:03:13.557313 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.560216 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.560585 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.560623 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.560803 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.561050 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.561188 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.561303 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.561552 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:13.561742 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:13.561757 1131600 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0328 01:03:13.671545 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587793.617322674
	
	I0328 01:03:13.671570 1131600 fix.go:216] guest clock: 1711587793.617322674
	I0328 01:03:13.671578 1131600 fix.go:229] Guest: 2024-03-28 01:03:13.617322674 +0000 UTC Remote: 2024-03-28 01:03:13.55729386 +0000 UTC m=+187.934897846 (delta=60.028814ms)
	I0328 01:03:13.671632 1131600 fix.go:200] guest clock delta is within tolerance: 60.028814ms
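Editor's note: the guest-clock check compares the VM's `date +%s.%N` output against the host clock and only proceeds when the delta is inside tolerance (here about 60ms); a larger skew would force a clock sync. A tiny Go sketch of the comparison, using the two timestamps from the log (the tolerance value is an assumption):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Guest time as reported by `date +%s.%N`, host time from the local clock.
    	guest := time.Unix(1711587793, 617322674)
    	host := time.Unix(1711587793, 557293860)

    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = 2 * time.Second // assumed threshold for this sketch
    	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
    }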
	I0328 01:03:13.671642 1131600 start.go:83] releasing machines lock for "default-k8s-diff-port-283961", held for 20.280118311s
	I0328 01:03:13.671673 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.671976 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:13.674978 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.675384 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.675436 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.675562 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.676167 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.676337 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.676436 1131600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:03:13.676501 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.676557 1131600 ssh_runner.go:195] Run: cat /version.json
	I0328 01:03:13.676578 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.679418 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679452 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679758 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.679785 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679813 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.679832 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679986 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.680089 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.680190 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.680255 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.680345 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.680410 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.680517 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.680608 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.759826 1131600 ssh_runner.go:195] Run: systemctl --version
	I0328 01:03:13.796647 1131600 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:03:13.947036 1131600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:03:13.954165 1131600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:03:13.954265 1131600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:03:13.973503 1131600 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:03:13.973538 1131600 start.go:494] detecting cgroup driver to use...
	I0328 01:03:13.973629 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:03:13.997675 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:03:14.015349 1131600 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:03:14.015421 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:03:14.031099 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:03:14.046446 1131600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:03:14.186993 1131600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:03:14.351164 1131600 docker.go:233] disabling docker service ...
	I0328 01:03:14.351232 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:03:14.370629 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:03:14.387837 1131600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:03:14.544060 1131600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:03:14.707699 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:03:14.725658 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:03:14.746063 1131600 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 01:03:14.746141 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.759244 1131600 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:03:14.759317 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.773015 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.786810 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.807101 1131600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:03:14.821013 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.834181 1131600 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.861163 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.874274 1131600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:03:14.885890 1131600 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:03:14.885968 1131600 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:03:14.903142 1131600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:03:14.916364 1131600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:15.073343 1131600 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 01:03:15.218406 1131600 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:03:15.218500 1131600 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:03:15.226299 1131600 start.go:562] Will wait 60s for crictl version
	I0328 01:03:15.226373 1131600 ssh_runner.go:195] Run: which crictl
	I0328 01:03:15.232051 1131600 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:03:15.278793 1131600 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
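For reference, the "Will wait 60s for socket path /var/run/crio/crio.sock" step above is a plain poll-with-timeout on the runtime socket. A minimal standalone Go sketch of that pattern (an illustration only, not minikube's actual implementation, which runs stat over SSH on the guest) could look like this:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a socket path until it exists or the timeout
// elapses, mirroring the "Will wait 60s for socket path" step in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket is present")
}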
	I0328 01:03:15.278903 1131600 ssh_runner.go:195] Run: crio --version
	I0328 01:03:15.313408 1131600 ssh_runner.go:195] Run: crio --version
	I0328 01:03:15.351613 1131600 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0328 01:03:15.353013 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:15.355924 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:15.356306 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:15.356341 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:15.356555 1131600 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0328 01:03:15.361194 1131600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:15.380926 1131600 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:03:15.381043 1131600 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 01:03:15.381099 1131600 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:15.423322 1131600 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0328 01:03:15.423409 1131600 ssh_runner.go:195] Run: which lz4
	I0328 01:03:15.428123 1131600 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0328 01:03:15.433023 1131600 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 01:03:15.433065 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0328 01:03:13.696314 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Start
	I0328 01:03:13.696506 1130827 main.go:141] libmachine: (no-preload-248059) Ensuring networks are active...
	I0328 01:03:13.697344 1130827 main.go:141] libmachine: (no-preload-248059) Ensuring network default is active
	I0328 01:03:13.697668 1130827 main.go:141] libmachine: (no-preload-248059) Ensuring network mk-no-preload-248059 is active
	I0328 01:03:13.698009 1130827 main.go:141] libmachine: (no-preload-248059) Getting domain xml...
	I0328 01:03:13.698805 1130827 main.go:141] libmachine: (no-preload-248059) Creating domain...
	I0328 01:03:14.955922 1130827 main.go:141] libmachine: (no-preload-248059) Waiting to get IP...
	I0328 01:03:14.957088 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:14.957534 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:14.957660 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:14.957533 1132389 retry.go:31] will retry after 222.894093ms: waiting for machine to come up
	I0328 01:03:15.182078 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:15.182541 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:15.182580 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:15.182528 1132389 retry.go:31] will retry after 263.74163ms: waiting for machine to come up
	I0328 01:03:15.448081 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:15.448653 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:15.448684 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:15.448586 1132389 retry.go:31] will retry after 444.066222ms: waiting for machine to come up
	I0328 01:03:15.894141 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:15.894695 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:15.894732 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:15.894650 1132389 retry.go:31] will retry after 469.421771ms: waiting for machine to come up
	I0328 01:03:14.413443 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:16.418789 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:15.568507 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:16.068210 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:16.568761 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:17.067929 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:17.568403 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:18.068454 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:18.568086 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:19.068049 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:19.569020 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:20.068068 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:17.139682 1131600 crio.go:462] duration metric: took 1.71160157s to copy over tarball
	I0328 01:03:17.139764 1131600 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 01:03:19.581198 1131600 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.441406061s)
	I0328 01:03:19.581229 1131600 crio.go:469] duration metric: took 2.441510253s to extract the tarball
	I0328 01:03:19.581241 1131600 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0328 01:03:19.620964 1131600 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:19.666765 1131600 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 01:03:19.666791 1131600 cache_images.go:84] Images are preloaded, skipping loading
	I0328 01:03:19.666802 1131600 kubeadm.go:928] updating node { 192.168.39.224 8444 v1.29.3 crio true true} ...
	I0328 01:03:19.666921 1131600 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-283961 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.224
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:03:19.666987 1131600 ssh_runner.go:195] Run: crio config
	I0328 01:03:19.716082 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:03:19.716106 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:19.716115 1131600 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:03:19.716139 1131600 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.224 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-283961 NodeName:default-k8s-diff-port-283961 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.224"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.224 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 01:03:19.716323 1131600 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.224
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-283961"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.224
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.224"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 01:03:19.716399 1131600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 01:03:19.727826 1131600 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:03:19.727913 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:03:19.738525 1131600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0328 01:03:19.756732 1131600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:03:19.776665 1131600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
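The three "scp memory" lines above write the kubelet systemd drop-in, the kubelet unit, and the generated kubeadm.yaml onto the node. As a rough illustration of how the drop-in shown earlier in the log could be rendered from the node parameters, here is a small Go sketch using text/template; the field names and the template struct are illustrative assumptions, not minikube's real data types:

package main

import (
	"os"
	"text/template"
)

// kubeletUnit is a simplified version of the drop-in shown in the log; the
// template fields are illustrative, not minikube's actual structure.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	data := struct {
		BinDir, NodeName, NodeIP string
	}{
		BinDir:   "/var/lib/minikube/binaries/v1.29.3",
		NodeName: "default-k8s-diff-port-283961",
		NodeIP:   "192.168.39.224",
	}
	// Render to stdout; the flow in the log then copies the rendered bytes
	// to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on the node.
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}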
	I0328 01:03:19.795756 1131600 ssh_runner.go:195] Run: grep 192.168.39.224	control-plane.minikube.internal$ /etc/hosts
	I0328 01:03:19.800097 1131600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.224	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:19.813019 1131600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:19.946740 1131600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:03:19.964216 1131600 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961 for IP: 192.168.39.224
	I0328 01:03:19.964244 1131600 certs.go:194] generating shared ca certs ...
	I0328 01:03:19.964262 1131600 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:19.964448 1131600 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:03:19.964524 1131600 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:03:19.964538 1131600 certs.go:256] generating profile certs ...
	I0328 01:03:19.964648 1131600 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/client.key
	I0328 01:03:19.964735 1131600 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/apiserver.key.22bfb146
	I0328 01:03:19.964810 1131600 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/proxy-client.key
	I0328 01:03:19.964956 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:03:19.965008 1131600 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:03:19.965021 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:03:19.965058 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:03:19.965091 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:03:19.965113 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:03:19.965154 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:19.966026 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:03:19.998578 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:03:20.042666 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:03:20.075405 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:03:20.117888 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0328 01:03:20.145160 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:03:20.178207 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:03:20.208610 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:03:20.235356 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:03:20.262434 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:03:20.291315 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:03:20.318034 1131600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:03:20.337627 1131600 ssh_runner.go:195] Run: openssl version
	I0328 01:03:20.344242 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:03:20.360732 1131600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:20.365858 1131600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:20.365926 1131600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:20.372120 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 01:03:20.384554 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:03:20.401731 1131600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:03:20.406945 1131600 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:03:20.407024 1131600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:03:20.414661 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:03:20.427573 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:03:20.439807 1131600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:03:20.445064 1131600 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:03:20.445138 1131600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:03:20.451754 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:03:20.464988 1131600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:03:20.470461 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:03:20.477200 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:03:20.484238 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:03:20.491125 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:03:20.497888 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:03:20.504680 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
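The openssl invocations above ("-checkend 86400") verify that each control-plane certificate is still valid for at least 24 hours. The same check can be expressed directly with Go's crypto/x509; this is a generic sketch of that idea, not minikube's code path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d, roughly what "openssl x509 -checkend 86400" verifies.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}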
	I0328 01:03:20.511372 1131600 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:03:20.511477 1131600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:03:20.511542 1131600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:20.552247 1131600 cri.go:89] found id: ""
	I0328 01:03:20.552345 1131600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:03:20.564906 1131600 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:03:20.564937 1131600 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:03:20.564944 1131600 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:03:20.565002 1131600 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:03:20.576394 1131600 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:03:20.593699 1131600 kubeconfig.go:125] found "default-k8s-diff-port-283961" server: "https://192.168.39.224:8444"
	I0328 01:03:20.595978 1131600 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:03:20.609519 1131600 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.224
	I0328 01:03:20.609565 1131600 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:03:20.609583 1131600 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:03:20.609651 1131600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:20.651892 1131600 cri.go:89] found id: ""
	I0328 01:03:20.651967 1131600 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:03:20.671895 1131600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:03:16.365505 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:16.366404 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:16.366435 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:16.366360 1132389 retry.go:31] will retry after 488.383898ms: waiting for machine to come up
	I0328 01:03:16.856125 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:16.856727 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:16.856761 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:16.856626 1132389 retry.go:31] will retry after 617.77144ms: waiting for machine to come up
	I0328 01:03:17.476749 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:17.477351 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:17.477386 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:17.477282 1132389 retry.go:31] will retry after 835.951988ms: waiting for machine to come up
	I0328 01:03:18.315387 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:18.315894 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:18.315925 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:18.315848 1132389 retry.go:31] will retry after 1.405695765s: waiting for machine to come up
	I0328 01:03:19.723053 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:19.723559 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:19.723591 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:19.723473 1132389 retry.go:31] will retry after 1.555358462s: waiting for machine to come up
	I0328 01:03:18.913403 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:21.599662 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:20.568464 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:21.068983 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:21.568470 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:22.068772 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:22.568940 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:23.068907 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:23.568272 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.068055 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.568056 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:25.068006 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:20.685320 1131600 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:03:21.187521 1131600 kubeadm.go:156] found existing configuration files:
	
	I0328 01:03:21.187587 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0328 01:03:21.200463 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:03:21.200533 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:03:21.212763 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0328 01:03:21.224344 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:03:21.224419 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:03:21.235869 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0328 01:03:21.245970 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:03:21.246045 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:03:21.258589 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0328 01:03:21.270651 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:03:21.270724 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:03:21.283074 1131600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:03:21.295811 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:21.668224 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.046357 1131600 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.378083996s)
	I0328 01:03:23.046401 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.271959 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.353976 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
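The restart path above drives individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config instead of a full init. A compact Go sketch of shelling out to those phases, assuming kubeadm is on PATH and the sudo/env wrapping seen in the log is omitted for brevity:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runPhase invokes a single kubeadm init phase against the generated config,
// as the restart flow in the log does for certs, kubeconfig, kubelet-start,
// control-plane and etcd.
func runPhase(phase ...string) error {
	args := append([]string{"init", "phase"}, phase...)
	args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
	cmd := exec.Command("kubeadm", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	for _, phase := range [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	} {
		if err := runPhase(phase...); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", phase, err)
			os.Exit(1)
		}
	}
}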
	I0328 01:03:23.501611 1131600 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:03:23.501734 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.002619 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.502614 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.547383 1131600 api_server.go:72] duration metric: took 1.045771287s to wait for apiserver process to appear ...
	I0328 01:03:24.547419 1131600 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:03:24.547447 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:24.548081 1131600 api_server.go:269] stopped: https://192.168.39.224:8444/healthz: Get "https://192.168.39.224:8444/healthz": dial tcp 192.168.39.224:8444: connect: connection refused
	I0328 01:03:25.047885 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:21.279945 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:21.590947 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:21.590967 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:21.280358 1132389 retry.go:31] will retry after 1.905587467s: waiting for machine to come up
	I0328 01:03:23.187571 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:23.188214 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:23.188248 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:23.188159 1132389 retry.go:31] will retry after 2.68043246s: waiting for machine to come up
	I0328 01:03:25.871414 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:25.871997 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:25.872030 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:25.871956 1132389 retry.go:31] will retry after 2.689404788s: waiting for machine to come up
	I0328 01:03:23.913816 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:26.413616 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
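The repeated pod_ready.go lines above are polling the metrics-server pod for its Ready condition. A short client-go sketch of the same check is shown below; the kubeconfig loading and error handling are simplified assumptions, not the test harness's actual code:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod has the Ready condition set to True,
// which is what the pod_ready.go lines keep waiting for.
func podReady(clientset *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := clientset.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ready, err := podReady(clientset, "kube-system", "metrics-server-57f55c9bc5-swsxp")
	fmt.Println(ready, err)
}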
	I0328 01:03:27.352533 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:03:27.352570 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:03:27.352589 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:27.453408 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:27.453448 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:27.547781 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:27.552703 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:27.552738 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:28.048135 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:28.053291 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:28.053322 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:28.548374 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:28.553141 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:28.553178 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:29.047609 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:29.053027 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 200:
	ok
	I0328 01:03:29.060710 1131600 api_server.go:141] control plane version: v1.29.3
	I0328 01:03:29.060747 1131600 api_server.go:131] duration metric: took 4.513320481s to wait for apiserver health ...
	I0328 01:03:29.060757 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:03:29.060764 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:29.062763 1131600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:03:25.568927 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:26.068371 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:26.568107 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:27.068037 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:27.567985 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:28.068036 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:28.568843 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:29.068483 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:29.568942 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:30.068849 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:29.064492 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:03:29.089164 1131600 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:03:29.115071 1131600 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:03:29.126819 1131600 system_pods.go:59] 8 kube-system pods found
	I0328 01:03:29.126871 1131600 system_pods.go:61] "coredns-76f75df574-79cdj" [48ffe344-a386-4904-a73e-56e3ce0a8bef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:03:29.126885 1131600 system_pods.go:61] "etcd-default-k8s-diff-port-283961" [1d8fc768-e39c-4c96-bd65-2ae76fc9c6ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 01:03:29.126898 1131600 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-283961" [7c5c9f85-f16f-4248-8d2d-73c1ed2b0128] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 01:03:29.126912 1131600 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-283961" [2e943e7b-5506-4797-9e77-4a33e06056fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 01:03:29.126931 1131600 system_pods.go:61] "kube-proxy-d776v" [c1c86f61-b074-4a51-89e6-17c7b1076748] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:03:29.126944 1131600 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-283961" [8a840579-4145-4b68-ab3f-b1ebd3d63e81] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 01:03:29.126956 1131600 system_pods.go:61] "metrics-server-57f55c9bc5-w4ww4" [6d60f9e6-8ac7-4fad-91dc-61520586666c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:03:29.126968 1131600 system_pods.go:61] "storage-provisioner" [2b5e2e68-7e7c-46ec-bcec-ff9b01cbb8d9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:03:29.126979 1131600 system_pods.go:74] duration metric: took 11.875076ms to wait for pod list to return data ...
	I0328 01:03:29.126992 1131600 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:03:29.130927 1131600 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:03:29.130971 1131600 node_conditions.go:123] node cpu capacity is 2
	I0328 01:03:29.130986 1131600 node_conditions.go:105] duration metric: took 3.984383ms to run NodePressure ...
	I0328 01:03:29.131011 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:29.421513 1131600 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 01:03:29.426043 1131600 kubeadm.go:733] kubelet initialised
	I0328 01:03:29.426104 1131600 kubeadm.go:734] duration metric: took 4.524275ms waiting for restarted kubelet to initialise ...
	I0328 01:03:29.426114 1131600 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:03:29.432378 1131600 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-79cdj" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:28.563249 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:28.563778 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:28.563808 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:28.563718 1132389 retry.go:31] will retry after 2.919225956s: waiting for machine to come up
	I0328 01:03:28.913653 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:30.914379 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:31.484584 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.485027 1130827 main.go:141] libmachine: (no-preload-248059) Found IP for machine: 192.168.61.107
	I0328 01:03:31.485048 1130827 main.go:141] libmachine: (no-preload-248059) Reserving static IP address...
	I0328 01:03:31.485065 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has current primary IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.485584 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "no-preload-248059", mac: "52:54:00:58:33:e2", ip: "192.168.61.107"} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.485617 1130827 main.go:141] libmachine: (no-preload-248059) Reserved static IP address: 192.168.61.107
	I0328 01:03:31.485638 1130827 main.go:141] libmachine: (no-preload-248059) DBG | skip adding static IP to network mk-no-preload-248059 - found existing host DHCP lease matching {name: "no-preload-248059", mac: "52:54:00:58:33:e2", ip: "192.168.61.107"}
	I0328 01:03:31.485651 1130827 main.go:141] libmachine: (no-preload-248059) Waiting for SSH to be available...
	I0328 01:03:31.485671 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Getting to WaitForSSH function...
	I0328 01:03:31.487909 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.488293 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.488322 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.488469 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Using SSH client type: external
	I0328 01:03:31.488506 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa (-rw-------)
	I0328 01:03:31.488531 1130827 main.go:141] libmachine: (no-preload-248059) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:03:31.488541 1130827 main.go:141] libmachine: (no-preload-248059) DBG | About to run SSH command:
	I0328 01:03:31.488555 1130827 main.go:141] libmachine: (no-preload-248059) DBG | exit 0
	I0328 01:03:31.618358 1130827 main.go:141] libmachine: (no-preload-248059) DBG | SSH cmd err, output: <nil>: 
	I0328 01:03:31.618786 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetConfigRaw
	I0328 01:03:31.619494 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:31.622183 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.622555 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.622584 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.622889 1130827 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/config.json ...
	I0328 01:03:31.623120 1130827 machine.go:94] provisionDockerMachine start ...
	I0328 01:03:31.623147 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:31.623400 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:31.626078 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.626432 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.626458 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.626663 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:31.626864 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.627031 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.627179 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:31.627380 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:31.627595 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:31.627611 1130827 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:03:31.739662 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:03:31.739699 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:03:31.740049 1130827 buildroot.go:166] provisioning hostname "no-preload-248059"
	I0328 01:03:31.740086 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:03:31.740421 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:31.743410 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.743776 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.743811 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.744001 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:31.744212 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.744394 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.744515 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:31.744669 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:31.744846 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:31.744860 1130827 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-248059 && echo "no-preload-248059" | sudo tee /etc/hostname
	I0328 01:03:31.869330 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-248059
	
	I0328 01:03:31.869368 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:31.872451 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.872817 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.872868 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.873159 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:31.873405 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.873632 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.873803 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:31.873982 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:31.874220 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:31.874268 1130827 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-248059' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-248059/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-248059' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:03:31.997509 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:03:31.997543 1130827 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:03:31.997565 1130827 buildroot.go:174] setting up certificates
	I0328 01:03:31.997573 1130827 provision.go:84] configureAuth start
	I0328 01:03:31.997583 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:03:31.997870 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:32.000739 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.001127 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.001162 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.001306 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.003571 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.003958 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.003988 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.004162 1130827 provision.go:143] copyHostCerts
	I0328 01:03:32.004246 1130827 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:03:32.004261 1130827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:03:32.004329 1130827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:03:32.004442 1130827 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:03:32.004454 1130827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:03:32.004486 1130827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:03:32.004562 1130827 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:03:32.004572 1130827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:03:32.004602 1130827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:03:32.004667 1130827 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.no-preload-248059 san=[127.0.0.1 192.168.61.107 localhost minikube no-preload-248059]
	I0328 01:03:32.206585 1130827 provision.go:177] copyRemoteCerts
	I0328 01:03:32.206657 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:03:32.206691 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.210170 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.210636 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.210676 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.210979 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.211187 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.211364 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.211564 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:32.305858 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:03:32.337654 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0328 01:03:32.368942 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0328 01:03:32.401639 1130827 provision.go:87] duration metric: took 404.051415ms to configureAuth
	I0328 01:03:32.401669 1130827 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:03:32.401936 1130827 config.go:182] Loaded profile config "no-preload-248059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0328 01:03:32.402025 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.404890 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.405352 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.405387 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.405588 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.405858 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.406091 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.406303 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.406510 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:32.406731 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:32.406759 1130827 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:03:32.697738 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:03:32.697768 1130827 machine.go:97] duration metric: took 1.074632092s to provisionDockerMachine
	I0328 01:03:32.697781 1130827 start.go:293] postStartSetup for "no-preload-248059" (driver="kvm2")
	I0328 01:03:32.697795 1130827 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:03:32.697812 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.698263 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:03:32.698298 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.701020 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.701421 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.701450 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.701609 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.701837 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.702010 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.702188 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:32.790670 1130827 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:03:32.795098 1130827 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:03:32.795131 1130827 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:03:32.795222 1130827 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:03:32.795297 1130827 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:03:32.795402 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:03:32.806276 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:32.832753 1130827 start.go:296] duration metric: took 134.954685ms for postStartSetup
	I0328 01:03:32.832801 1130827 fix.go:56] duration metric: took 19.16097847s for fixHost
	I0328 01:03:32.832825 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.835830 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.836199 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.836237 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.836472 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.836707 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.836949 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.837104 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.837339 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:32.837551 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:32.837563 1130827 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 01:03:32.947440 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587812.922631180
	
	I0328 01:03:32.947477 1130827 fix.go:216] guest clock: 1711587812.922631180
	I0328 01:03:32.947486 1130827 fix.go:229] Guest: 2024-03-28 01:03:32.92263118 +0000 UTC Remote: 2024-03-28 01:03:32.832804811 +0000 UTC m=+356.715929719 (delta=89.826369ms)
	I0328 01:03:32.947507 1130827 fix.go:200] guest clock delta is within tolerance: 89.826369ms
	I0328 01:03:32.947512 1130827 start.go:83] releasing machines lock for "no-preload-248059", held for 19.275724068s
	I0328 01:03:32.947531 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.947805 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:32.950439 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.950814 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.950844 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.950992 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.951517 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.951707 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.951809 1130827 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:03:32.951852 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.951938 1130827 ssh_runner.go:195] Run: cat /version.json
	I0328 01:03:32.951964 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.954721 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955058 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955135 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.955165 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955314 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.955473 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.955512 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.955538 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955622 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.955698 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.955809 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:32.955859 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.956001 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.956134 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:33.079381 1130827 ssh_runner.go:195] Run: systemctl --version
	I0328 01:03:33.086184 1130827 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:03:33.241799 1130827 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:03:33.248779 1130827 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:03:33.248893 1130827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:03:33.267944 1130827 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:03:33.267977 1130827 start.go:494] detecting cgroup driver to use...
	I0328 01:03:33.268082 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:03:33.286132 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:03:33.301676 1130827 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:03:33.301762 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:03:33.317202 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:03:33.333162 1130827 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:03:33.458738 1130827 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:03:33.608509 1130827 docker.go:233] disabling docker service ...
	I0328 01:03:33.608623 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:03:33.626616 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:03:33.641798 1130827 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:03:33.808865 1130827 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:03:33.962636 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:03:33.978138 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:03:34.002323 1130827 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 01:03:34.002404 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.014483 1130827 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:03:34.014589 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.028647 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.041601 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.054993 1130827 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:03:34.066671 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.079389 1130827 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.099660 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.112379 1130827 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:03:34.123050 1130827 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:03:34.123109 1130827 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:03:34.137132 1130827 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:03:34.147092 1130827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:34.282367 1130827 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 01:03:34.436510 1130827 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:03:34.436599 1130827 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:03:34.443019 1130827 start.go:562] Will wait 60s for crictl version
	I0328 01:03:34.443092 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:34.447740 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:03:34.488366 1130827 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:03:34.488469 1130827 ssh_runner.go:195] Run: crio --version
	I0328 01:03:34.520940 1130827 ssh_runner.go:195] Run: crio --version
	I0328 01:03:34.557953 1130827 out.go:177] * Preparing Kubernetes v1.30.0-beta.0 on CRI-O 1.29.1 ...
	I0328 01:03:30.568918 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:31.068097 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:31.568306 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:32.068345 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:32.568773 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:33.068072 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:33.568377 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:34.068141 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:34.568574 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:35.067986 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:31.439199 1131600 pod_ready.go:102] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:33.439575 1131600 pod_ready.go:102] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:34.559624 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:34.563089 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:34.563549 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:34.563583 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:34.563943 1130827 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0328 01:03:34.570153 1130827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:34.584566 1130827 kubeadm.go:877] updating cluster {Name:no-preload-248059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-248059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:03:34.584723 1130827 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0328 01:03:34.584786 1130827 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:34.620182 1130827 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-beta.0". assuming images are not preloaded.
	I0328 01:03:34.620215 1130827 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-beta.0 registry.k8s.io/kube-controller-manager:v1.30.0-beta.0 registry.k8s.io/kube-scheduler:v1.30.0-beta.0 registry.k8s.io/kube-proxy:v1.30.0-beta.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0328 01:03:34.620297 1130827 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:34.620312 1130827 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:34.620333 1130827 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:34.620301 1130827 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.620374 1130827 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:34.620401 1130827 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0328 01:03:34.620481 1130827 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:34.620319 1130827 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:34.621997 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.622009 1130827 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:34.622009 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:34.622052 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:34.621997 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:34.622115 1130827 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:34.621996 1130827 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:34.622438 1130827 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0328 01:03:34.832761 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.849045 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0328 01:03:34.868049 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:34.883941 1130827 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-beta.0" does not exist at hash "3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8" in container runtime
	I0328 01:03:34.883988 1130827 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.884047 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:34.884972 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:34.887551 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:34.899677 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:34.904772 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:35.045850 1130827 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0328 01:03:35.045906 1130827 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:35.045944 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.045959 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:35.064862 1130827 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" does not exist at hash "c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa" in container runtime
	I0328 01:03:35.064908 1130827 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:35.064959 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.066700 1130827 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" does not exist at hash "f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841" in container runtime
	I0328 01:03:35.066753 1130827 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:35.066820 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.097425 1130827 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" does not exist at hash "746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac" in container runtime
	I0328 01:03:35.097479 1130827 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:35.097546 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.097619 1130827 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0328 01:03:35.097667 1130827 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:35.097715 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.126977 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:35.126980 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.127020 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:35.127084 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:35.127090 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.127082 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:35.127161 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:35.264395 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:35.264499 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0 (exists)
	I0328 01:03:35.264534 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:35.264543 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.264506 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0328 01:03:35.264590 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.264631 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:35.264652 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0328 01:03:35.264516 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:35.264584 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0328 01:03:35.264717 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:35.264728 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:35.264768 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0328 01:03:35.269734 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0328 01:03:35.277344 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0 (exists)
	I0328 01:03:35.277580 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0328 01:03:35.279792 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0 (exists)
	I0328 01:03:35.280423 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0 (exists)
	I0328 01:03:35.535980 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:33.413451 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:35.414017 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:37.913609 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:35.568345 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:36.068227 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:36.568528 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:37.068834 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:37.568407 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:38.068142 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:38.568732 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:39.068094 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:39.568799 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:40.068973 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:35.940767 1131600 pod_ready.go:102] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:37.440919 1131600 pod_ready.go:92] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:37.440949 1131600 pod_ready.go:81] duration metric: took 8.008542386s for pod "coredns-76f75df574-79cdj" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:37.440963 1131600 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:39.452822 1131600 pod_ready.go:102] pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:40.467937 1131600 pod_ready.go:92] pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.467973 1131600 pod_ready.go:81] duration metric: took 3.027001179s for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.467987 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.491342 1131600 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.491373 1131600 pod_ready.go:81] duration metric: took 23.375914ms for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.491387 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.511379 1131600 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.511414 1131600 pod_ready.go:81] duration metric: took 20.018124ms for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.511430 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d776v" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.526689 1131600 pod_ready.go:92] pod "kube-proxy-d776v" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.526724 1131600 pod_ready.go:81] duration metric: took 15.28424ms for pod "kube-proxy-d776v" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.526738 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:37.431690 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0: (2.167073369s)
	I0328 01:03:37.431729 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 from cache
	I0328 01:03:37.431755 1130827 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0328 01:03:37.431764 1130827 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.895749302s)
	I0328 01:03:37.431805 1130827 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0328 01:03:37.431811 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0328 01:03:37.431837 1130827 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:37.431870 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:39.913936 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:42.412656 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:40.568441 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:41.068790 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:41.568919 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:42.068166 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:42.568012 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:43.068027 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:43.568916 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:44.067940 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:44.568074 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:45.068786 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:42.535179 1131600 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:44.034128 1131600 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:44.034164 1131600 pod_ready.go:81] duration metric: took 3.507415677s for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:44.034175 1131600 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:41.523268 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.091420228s)
	I0328 01:03:41.523305 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0328 01:03:41.523330 1130827 ssh_runner.go:235] Completed: which crictl: (4.091431875s)
	I0328 01:03:41.523345 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:41.523412 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:41.523445 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:41.567312 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0328 01:03:41.567455 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0328 01:03:44.336954 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0: (2.813479223s)
	I0328 01:03:44.336991 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 from cache
	I0328 01:03:44.336994 1130827 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.769509386s)
	I0328 01:03:44.337020 1130827 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0328 01:03:44.337035 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0328 01:03:44.337080 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0328 01:03:44.414767 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:46.415110 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:45.568662 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:46.068299 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:46.568793 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:47.068929 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:47.568250 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:48.068910 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:48.568138 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:49.068128 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:49.568153 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:50.068075 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:46.042489 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:48.541049 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:50.547355 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:46.297705 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.960592772s)
	I0328 01:03:46.297744 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0328 01:03:46.297776 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:46.297828 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:47.769522 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0: (1.471661236s)
	I0328 01:03:47.769569 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 from cache
	I0328 01:03:47.769602 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:47.769656 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:50.231843 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0: (2.462162757s)
	I0328 01:03:50.231876 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 from cache
	I0328 01:03:50.231902 1130827 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0328 01:03:50.231956 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0328 01:03:48.913184 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:51.412474 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:50.568929 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:51.068812 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:51.568899 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:52.068890 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:52.568751 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:53.068406 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:53.568466 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.068039 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.568745 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:55.068690 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:53.041197 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:55.041977 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:51.188382 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0328 01:03:51.188441 1130827 cache_images.go:123] Successfully loaded all cached images
	I0328 01:03:51.188448 1130827 cache_images.go:92] duration metric: took 16.568214969s to LoadCachedImages
	I0328 01:03:51.188464 1130827 kubeadm.go:928] updating node { 192.168.61.107 8443 v1.30.0-beta.0 crio true true} ...
	I0328 01:03:51.188628 1130827 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-248059 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-248059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:03:51.188710 1130827 ssh_runner.go:195] Run: crio config
	I0328 01:03:51.237071 1130827 cni.go:84] Creating CNI manager for ""
	I0328 01:03:51.237099 1130827 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:51.237109 1130827 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:03:51.237131 1130827 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.107 APIServerPort:8443 KubernetesVersion:v1.30.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-248059 NodeName:no-preload-248059 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 01:03:51.237263 1130827 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-248059"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 01:03:51.237330 1130827 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-beta.0
	I0328 01:03:51.248044 1130827 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:03:51.248113 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:03:51.257854 1130827 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0328 01:03:51.276307 1130827 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0328 01:03:51.294698 1130827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0328 01:03:51.313297 1130827 ssh_runner.go:195] Run: grep 192.168.61.107	control-plane.minikube.internal$ /etc/hosts
	I0328 01:03:51.317668 1130827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:51.330478 1130827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:51.457500 1130827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:03:51.484463 1130827 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059 for IP: 192.168.61.107
	I0328 01:03:51.484493 1130827 certs.go:194] generating shared ca certs ...
	I0328 01:03:51.484518 1130827 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:51.484718 1130827 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:03:51.484768 1130827 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:03:51.484781 1130827 certs.go:256] generating profile certs ...
	I0328 01:03:51.484910 1130827 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/client.key
	I0328 01:03:51.484989 1130827 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/apiserver.key.85d037b2
	I0328 01:03:51.485040 1130827 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/proxy-client.key
	I0328 01:03:51.485196 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:03:51.485243 1130827 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:03:51.485257 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:03:51.485292 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:03:51.485327 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:03:51.485357 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:03:51.485416 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:51.486614 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:03:51.537554 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:03:51.587256 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:03:51.620264 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:03:51.652100 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0328 01:03:51.694388 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:03:51.720913 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:03:51.747141 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0328 01:03:51.776370 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:03:51.803168 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:03:51.831138 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:03:51.857272 1130827 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:03:51.876070 1130827 ssh_runner.go:195] Run: openssl version
	I0328 01:03:51.882197 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:03:51.893560 1130827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:03:51.898293 1130827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:03:51.898361 1130827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:03:51.904549 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:03:51.918175 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:03:51.930387 1130827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:03:51.935610 1130827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:03:51.935691 1130827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:03:51.942127 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:03:51.954252 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:03:51.966727 1130827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:51.971742 1130827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:51.971810 1130827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:51.978082 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 01:03:51.992233 1130827 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:03:51.997556 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:03:52.004178 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:03:52.010666 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:03:52.017076 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:03:52.023334 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:03:52.029980 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0328 01:03:52.036395 1130827 kubeadm.go:391] StartCluster: {Name:no-preload-248059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-248059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:03:52.036483 1130827 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:03:52.036539 1130827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:52.080486 1130827 cri.go:89] found id: ""
	I0328 01:03:52.080580 1130827 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:03:52.094552 1130827 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:03:52.094583 1130827 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:03:52.094599 1130827 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:03:52.094650 1130827 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:03:52.107008 1130827 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:03:52.108200 1130827 kubeconfig.go:125] found "no-preload-248059" server: "https://192.168.61.107:8443"
	I0328 01:03:52.110536 1130827 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:03:52.122998 1130827 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.107
	I0328 01:03:52.123044 1130827 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:03:52.123090 1130827 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:03:52.123170 1130827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:52.165568 1130827 cri.go:89] found id: ""
	I0328 01:03:52.165666 1130827 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:03:52.183930 1130827 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:03:52.195188 1130827 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:03:52.195215 1130827 kubeadm.go:156] found existing configuration files:
	
	I0328 01:03:52.195271 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:03:52.205872 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:03:52.205932 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:03:52.216481 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:03:52.226719 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:03:52.226787 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:03:52.238852 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:03:52.250272 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:03:52.250341 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:03:52.262474 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:03:52.273981 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:03:52.274059 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:03:52.286028 1130827 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:03:52.297016 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:52.406981 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.521529 1130827 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.114505514s)
	I0328 01:03:53.521569 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.735728 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.808590 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.931165 1130827 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:03:53.931281 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.432358 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.931653 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.948811 1130827 api_server.go:72] duration metric: took 1.017647613s to wait for apiserver process to appear ...
	I0328 01:03:54.948843 1130827 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:03:54.948871 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:54.949490 1130827 api_server.go:269] stopped: https://192.168.61.107:8443/healthz: Get "https://192.168.61.107:8443/healthz": dial tcp 192.168.61.107:8443: connect: connection refused
	I0328 01:03:55.449050 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:53.413775 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:55.914095 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:57.515811 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:03:57.515852 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:03:57.515872 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:57.564527 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:03:57.564560 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:03:57.949780 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:57.955515 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0328 01:03:57.955565 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0328 01:03:58.449103 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:58.456345 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0328 01:03:58.456384 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0328 01:03:58.949575 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:58.954466 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0328 01:03:58.961213 1130827 api_server.go:141] control plane version: v1.30.0-beta.0
	I0328 01:03:58.961244 1130827 api_server.go:131] duration metric: took 4.012391589s to wait for apiserver health ...
	I0328 01:03:58.961256 1130827 cni.go:84] Creating CNI manager for ""
	I0328 01:03:58.961265 1130827 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:58.963147 1130827 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:03:55.568378 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:56.068253 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:56.568989 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:57.068709 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:57.569038 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:58.068236 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:58.568386 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:59.068971 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:59.568858 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:00.067964 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:57.043266 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:59.541626 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:58.964446 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:03:58.979425 1130827 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:03:59.042826 1130827 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:03:59.060388 1130827 system_pods.go:59] 8 kube-system pods found
	I0328 01:03:59.060429 1130827 system_pods.go:61] "coredns-7db6d8ff4d-86n4s" [71402ca8-dfa7-4caf-a422-6de9f24bf9dc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:03:59.060439 1130827 system_pods.go:61] "etcd-no-preload-248059" [954b6886-b84f-4d94-bbce-7e520142eb4b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 01:03:59.060451 1130827 system_pods.go:61] "kube-apiserver-no-preload-248059" [2d3caabe-27c2-44e7-8f52-76e03f262e2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 01:03:59.060462 1130827 system_pods.go:61] "kube-controller-manager-no-preload-248059" [30b9f4aa-c9a7-4d91-8e4d-35ad32f40425] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 01:03:59.060472 1130827 system_pods.go:61] "kube-proxy-b9qpb" [7ab4cca8-0ba2-4177-84cd-c6ac045930fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:03:59.060481 1130827 system_pods.go:61] "kube-scheduler-no-preload-248059" [4d9e45e3-d990-40d4-a4be-8384c39eb9b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 01:03:59.060493 1130827 system_pods.go:61] "metrics-server-569cc877fc-cvnrj" [063a47ac-9ceb-4521-9dde-aca02ec5e0d8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:03:59.060508 1130827 system_pods.go:61] "storage-provisioner" [0a0eb2d3-a426-4b76-8009-1a0a0e0312bb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:03:59.060518 1130827 system_pods.go:74] duration metric: took 17.666067ms to wait for pod list to return data ...
	I0328 01:03:59.060533 1130827 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:03:59.065018 1130827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:03:59.065054 1130827 node_conditions.go:123] node cpu capacity is 2
	I0328 01:03:59.065071 1130827 node_conditions.go:105] duration metric: took 4.531253ms to run NodePressure ...
	I0328 01:03:59.065097 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:59.454609 1130827 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 01:03:59.459707 1130827 kubeadm.go:733] kubelet initialised
	I0328 01:03:59.459730 1130827 kubeadm.go:734] duration metric: took 5.09757ms waiting for restarted kubelet to initialise ...
	I0328 01:03:59.459739 1130827 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:03:59.465352 1130827 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.471020 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.471054 1130827 pod_ready.go:81] duration metric: took 5.676291ms for pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.471067 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.471075 1130827 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.476393 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "etcd-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.476421 1130827 pod_ready.go:81] duration metric: took 5.333391ms for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.476430 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "etcd-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.476436 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.485889 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "kube-apiserver-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.485924 1130827 pod_ready.go:81] duration metric: took 9.481204ms for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.485937 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "kube-apiserver-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.485957 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.491064 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.491095 1130827 pod_ready.go:81] duration metric: took 5.125981ms for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.491107 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.491116 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-b9qpb" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.858724 1130827 pod_ready.go:92] pod "kube-proxy-b9qpb" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:59.858753 1130827 pod_ready.go:81] duration metric: took 367.628034ms for pod "kube-proxy-b9qpb" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.858764 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:58.413911 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:00.913297 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:02.913414 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:00.568622 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:01.067943 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:01.567964 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:02.068537 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:02.568772 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:03.068458 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:03.568943 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:04.068085 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:04.068176 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:04.112601 1131323 cri.go:89] found id: ""
	I0328 01:04:04.112631 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.112642 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:04.112650 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:04.112726 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:04.151837 1131323 cri.go:89] found id: ""
	I0328 01:04:04.151873 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.151885 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:04.151894 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:04.151965 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:04.193411 1131323 cri.go:89] found id: ""
	I0328 01:04:04.193451 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.193463 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:04.193473 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:04.193545 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:04.239623 1131323 cri.go:89] found id: ""
	I0328 01:04:04.239652 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.239662 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:04.239673 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:04.239732 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:04.279561 1131323 cri.go:89] found id: ""
	I0328 01:04:04.279600 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.279615 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:04.279627 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:04.279708 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:04.318680 1131323 cri.go:89] found id: ""
	I0328 01:04:04.318710 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.318722 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:04.318731 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:04.318797 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:04.356486 1131323 cri.go:89] found id: ""
	I0328 01:04:04.356514 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.356523 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:04.356530 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:04.356586 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:04.394281 1131323 cri.go:89] found id: ""
	I0328 01:04:04.394319 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.394334 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:04.394348 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:04.394364 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:04.458688 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:04.458729 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:04.501399 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:04.501440 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:04.556183 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:04.556225 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:04.571392 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:04.571427 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:04.709967 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:02.041555 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:04.541464 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:01.866183 1130827 pod_ready.go:102] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:03.868706 1130827 pod_ready.go:102] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:04.915667 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:07.412548 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:07.210550 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:07.224274 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:07.224345 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:07.262604 1131323 cri.go:89] found id: ""
	I0328 01:04:07.262640 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.262665 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:07.262674 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:07.262763 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:07.296868 1131323 cri.go:89] found id: ""
	I0328 01:04:07.296907 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.296918 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:07.296926 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:07.296992 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:07.333110 1131323 cri.go:89] found id: ""
	I0328 01:04:07.333149 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.333162 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:07.333171 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:07.333240 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:07.371138 1131323 cri.go:89] found id: ""
	I0328 01:04:07.371168 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.371186 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:07.371195 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:07.371259 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:07.412197 1131323 cri.go:89] found id: ""
	I0328 01:04:07.412230 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.412242 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:07.412251 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:07.412331 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:07.457021 1131323 cri.go:89] found id: ""
	I0328 01:04:07.457052 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.457070 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:07.457080 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:07.457153 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:07.517996 1131323 cri.go:89] found id: ""
	I0328 01:04:07.518026 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.518034 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:07.518040 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:07.518111 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:07.556829 1131323 cri.go:89] found id: ""
	I0328 01:04:07.556856 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.556865 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:07.556875 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:07.556890 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:07.572234 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:07.572270 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:07.648615 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:07.648641 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:07.648658 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:07.719617 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:07.719665 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:07.764053 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:07.764097 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:10.319480 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:06.542160 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:08.550725 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:06.366150 1130827 pod_ready.go:102] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:07.365200 1130827 pod_ready.go:92] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:04:07.365233 1130827 pod_ready.go:81] duration metric: took 7.506461201s for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:04:07.365256 1130827 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace to be "Ready" ...
	I0328 01:04:09.373694 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:09.413378 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:11.913400 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:10.334347 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:10.335893 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:10.375231 1131323 cri.go:89] found id: ""
	I0328 01:04:10.375263 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.375274 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:10.375281 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:10.375353 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:10.413652 1131323 cri.go:89] found id: ""
	I0328 01:04:10.413706 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.413726 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:10.413736 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:10.413805 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:10.449546 1131323 cri.go:89] found id: ""
	I0328 01:04:10.449588 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.449597 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:10.449604 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:10.449658 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:10.487518 1131323 cri.go:89] found id: ""
	I0328 01:04:10.487556 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.487570 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:10.487579 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:10.487663 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:10.525088 1131323 cri.go:89] found id: ""
	I0328 01:04:10.525124 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.525137 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:10.525146 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:10.525213 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:10.567177 1131323 cri.go:89] found id: ""
	I0328 01:04:10.567209 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.567221 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:10.567231 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:10.567302 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:10.609440 1131323 cri.go:89] found id: ""
	I0328 01:04:10.609474 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.609485 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:10.609492 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:10.609549 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:10.652466 1131323 cri.go:89] found id: ""
	I0328 01:04:10.652502 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.652516 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:10.652529 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:10.652546 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:10.737406 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:10.737451 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:10.786955 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:10.786991 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:10.843072 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:10.843114 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:10.857209 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:10.857244 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:10.950885 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:13.451542 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:13.465833 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:13.465924 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:13.503353 1131323 cri.go:89] found id: ""
	I0328 01:04:13.503386 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.503398 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:13.503407 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:13.503474 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:13.543175 1131323 cri.go:89] found id: ""
	I0328 01:04:13.543208 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.543220 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:13.543229 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:13.543287 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:13.580796 1131323 cri.go:89] found id: ""
	I0328 01:04:13.580829 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.580840 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:13.580848 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:13.580900 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:13.619483 1131323 cri.go:89] found id: ""
	I0328 01:04:13.619516 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.619529 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:13.619539 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:13.619596 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:13.654651 1131323 cri.go:89] found id: ""
	I0328 01:04:13.654683 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.654697 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:13.654705 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:13.654774 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:13.691763 1131323 cri.go:89] found id: ""
	I0328 01:04:13.691794 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.691805 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:13.691813 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:13.691881 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:13.730580 1131323 cri.go:89] found id: ""
	I0328 01:04:13.730614 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.730627 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:13.730635 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:13.730694 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:13.767802 1131323 cri.go:89] found id: ""
	I0328 01:04:13.767834 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.767848 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:13.767860 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:13.767876 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:13.815612 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:13.815653 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:13.870945 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:13.870991 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:13.891456 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:13.891506 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:14.022124 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:14.022163 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:14.022187 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:11.041196 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:13.044490 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:15.541942 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:11.873574 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:13.875251 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:14.412081 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:16.412837 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:16.604087 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:16.618872 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:16.618971 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:16.665628 1131323 cri.go:89] found id: ""
	I0328 01:04:16.665661 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.665675 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:16.665683 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:16.665780 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:16.703727 1131323 cri.go:89] found id: ""
	I0328 01:04:16.703758 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.703768 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:16.703775 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:16.703835 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:16.741425 1131323 cri.go:89] found id: ""
	I0328 01:04:16.741455 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.741464 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:16.741470 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:16.741524 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:16.782333 1131323 cri.go:89] found id: ""
	I0328 01:04:16.782373 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.782387 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:16.782398 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:16.782469 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:16.820321 1131323 cri.go:89] found id: ""
	I0328 01:04:16.820355 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.820364 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:16.820372 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:16.820429 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:16.861091 1131323 cri.go:89] found id: ""
	I0328 01:04:16.861130 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.861144 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:16.861154 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:16.861226 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:16.901347 1131323 cri.go:89] found id: ""
	I0328 01:04:16.901394 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.901408 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:16.901418 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:16.901491 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:16.944027 1131323 cri.go:89] found id: ""
	I0328 01:04:16.944067 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.944080 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:16.944093 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:16.944110 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:16.959104 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:16.959151 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:17.035432 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:17.035464 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:17.035480 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:17.116236 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:17.116276 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:17.159321 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:17.159370 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:19.711326 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:19.726016 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:19.726094 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:19.776639 1131323 cri.go:89] found id: ""
	I0328 01:04:19.776676 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.776690 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:19.776700 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:19.776782 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:19.817849 1131323 cri.go:89] found id: ""
	I0328 01:04:19.817887 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.817897 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:19.817904 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:19.817981 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:19.855055 1131323 cri.go:89] found id: ""
	I0328 01:04:19.855089 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.855102 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:19.855110 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:19.855177 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:19.895296 1131323 cri.go:89] found id: ""
	I0328 01:04:19.895332 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.895346 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:19.895354 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:19.895414 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:19.930936 1131323 cri.go:89] found id: ""
	I0328 01:04:19.930968 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.930980 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:19.930989 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:19.931067 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:19.968573 1131323 cri.go:89] found id: ""
	I0328 01:04:19.968610 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.968623 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:19.968632 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:19.968693 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:20.006130 1131323 cri.go:89] found id: ""
	I0328 01:04:20.006180 1131323 logs.go:276] 0 containers: []
	W0328 01:04:20.006195 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:20.006203 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:20.006304 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:20.043646 1131323 cri.go:89] found id: ""
	I0328 01:04:20.043678 1131323 logs.go:276] 0 containers: []
	W0328 01:04:20.043689 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:20.043701 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:20.043717 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:20.058728 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:20.058761 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:20.136392 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:20.136417 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:20.136431 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:20.214971 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:20.215015 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:20.255002 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:20.255047 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:18.041868 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:20.542175 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:16.372600 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:18.373203 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:20.374228 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:18.913596 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:20.913978 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:22.914777 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:22.810078 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:22.824083 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:22.824169 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:22.862037 1131323 cri.go:89] found id: ""
	I0328 01:04:22.862066 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.862074 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:22.862081 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:22.862141 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:22.901625 1131323 cri.go:89] found id: ""
	I0328 01:04:22.901658 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.901670 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:22.901679 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:22.901752 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:22.938858 1131323 cri.go:89] found id: ""
	I0328 01:04:22.938891 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.938903 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:22.938912 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:22.938983 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:22.978781 1131323 cri.go:89] found id: ""
	I0328 01:04:22.978818 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.978829 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:22.978837 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:22.978910 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:23.016844 1131323 cri.go:89] found id: ""
	I0328 01:04:23.016882 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.016895 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:23.016904 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:23.016975 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:23.058456 1131323 cri.go:89] found id: ""
	I0328 01:04:23.058508 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.058522 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:23.058531 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:23.058604 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:23.099368 1131323 cri.go:89] found id: ""
	I0328 01:04:23.099399 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.099408 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:23.099420 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:23.099492 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:23.135593 1131323 cri.go:89] found id: ""
	I0328 01:04:23.135634 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.135653 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:23.135665 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:23.135679 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:23.191215 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:23.191260 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:23.206849 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:23.206884 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:23.289566 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:23.289596 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:23.289618 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:23.365429 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:23.365480 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:23.042312 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.541788 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:22.872233 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.373908 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.413591 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:27.912983 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.914883 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:25.929336 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:25.929415 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:25.969452 1131323 cri.go:89] found id: ""
	I0328 01:04:25.969485 1131323 logs.go:276] 0 containers: []
	W0328 01:04:25.969497 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:25.969506 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:25.969573 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:26.008978 1131323 cri.go:89] found id: ""
	I0328 01:04:26.009006 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.009015 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:26.009022 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:26.009075 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:26.051110 1131323 cri.go:89] found id: ""
	I0328 01:04:26.051138 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.051146 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:26.051153 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:26.051213 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:26.088231 1131323 cri.go:89] found id: ""
	I0328 01:04:26.088262 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.088271 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:26.088277 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:26.088342 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:26.125741 1131323 cri.go:89] found id: ""
	I0328 01:04:26.125782 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.125794 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:26.125800 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:26.125867 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:26.163367 1131323 cri.go:89] found id: ""
	I0328 01:04:26.163406 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.163417 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:26.163426 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:26.163503 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:26.202302 1131323 cri.go:89] found id: ""
	I0328 01:04:26.202340 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.202355 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:26.202364 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:26.202422 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:26.240880 1131323 cri.go:89] found id: ""
	I0328 01:04:26.240911 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.240921 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:26.240931 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:26.240943 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:26.283151 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:26.283180 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:26.341313 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:26.341350 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:26.356762 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:26.356791 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:26.428033 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:26.428054 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:26.428066 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:29.006332 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:29.020634 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:29.020745 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:29.060812 1131323 cri.go:89] found id: ""
	I0328 01:04:29.060843 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.060852 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:29.060859 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:29.060916 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:29.100110 1131323 cri.go:89] found id: ""
	I0328 01:04:29.100139 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.100149 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:29.100155 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:29.100212 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:29.140345 1131323 cri.go:89] found id: ""
	I0328 01:04:29.140384 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.140396 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:29.140404 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:29.140479 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:29.182415 1131323 cri.go:89] found id: ""
	I0328 01:04:29.182449 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.182459 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:29.182465 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:29.182533 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:29.225177 1131323 cri.go:89] found id: ""
	I0328 01:04:29.225214 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.225225 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:29.225233 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:29.225310 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:29.265437 1131323 cri.go:89] found id: ""
	I0328 01:04:29.265471 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.265485 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:29.265493 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:29.265556 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:29.301578 1131323 cri.go:89] found id: ""
	I0328 01:04:29.301617 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.301630 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:29.301639 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:29.301719 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:29.340816 1131323 cri.go:89] found id: ""
	I0328 01:04:29.340847 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.340856 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:29.340867 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:29.340880 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:29.384658 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:29.384687 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:29.439243 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:29.439285 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:29.456179 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:29.456211 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:29.534878 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:29.534906 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:29.534927 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:28.041463 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:30.042506 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:27.872489 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:30.371109 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:29.913856 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:32.415699 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:32.115798 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:32.130464 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:32.130560 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:32.168846 1131323 cri.go:89] found id: ""
	I0328 01:04:32.168877 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.168887 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:32.168894 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:32.168952 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:32.208590 1131323 cri.go:89] found id: ""
	I0328 01:04:32.208622 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.208632 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:32.208638 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:32.208694 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:32.247323 1131323 cri.go:89] found id: ""
	I0328 01:04:32.247362 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.247375 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:32.247384 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:32.247507 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:32.285260 1131323 cri.go:89] found id: ""
	I0328 01:04:32.285293 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.285312 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:32.285319 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:32.285395 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:32.326678 1131323 cri.go:89] found id: ""
	I0328 01:04:32.326712 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.326725 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:32.326740 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:32.326823 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:32.363375 1131323 cri.go:89] found id: ""
	I0328 01:04:32.363403 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.363412 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:32.363419 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:32.363473 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:32.401410 1131323 cri.go:89] found id: ""
	I0328 01:04:32.401449 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.401462 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:32.401470 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:32.401558 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:32.438645 1131323 cri.go:89] found id: ""
	I0328 01:04:32.438680 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.438691 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:32.438703 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:32.438718 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:32.488743 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:32.488786 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:32.503908 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:32.503944 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:32.577307 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:32.577333 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:32.577350 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:32.657787 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:32.657832 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:35.201151 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:35.215313 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:35.215383 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:35.253467 1131323 cri.go:89] found id: ""
	I0328 01:04:35.253504 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.253515 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:35.253522 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:35.253593 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:35.290218 1131323 cri.go:89] found id: ""
	I0328 01:04:35.290280 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.290292 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:35.290300 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:35.290378 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:35.330714 1131323 cri.go:89] found id: ""
	I0328 01:04:35.330749 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.330757 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:35.330764 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:35.330831 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:32.542071 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:34.544163 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:32.372100 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:34.872293 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:34.913212 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:37.411734 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:35.371524 1131323 cri.go:89] found id: ""
	I0328 01:04:35.371553 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.371570 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:35.371577 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:35.371630 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:35.411610 1131323 cri.go:89] found id: ""
	I0328 01:04:35.411638 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.411646 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:35.411652 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:35.411711 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:35.456709 1131323 cri.go:89] found id: ""
	I0328 01:04:35.456745 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.456758 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:35.456766 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:35.456836 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:35.492688 1131323 cri.go:89] found id: ""
	I0328 01:04:35.492719 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.492729 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:35.492755 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:35.492811 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:35.531205 1131323 cri.go:89] found id: ""
	I0328 01:04:35.531234 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.531243 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:35.531254 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:35.531266 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:35.611803 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:35.611845 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:35.653513 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:35.653551 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:35.708030 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:35.708075 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:35.724542 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:35.724576 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:35.798624 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:38.299312 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:38.314128 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:38.314213 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:38.357728 1131323 cri.go:89] found id: ""
	I0328 01:04:38.357761 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.357779 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:38.357786 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:38.357848 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:38.394512 1131323 cri.go:89] found id: ""
	I0328 01:04:38.394541 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.394549 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:38.394558 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:38.394618 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:38.434353 1131323 cri.go:89] found id: ""
	I0328 01:04:38.434380 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.434391 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:38.434399 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:38.434466 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:38.477662 1131323 cri.go:89] found id: ""
	I0328 01:04:38.477693 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.477703 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:38.477710 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:38.477763 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:38.515014 1131323 cri.go:89] found id: ""
	I0328 01:04:38.515044 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.515053 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:38.515060 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:38.515117 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:38.558865 1131323 cri.go:89] found id: ""
	I0328 01:04:38.558899 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.558911 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:38.558920 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:38.558982 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:38.600261 1131323 cri.go:89] found id: ""
	I0328 01:04:38.600290 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.600299 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:38.600306 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:38.600366 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:38.637131 1131323 cri.go:89] found id: ""
	I0328 01:04:38.637167 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.637179 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:38.637194 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:38.637218 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:38.716032 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:38.716058 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:38.716079 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:38.804534 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:38.804578 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:38.851781 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:38.851820 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:38.910091 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:38.910125 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:37.041273 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:39.541843 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:37.372262 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:39.372555 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:39.912953 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:42.412667 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:41.425801 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:41.441072 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:41.441168 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:41.482934 1131323 cri.go:89] found id: ""
	I0328 01:04:41.482962 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.482974 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:41.482983 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:41.483063 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:41.521762 1131323 cri.go:89] found id: ""
	I0328 01:04:41.521796 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.521810 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:41.521819 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:41.521931 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:41.560814 1131323 cri.go:89] found id: ""
	I0328 01:04:41.560847 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.560857 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:41.560864 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:41.560928 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:41.601158 1131323 cri.go:89] found id: ""
	I0328 01:04:41.601189 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.601199 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:41.601206 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:41.601271 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:41.638760 1131323 cri.go:89] found id: ""
	I0328 01:04:41.638789 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.638799 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:41.638806 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:41.638861 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:41.675235 1131323 cri.go:89] found id: ""
	I0328 01:04:41.675268 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.675278 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:41.675285 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:41.675341 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:41.712918 1131323 cri.go:89] found id: ""
	I0328 01:04:41.712957 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.712972 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:41.712983 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:41.713078 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:41.750552 1131323 cri.go:89] found id: ""
	I0328 01:04:41.750582 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.750591 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:41.750601 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:41.750617 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:41.811163 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:41.811204 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:41.826502 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:41.826547 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:41.900727 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:41.900759 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:41.900777 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:41.981731 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:41.981783 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:44.525845 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:44.542301 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:44.542389 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:44.584907 1131323 cri.go:89] found id: ""
	I0328 01:04:44.584936 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.584945 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:44.584952 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:44.585007 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:44.630465 1131323 cri.go:89] found id: ""
	I0328 01:04:44.630499 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.630511 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:44.630520 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:44.630588 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:44.669095 1131323 cri.go:89] found id: ""
	I0328 01:04:44.669131 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.669143 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:44.669152 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:44.669235 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:44.708445 1131323 cri.go:89] found id: ""
	I0328 01:04:44.708484 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.708495 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:44.708502 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:44.708570 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:44.747706 1131323 cri.go:89] found id: ""
	I0328 01:04:44.747744 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.747755 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:44.747762 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:44.747822 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:44.787768 1131323 cri.go:89] found id: ""
	I0328 01:04:44.787807 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.787821 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:44.787830 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:44.787899 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:44.829018 1131323 cri.go:89] found id: ""
	I0328 01:04:44.829049 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.829059 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:44.829066 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:44.829123 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:44.874334 1131323 cri.go:89] found id: ""
	I0328 01:04:44.874374 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.874383 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:44.874393 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:44.874405 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:44.921577 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:44.921619 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:44.976660 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:44.976713 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:44.991365 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:44.991400 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:45.067595 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:45.067630 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:45.067651 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:42.042736 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:44.543288 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:41.372902 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:43.872925 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:45.873163 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:44.913827 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:47.412342 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:47.647634 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:47.663581 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:47.663687 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:47.702889 1131323 cri.go:89] found id: ""
	I0328 01:04:47.702940 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.702954 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:47.702966 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:47.703043 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:47.744995 1131323 cri.go:89] found id: ""
	I0328 01:04:47.745027 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.745037 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:47.745044 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:47.745103 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:47.785518 1131323 cri.go:89] found id: ""
	I0328 01:04:47.785550 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.785562 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:47.785572 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:47.785645 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:47.831739 1131323 cri.go:89] found id: ""
	I0328 01:04:47.831771 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.831786 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:47.831794 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:47.831867 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:47.871864 1131323 cri.go:89] found id: ""
	I0328 01:04:47.871906 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.871918 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:47.871929 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:47.872008 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:47.907899 1131323 cri.go:89] found id: ""
	I0328 01:04:47.907934 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.907946 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:47.907955 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:47.908022 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:47.946073 1131323 cri.go:89] found id: ""
	I0328 01:04:47.946107 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.946118 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:47.946127 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:47.946223 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:47.986122 1131323 cri.go:89] found id: ""
	I0328 01:04:47.986154 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.986168 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:47.986182 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:47.986198 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:48.057234 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:48.057271 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:48.109881 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:48.109926 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:48.125154 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:48.125189 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:48.208295 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:48.208327 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:48.208345 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:47.041447 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:49.542203 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:48.371275 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:50.372057 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:49.413451 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:51.414465 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:50.785126 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:50.800000 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:50.800078 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:50.839883 1131323 cri.go:89] found id: ""
	I0328 01:04:50.839911 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.839920 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:50.839927 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:50.839983 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:50.879627 1131323 cri.go:89] found id: ""
	I0328 01:04:50.879654 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.879661 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:50.879668 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:50.879734 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:50.918392 1131323 cri.go:89] found id: ""
	I0328 01:04:50.918434 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.918446 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:50.918454 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:50.918517 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:50.957198 1131323 cri.go:89] found id: ""
	I0328 01:04:50.957234 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.957248 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:50.957257 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:50.957328 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:50.997389 1131323 cri.go:89] found id: ""
	I0328 01:04:50.997424 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.997438 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:50.997446 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:50.997513 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:51.040259 1131323 cri.go:89] found id: ""
	I0328 01:04:51.040296 1131323 logs.go:276] 0 containers: []
	W0328 01:04:51.040309 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:51.040318 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:51.040389 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:51.081824 1131323 cri.go:89] found id: ""
	I0328 01:04:51.081858 1131323 logs.go:276] 0 containers: []
	W0328 01:04:51.081868 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:51.081875 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:51.081942 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:51.119742 1131323 cri.go:89] found id: ""
	I0328 01:04:51.119783 1131323 logs.go:276] 0 containers: []
	W0328 01:04:51.119796 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:51.119810 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:51.119836 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:51.173486 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:51.173529 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:51.188532 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:51.188568 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:51.269181 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:51.269207 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:51.269226 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:51.349882 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:51.349936 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:53.893562 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:53.910104 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:53.910186 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:53.951333 1131323 cri.go:89] found id: ""
	I0328 01:04:53.951375 1131323 logs.go:276] 0 containers: []
	W0328 01:04:53.951388 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:53.951397 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:53.951472 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:53.992438 1131323 cri.go:89] found id: ""
	I0328 01:04:53.992474 1131323 logs.go:276] 0 containers: []
	W0328 01:04:53.992486 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:53.992493 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:53.992561 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:54.032934 1131323 cri.go:89] found id: ""
	I0328 01:04:54.032969 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.032982 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:54.032992 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:54.033061 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:54.074670 1131323 cri.go:89] found id: ""
	I0328 01:04:54.074707 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.074777 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:54.074801 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:54.074875 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:54.111527 1131323 cri.go:89] found id: ""
	I0328 01:04:54.111555 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.111566 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:54.111573 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:54.111658 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:54.151401 1131323 cri.go:89] found id: ""
	I0328 01:04:54.151428 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.151437 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:54.151443 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:54.151494 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:54.197997 1131323 cri.go:89] found id: ""
	I0328 01:04:54.198036 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.198048 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:54.198058 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:54.198135 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:54.234016 1131323 cri.go:89] found id: ""
	I0328 01:04:54.234049 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.234058 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:54.234068 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:54.234081 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:54.286118 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:54.286161 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:54.300489 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:54.300541 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:54.376949 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:54.376972 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:54.376988 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:54.463857 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:54.463901 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:52.041517 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:54.042088 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:52.875923 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:55.371823 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:53.912140 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:55.912329 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:57.026395 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:57.041270 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:57.041358 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:57.082380 1131323 cri.go:89] found id: ""
	I0328 01:04:57.082416 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.082428 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:57.082436 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:57.082503 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:57.121835 1131323 cri.go:89] found id: ""
	I0328 01:04:57.121870 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.121885 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:57.121894 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:57.121969 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:57.163688 1131323 cri.go:89] found id: ""
	I0328 01:04:57.163725 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.163737 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:57.163745 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:57.163819 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:57.212628 1131323 cri.go:89] found id: ""
	I0328 01:04:57.212666 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.212693 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:57.212703 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:57.212788 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:57.249196 1131323 cri.go:89] found id: ""
	I0328 01:04:57.249231 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.249244 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:57.249253 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:57.249318 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:57.286996 1131323 cri.go:89] found id: ""
	I0328 01:04:57.287031 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.287040 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:57.287047 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:57.287101 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:57.324523 1131323 cri.go:89] found id: ""
	I0328 01:04:57.324551 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.324560 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:57.324566 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:57.324627 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:57.363946 1131323 cri.go:89] found id: ""
	I0328 01:04:57.363984 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.363998 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:57.364012 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:57.364034 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:57.418300 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:57.418337 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:57.433214 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:57.433242 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:57.508623 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:57.508651 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:57.508665 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:57.586336 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:57.586377 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:00.129903 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:00.146829 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:00.146920 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:00.197823 1131323 cri.go:89] found id: ""
	I0328 01:05:00.197856 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.197865 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:00.197872 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:00.197930 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:00.257523 1131323 cri.go:89] found id: ""
	I0328 01:05:00.257561 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.257575 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:00.257584 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:00.257657 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:00.314511 1131323 cri.go:89] found id: ""
	I0328 01:05:00.314539 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.314549 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:00.314558 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:00.314610 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:56.042295 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:58.541684 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:00.543232 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:57.372451 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:59.372577 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:58.412203 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:00.412880 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:02.913222 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:00.351043 1131323 cri.go:89] found id: ""
	I0328 01:05:00.351076 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.351090 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:00.351098 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:00.351167 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:00.391477 1131323 cri.go:89] found id: ""
	I0328 01:05:00.391507 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.391519 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:00.391525 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:00.391595 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:00.436196 1131323 cri.go:89] found id: ""
	I0328 01:05:00.436230 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.436242 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:00.436249 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:00.436316 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:00.473389 1131323 cri.go:89] found id: ""
	I0328 01:05:00.473428 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.473441 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:00.473450 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:00.473523 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:00.508829 1131323 cri.go:89] found id: ""
	I0328 01:05:00.508866 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.508879 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:00.508901 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:00.508931 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:00.553709 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:00.553741 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:00.612679 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:00.612732 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:00.630908 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:00.630948 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:00.706984 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:00.707016 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:00.707034 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:03.287887 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:03.304679 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:03.304779 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:03.343579 1131323 cri.go:89] found id: ""
	I0328 01:05:03.343608 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.343618 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:03.343625 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:03.343677 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:03.387158 1131323 cri.go:89] found id: ""
	I0328 01:05:03.387192 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.387206 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:03.387224 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:03.387308 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:03.426622 1131323 cri.go:89] found id: ""
	I0328 01:05:03.426653 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.426663 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:03.426670 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:03.426724 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:03.468743 1131323 cri.go:89] found id: ""
	I0328 01:05:03.468780 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.468793 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:03.468801 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:03.468870 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:03.508518 1131323 cri.go:89] found id: ""
	I0328 01:05:03.508554 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.508566 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:03.508575 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:03.508653 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:03.548295 1131323 cri.go:89] found id: ""
	I0328 01:05:03.548331 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.548343 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:03.548353 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:03.548444 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:03.591561 1131323 cri.go:89] found id: ""
	I0328 01:05:03.591594 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.591607 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:03.591615 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:03.591670 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:03.635055 1131323 cri.go:89] found id: ""
	I0328 01:05:03.635086 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.635097 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:03.635109 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:03.635127 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:03.715639 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:03.715683 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:03.755888 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:03.755931 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:03.810128 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:03.810170 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:03.825197 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:03.825227 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:03.908589 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
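The repeated "describe nodes" failures above all trace back to the same root cause: nothing is answering on localhost:8443, so every kubectl call made with the node's kubeconfig is refused. The two checks minikube keeps retrying can be reproduced by hand on the guest; this is only a minimal sketch using the commands and paths that already appear in the log above, not new tooling:

    # does CRI-O know about any kube-apiserver container at all?
    sudo crictl ps -a --name=kube-apiserver

    # ask the bundled kubectl to describe the nodes with the on-host kubeconfig;
    # while the apiserver is down this fails with the same "connection ... refused"
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig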
	I0328 01:05:03.043330 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:05.541544 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:01.372692 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:03.373747 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:05.871945 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:05.413583 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:07.912379 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:06.409060 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:06.424034 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:06.424119 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:06.461827 1131323 cri.go:89] found id: ""
	I0328 01:05:06.461888 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.461902 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:06.461911 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:06.461985 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:06.505006 1131323 cri.go:89] found id: ""
	I0328 01:05:06.505061 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.505078 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:06.505085 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:06.505145 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:06.542000 1131323 cri.go:89] found id: ""
	I0328 01:05:06.542033 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.542044 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:06.542052 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:06.542121 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:06.583725 1131323 cri.go:89] found id: ""
	I0328 01:05:06.583778 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.583800 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:06.583810 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:06.583887 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:06.620457 1131323 cri.go:89] found id: ""
	I0328 01:05:06.620501 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.620516 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:06.620524 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:06.620595 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:06.664380 1131323 cri.go:89] found id: ""
	I0328 01:05:06.664412 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.664425 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:06.664432 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:06.664502 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:06.701799 1131323 cri.go:89] found id: ""
	I0328 01:05:06.701850 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.701862 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:06.701870 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:06.701935 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:06.739899 1131323 cri.go:89] found id: ""
	I0328 01:05:06.739936 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.739948 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:06.739958 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:06.739973 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:06.814373 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:06.814404 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:06.814421 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:06.894331 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:06.894371 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:06.952912 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:06.952979 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:07.011851 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:07.011900 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:09.528068 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:09.545082 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:09.545167 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:09.586944 1131323 cri.go:89] found id: ""
	I0328 01:05:09.586983 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.586996 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:09.587004 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:09.587077 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:09.624153 1131323 cri.go:89] found id: ""
	I0328 01:05:09.624184 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.624192 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:09.624198 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:09.624256 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:09.661125 1131323 cri.go:89] found id: ""
	I0328 01:05:09.661160 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.661172 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:09.661182 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:09.661256 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:09.699865 1131323 cri.go:89] found id: ""
	I0328 01:05:09.699903 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.699916 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:09.699925 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:09.699992 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:09.737925 1131323 cri.go:89] found id: ""
	I0328 01:05:09.737958 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.737967 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:09.737973 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:09.738029 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:09.776906 1131323 cri.go:89] found id: ""
	I0328 01:05:09.776941 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.776950 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:09.776957 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:09.777021 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:09.815767 1131323 cri.go:89] found id: ""
	I0328 01:05:09.815794 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.815803 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:09.815809 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:09.815876 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:09.855880 1131323 cri.go:89] found id: ""
	I0328 01:05:09.855915 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.855928 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:09.855941 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:09.855958 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:09.918339 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:09.918376 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:09.932775 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:09.932810 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:10.011566 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:10.011594 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:10.011610 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:10.096057 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:10.096100 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:08.041230 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:10.041991 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:07.873367 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:10.372311 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:09.913349 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:12.412259 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:12.641999 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:12.655761 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:12.655843 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:12.697335 1131323 cri.go:89] found id: ""
	I0328 01:05:12.697369 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.697381 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:12.697390 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:12.697453 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:12.736482 1131323 cri.go:89] found id: ""
	I0328 01:05:12.736520 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.736534 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:12.736544 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:12.736617 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:12.771992 1131323 cri.go:89] found id: ""
	I0328 01:05:12.772034 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.772046 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:12.772055 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:12.772125 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:12.810738 1131323 cri.go:89] found id: ""
	I0328 01:05:12.810770 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.810779 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:12.810786 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:12.810837 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:12.848172 1131323 cri.go:89] found id: ""
	I0328 01:05:12.848209 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.848222 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:12.848230 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:12.848310 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:12.884660 1131323 cri.go:89] found id: ""
	I0328 01:05:12.884698 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.884710 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:12.884719 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:12.884794 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:12.926180 1131323 cri.go:89] found id: ""
	I0328 01:05:12.926209 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.926218 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:12.926244 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:12.926303 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:12.966938 1131323 cri.go:89] found id: ""
	I0328 01:05:12.966969 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.966983 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:12.966996 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:12.967014 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:13.018501 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:13.018541 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:13.033140 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:13.033175 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:13.108806 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:13.108832 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:13.108858 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:13.189198 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:13.189241 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
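The "container status" gather is a shell fallback chain: it resolves crictl with `which` and only drops back to `docker ps -a` if the crictl invocation fails. Spelled out step by step (a restatement of the one-liner in the log, not a new command):

    # resolve crictl to a full path if possible, otherwise fall back to the bare name
    CRICTL="$(which crictl || echo crictl)"
    # list all containers via crictl; if that fails for any reason, try docker instead
    sudo "$CRICTL" ps -a || sudo docker ps -a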
	I0328 01:05:12.541088 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:15.041830 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:12.372413 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:14.372804 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:14.414059 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:16.912343 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:15.737415 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:15.752534 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:15.752614 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:15.789941 1131323 cri.go:89] found id: ""
	I0328 01:05:15.789974 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.789986 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:15.789994 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:15.790107 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:15.827688 1131323 cri.go:89] found id: ""
	I0328 01:05:15.827731 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.827745 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:15.827766 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:15.827831 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:15.867005 1131323 cri.go:89] found id: ""
	I0328 01:05:15.867041 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.867054 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:15.867064 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:15.867149 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:15.909924 1131323 cri.go:89] found id: ""
	I0328 01:05:15.910035 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.910055 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:15.910066 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:15.910139 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:15.950571 1131323 cri.go:89] found id: ""
	I0328 01:05:15.950606 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.950619 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:15.950632 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:15.950707 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:15.992557 1131323 cri.go:89] found id: ""
	I0328 01:05:15.992594 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.992605 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:15.992615 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:15.992687 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:16.032417 1131323 cri.go:89] found id: ""
	I0328 01:05:16.032458 1131323 logs.go:276] 0 containers: []
	W0328 01:05:16.032473 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:16.032482 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:16.032559 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:16.071399 1131323 cri.go:89] found id: ""
	I0328 01:05:16.071433 1131323 logs.go:276] 0 containers: []
	W0328 01:05:16.071445 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:16.071459 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:16.071481 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:16.147078 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:16.147113 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:16.147131 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:16.223828 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:16.223870 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:16.269377 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:16.269409 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:16.318545 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:16.318584 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:18.836044 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:18.851138 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:18.851231 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:18.887223 1131323 cri.go:89] found id: ""
	I0328 01:05:18.887260 1131323 logs.go:276] 0 containers: []
	W0328 01:05:18.887273 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:18.887283 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:18.887354 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:18.928652 1131323 cri.go:89] found id: ""
	I0328 01:05:18.928682 1131323 logs.go:276] 0 containers: []
	W0328 01:05:18.928692 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:18.928698 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:18.928756 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:18.968519 1131323 cri.go:89] found id: ""
	I0328 01:05:18.968555 1131323 logs.go:276] 0 containers: []
	W0328 01:05:18.968567 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:18.968575 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:18.968646 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:19.010939 1131323 cri.go:89] found id: ""
	I0328 01:05:19.010977 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.010991 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:19.010999 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:19.011070 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:19.048723 1131323 cri.go:89] found id: ""
	I0328 01:05:19.048748 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.048758 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:19.048769 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:19.048820 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:19.091761 1131323 cri.go:89] found id: ""
	I0328 01:05:19.091794 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.091803 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:19.091810 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:19.091863 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:19.134017 1131323 cri.go:89] found id: ""
	I0328 01:05:19.134049 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.134059 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:19.134065 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:19.134119 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:19.176070 1131323 cri.go:89] found id: ""
	I0328 01:05:19.176106 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.176118 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:19.176131 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:19.176155 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:19.261546 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:19.261584 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:19.261605 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:19.340271 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:19.340314 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:19.383625 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:19.383676 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:19.441635 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:19.441679 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:17.541876 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:20.040841 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:16.872723 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:19.372916 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:19.414384 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:21.912881 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:21.958362 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:21.974427 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:21.974528 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:22.013099 1131323 cri.go:89] found id: ""
	I0328 01:05:22.013139 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.013152 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:22.013160 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:22.013229 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:22.055558 1131323 cri.go:89] found id: ""
	I0328 01:05:22.055594 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.055604 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:22.055611 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:22.055668 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:22.106836 1131323 cri.go:89] found id: ""
	I0328 01:05:22.106870 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.106879 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:22.106886 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:22.106961 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:22.145135 1131323 cri.go:89] found id: ""
	I0328 01:05:22.145175 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.145189 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:22.145197 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:22.145266 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:22.183879 1131323 cri.go:89] found id: ""
	I0328 01:05:22.183909 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.183919 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:22.183926 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:22.183981 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:22.223087 1131323 cri.go:89] found id: ""
	I0328 01:05:22.223115 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.223124 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:22.223131 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:22.223209 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:22.263232 1131323 cri.go:89] found id: ""
	I0328 01:05:22.263262 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.263272 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:22.263279 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:22.263331 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:22.302919 1131323 cri.go:89] found id: ""
	I0328 01:05:22.302954 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.302967 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:22.302980 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:22.302998 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:22.358550 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:22.358596 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:22.374688 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:22.374722 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:22.453584 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:22.453609 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:22.453624 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:22.540983 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:22.541048 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:25.091773 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:25.107412 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:25.107484 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:25.143917 1131323 cri.go:89] found id: ""
	I0328 01:05:25.143944 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.143953 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:25.143960 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:25.144013 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:25.183615 1131323 cri.go:89] found id: ""
	I0328 01:05:25.183650 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.183659 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:25.183666 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:25.183729 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:25.221125 1131323 cri.go:89] found id: ""
	I0328 01:05:25.221158 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.221167 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:25.221174 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:25.221229 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:25.262023 1131323 cri.go:89] found id: ""
	I0328 01:05:25.262056 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.262065 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:25.262072 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:25.262134 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:25.297919 1131323 cri.go:89] found id: ""
	I0328 01:05:25.297948 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.297957 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:25.297964 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:25.298035 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:22.041977 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:24.542416 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:21.872312 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:23.872885 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:23.914459 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:25.916730 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:25.336582 1131323 cri.go:89] found id: ""
	I0328 01:05:25.336610 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.336620 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:25.336627 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:25.336690 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:25.375554 1131323 cri.go:89] found id: ""
	I0328 01:05:25.375589 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.375600 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:25.375609 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:25.375683 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:25.415941 1131323 cri.go:89] found id: ""
	I0328 01:05:25.415973 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.415984 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:25.415996 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:25.416013 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:25.430168 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:25.430196 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:25.507782 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:25.507805 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:25.507862 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:25.588780 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:25.588824 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:25.634958 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:25.634997 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:28.190651 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:28.205714 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:28.205794 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:28.242015 1131323 cri.go:89] found id: ""
	I0328 01:05:28.242056 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.242067 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:28.242077 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:28.242169 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:28.289132 1131323 cri.go:89] found id: ""
	I0328 01:05:28.289169 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.289182 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:28.289189 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:28.289256 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:28.327001 1131323 cri.go:89] found id: ""
	I0328 01:05:28.327031 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.327040 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:28.327052 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:28.327105 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:28.365474 1131323 cri.go:89] found id: ""
	I0328 01:05:28.365507 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.365516 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:28.365523 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:28.365587 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:28.405494 1131323 cri.go:89] found id: ""
	I0328 01:05:28.405553 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.405567 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:28.405576 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:28.405652 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:28.448464 1131323 cri.go:89] found id: ""
	I0328 01:05:28.448502 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.448512 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:28.448521 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:28.448594 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:28.488143 1131323 cri.go:89] found id: ""
	I0328 01:05:28.488172 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.488182 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:28.488189 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:28.488258 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:28.545977 1131323 cri.go:89] found id: ""
	I0328 01:05:28.546012 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.546024 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:28.546036 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:28.546050 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:28.629955 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:28.630001 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:28.670504 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:28.670536 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:28.722021 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:28.722069 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:28.737274 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:28.737310 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:28.824025 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
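Interleaved with the log-gathering loop, the pod_ready lines from the other test processes are simply polling the Ready condition of their metrics-server pods, which stays "False" throughout this window. An equivalent manual probe (pod name copied from the log; the jsonpath expression is standard kubectl usage rather than part of the original output) would be:

    kubectl --namespace kube-system get pod metrics-server-57f55c9bc5-w4ww4 \
      --output jsonpath='{.status.conditions[?(@.type=="Ready")].status}'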
	I0328 01:05:27.041205 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:29.041342 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:26.372037 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:28.373545 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:30.872569 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:28.414921 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:30.912980 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:31.324497 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:31.339715 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:31.339811 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:31.379017 1131323 cri.go:89] found id: ""
	I0328 01:05:31.379050 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.379062 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:31.379072 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:31.379138 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:31.420024 1131323 cri.go:89] found id: ""
	I0328 01:05:31.420055 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.420065 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:31.420071 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:31.420136 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:31.458732 1131323 cri.go:89] found id: ""
	I0328 01:05:31.458764 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.458773 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:31.458779 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:31.458835 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:31.504249 1131323 cri.go:89] found id: ""
	I0328 01:05:31.504280 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.504292 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:31.504300 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:31.504366 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:31.545284 1131323 cri.go:89] found id: ""
	I0328 01:05:31.545316 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.545324 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:31.545331 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:31.545385 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:31.583402 1131323 cri.go:89] found id: ""
	I0328 01:05:31.583434 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.583444 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:31.583453 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:31.583587 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:31.624411 1131323 cri.go:89] found id: ""
	I0328 01:05:31.624449 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.624462 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:31.624471 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:31.624528 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:31.666103 1131323 cri.go:89] found id: ""
	I0328 01:05:31.666144 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.666158 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:31.666173 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:31.666192 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:31.717595 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:31.717636 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:31.731606 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:31.731637 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:31.803302 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:31.803325 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:31.803339 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:31.885552 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:31.885590 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:34.432446 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:34.448002 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:34.448085 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:34.493207 1131323 cri.go:89] found id: ""
	I0328 01:05:34.493246 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.493259 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:34.493268 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:34.493337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:34.541838 1131323 cri.go:89] found id: ""
	I0328 01:05:34.541871 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.541883 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:34.541891 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:34.541956 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:34.582319 1131323 cri.go:89] found id: ""
	I0328 01:05:34.582357 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.582371 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:34.582380 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:34.582458 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:34.618753 1131323 cri.go:89] found id: ""
	I0328 01:05:34.618788 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.618801 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:34.618810 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:34.618882 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:34.656994 1131323 cri.go:89] found id: ""
	I0328 01:05:34.657027 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.657037 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:34.657043 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:34.657114 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:34.695214 1131323 cri.go:89] found id: ""
	I0328 01:05:34.695252 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.695264 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:34.695271 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:34.695337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:34.733688 1131323 cri.go:89] found id: ""
	I0328 01:05:34.733718 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.733731 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:34.733739 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:34.733808 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:34.771697 1131323 cri.go:89] found id: ""
	I0328 01:05:34.771729 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.771744 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:34.771758 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:34.771776 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:34.828190 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:34.828236 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:34.842741 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:34.842776 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:34.918494 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:34.918525 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:34.918541 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:35.012689 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:35.012747 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:31.042633 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:33.541295 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:35.541588 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:33.371991 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:35.872753 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:33.412886 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:35.914065 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
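The cycle above repeats for the rest of this start attempt: minikube polls the node over SSH, lists CRI containers for each control-plane component with crictl, finds none, and every kubectl call against localhost:8443 is therefore refused. As a sketch only (assuming crictl is on the node's PATH, exactly as the logged commands themselves assume), the same checks can be rerun by hand from a minikube ssh session:

	sudo crictl ps -a --name=kube-apiserver    # the listing above that keeps returning no IDs
	sudo ss -ltn | grep 8443                   # nothing listening here means the apiserver never came up
	curl -k https://localhost:8443/healthz     # reports "ok" once the apiserver is actually serving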
	I0328 01:05:37.574759 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:37.590014 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:37.590128 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:37.626883 1131323 cri.go:89] found id: ""
	I0328 01:05:37.626914 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.626926 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:37.626935 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:37.627005 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:37.665171 1131323 cri.go:89] found id: ""
	I0328 01:05:37.665202 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.665215 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:37.665225 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:37.665294 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:37.702923 1131323 cri.go:89] found id: ""
	I0328 01:05:37.702963 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.702976 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:37.702984 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:37.703064 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:37.741148 1131323 cri.go:89] found id: ""
	I0328 01:05:37.741182 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.741191 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:37.741199 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:37.741269 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:37.782298 1131323 cri.go:89] found id: ""
	I0328 01:05:37.782331 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.782341 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:37.782348 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:37.782407 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:37.819056 1131323 cri.go:89] found id: ""
	I0328 01:05:37.819110 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.819124 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:37.819134 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:37.819215 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:37.862372 1131323 cri.go:89] found id: ""
	I0328 01:05:37.862414 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.862427 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:37.862436 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:37.862507 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:37.899639 1131323 cri.go:89] found id: ""
	I0328 01:05:37.899675 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.899689 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:37.899703 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:37.899721 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:37.978962 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:37.978990 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:37.979007 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:38.058972 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:38.059015 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:38.102975 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:38.103016 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:38.157994 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:38.158035 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:38.041091 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.041892 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:38.371787 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.373131 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:38.412214 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.415412 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:42.912341 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.673425 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:40.690969 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:40.691041 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:40.735552 1131323 cri.go:89] found id: ""
	I0328 01:05:40.735585 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.735594 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:40.735602 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:40.735669 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:40.816611 1131323 cri.go:89] found id: ""
	I0328 01:05:40.816648 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.816661 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:40.816669 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:40.816725 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:40.864093 1131323 cri.go:89] found id: ""
	I0328 01:05:40.864125 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.864138 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:40.864147 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:40.864218 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:40.908781 1131323 cri.go:89] found id: ""
	I0328 01:05:40.908817 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.908829 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:40.908846 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:40.908914 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:40.950330 1131323 cri.go:89] found id: ""
	I0328 01:05:40.950369 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.950382 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:40.950390 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:40.950481 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:40.989983 1131323 cri.go:89] found id: ""
	I0328 01:05:40.990041 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.990054 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:40.990063 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:40.990136 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:41.042428 1131323 cri.go:89] found id: ""
	I0328 01:05:41.042470 1131323 logs.go:276] 0 containers: []
	W0328 01:05:41.042481 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:41.042489 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:41.042560 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:41.089309 1131323 cri.go:89] found id: ""
	I0328 01:05:41.089342 1131323 logs.go:276] 0 containers: []
	W0328 01:05:41.089353 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:41.089363 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:41.089377 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:41.148502 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:41.148547 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:41.163889 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:41.163918 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:41.242825 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:41.242848 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:41.242861 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:41.322658 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:41.322702 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:43.865117 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:43.880642 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:43.880729 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:43.919519 1131323 cri.go:89] found id: ""
	I0328 01:05:43.919550 1131323 logs.go:276] 0 containers: []
	W0328 01:05:43.919559 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:43.919565 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:43.919622 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:43.957906 1131323 cri.go:89] found id: ""
	I0328 01:05:43.957936 1131323 logs.go:276] 0 containers: []
	W0328 01:05:43.957945 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:43.957951 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:43.958008 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:44.001448 1131323 cri.go:89] found id: ""
	I0328 01:05:44.001486 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.001497 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:44.001505 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:44.001573 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:44.039767 1131323 cri.go:89] found id: ""
	I0328 01:05:44.039801 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.039812 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:44.039818 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:44.039871 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:44.079441 1131323 cri.go:89] found id: ""
	I0328 01:05:44.079470 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.079480 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:44.079486 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:44.079541 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:44.116534 1131323 cri.go:89] found id: ""
	I0328 01:05:44.116584 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.116596 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:44.116604 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:44.116670 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:44.163335 1131323 cri.go:89] found id: ""
	I0328 01:05:44.163369 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.163381 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:44.163389 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:44.163457 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:44.201367 1131323 cri.go:89] found id: ""
	I0328 01:05:44.201403 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.201413 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:44.201424 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:44.201442 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:44.257485 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:44.257529 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:44.272489 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:44.272534 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:44.354442 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:44.354477 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:44.354498 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:44.436219 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:44.436262 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:42.044443 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:44.541648 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:42.872072 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:44.873552 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:44.913292 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:47.412489 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
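In parallel, the other profiles in this run are stuck waiting on metrics-server pods that never report Ready. A minimal way to inspect that condition directly, shown as a sketch with the pod name copied from the log lines above (on another cluster the generated suffix would differ):

	kubectl -n kube-system get pod metrics-server-57f55c9bc5-swsxp \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	kubectl -n kube-system describe pod metrics-server-57f55c9bc5-swsxp | tail -n 20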
	I0328 01:05:46.982131 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:46.998022 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:46.998100 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:47.037167 1131323 cri.go:89] found id: ""
	I0328 01:05:47.037205 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.037217 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:47.037226 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:47.037295 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:47.076175 1131323 cri.go:89] found id: ""
	I0328 01:05:47.076213 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.076226 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:47.076235 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:47.076306 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:47.115193 1131323 cri.go:89] found id: ""
	I0328 01:05:47.115227 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.115237 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:47.115244 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:47.115297 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:47.154942 1131323 cri.go:89] found id: ""
	I0328 01:05:47.154976 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.154989 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:47.154998 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:47.155069 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:47.196571 1131323 cri.go:89] found id: ""
	I0328 01:05:47.196609 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.196622 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:47.196631 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:47.196707 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:47.237572 1131323 cri.go:89] found id: ""
	I0328 01:05:47.237616 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.237625 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:47.237633 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:47.237691 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:47.275208 1131323 cri.go:89] found id: ""
	I0328 01:05:47.275254 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.275265 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:47.275272 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:47.275329 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:47.313515 1131323 cri.go:89] found id: ""
	I0328 01:05:47.313555 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.313568 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:47.313582 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:47.313598 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:47.368993 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:47.369033 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:47.383063 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:47.383097 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:47.460239 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:47.460278 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:47.460298 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:47.538552 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:47.538594 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:50.084960 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:50.101764 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:50.101859 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:50.141457 1131323 cri.go:89] found id: ""
	I0328 01:05:50.141488 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.141497 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:50.141504 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:50.141557 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:50.178184 1131323 cri.go:89] found id: ""
	I0328 01:05:50.178220 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.178254 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:50.178263 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:50.178358 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:50.217908 1131323 cri.go:89] found id: ""
	I0328 01:05:50.217946 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.217959 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:50.217966 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:50.218027 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:50.256029 1131323 cri.go:89] found id: ""
	I0328 01:05:50.256058 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.256067 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:50.256074 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:50.256130 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:50.295054 1131323 cri.go:89] found id: ""
	I0328 01:05:50.295087 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.295100 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:50.295106 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:50.295165 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:47.042338 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:49.542501 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:47.372867 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:49.872948 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:49.913873 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:52.412600 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:50.334695 1131323 cri.go:89] found id: ""
	I0328 01:05:50.336588 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.336605 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:50.336614 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:50.336697 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:50.375968 1131323 cri.go:89] found id: ""
	I0328 01:05:50.376003 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.376013 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:50.376021 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:50.376091 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:50.417146 1131323 cri.go:89] found id: ""
	I0328 01:05:50.417175 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.417184 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:50.417194 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:50.417207 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:50.474090 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:50.474131 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:50.489006 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:50.489040 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:50.566220 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:50.566268 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:50.566286 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:50.645593 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:50.645653 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:53.190872 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:53.205223 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:53.205320 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:53.242396 1131323 cri.go:89] found id: ""
	I0328 01:05:53.242433 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.242445 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:53.242455 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:53.242524 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:53.281237 1131323 cri.go:89] found id: ""
	I0328 01:05:53.281275 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.281288 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:53.281297 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:53.281357 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:53.321239 1131323 cri.go:89] found id: ""
	I0328 01:05:53.321268 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.321287 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:53.321296 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:53.321358 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:53.359240 1131323 cri.go:89] found id: ""
	I0328 01:05:53.359269 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.359278 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:53.359284 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:53.359337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:53.396973 1131323 cri.go:89] found id: ""
	I0328 01:05:53.397008 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.397021 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:53.397030 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:53.397100 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:53.438368 1131323 cri.go:89] found id: ""
	I0328 01:05:53.438400 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.438408 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:53.438415 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:53.438477 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:53.474679 1131323 cri.go:89] found id: ""
	I0328 01:05:53.474708 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.474732 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:53.474742 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:53.474799 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:53.512509 1131323 cri.go:89] found id: ""
	I0328 01:05:53.512547 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.512560 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:53.512579 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:53.512599 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:53.569536 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:53.569580 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:53.584977 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:53.585016 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:53.657865 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:53.657895 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:53.657908 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:53.733158 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:53.733203 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:52.041508 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:54.541663 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:52.373317 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:54.872090 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:54.913464 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:57.413256 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
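The "describe nodes" step that fails in every cycle is an ordinary kubectl call using the kubeconfig minikube writes on the node, which targets localhost:8443; as long as no apiserver container exists it can only return "connection refused". Rerunning it by hand uses the command exactly as it appears in the log:

	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig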
	I0328 01:05:56.278693 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:56.291870 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:56.291949 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:56.332909 1131323 cri.go:89] found id: ""
	I0328 01:05:56.332943 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.332957 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:56.332965 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:56.333038 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:56.370608 1131323 cri.go:89] found id: ""
	I0328 01:05:56.370638 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.370649 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:56.370657 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:56.370721 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:56.408031 1131323 cri.go:89] found id: ""
	I0328 01:05:56.408068 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.408081 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:56.408100 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:56.408170 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:56.445057 1131323 cri.go:89] found id: ""
	I0328 01:05:56.445092 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.445105 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:56.445113 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:56.445177 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:56.486868 1131323 cri.go:89] found id: ""
	I0328 01:05:56.486898 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.486908 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:56.486914 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:56.486969 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:56.533594 1131323 cri.go:89] found id: ""
	I0328 01:05:56.533622 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.533632 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:56.533638 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:56.533702 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:56.569200 1131323 cri.go:89] found id: ""
	I0328 01:05:56.569237 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.569250 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:56.569258 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:56.569335 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:56.604919 1131323 cri.go:89] found id: ""
	I0328 01:05:56.604955 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.604968 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:56.604982 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:56.605011 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:56.654473 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:56.654513 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:56.671309 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:56.671339 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:56.739516 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:56.739543 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:56.739559 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:56.817445 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:56.817495 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:59.361711 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:59.375672 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:59.375750 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:59.414329 1131323 cri.go:89] found id: ""
	I0328 01:05:59.414360 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.414371 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:59.414379 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:59.414443 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:59.454813 1131323 cri.go:89] found id: ""
	I0328 01:05:59.454846 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.454855 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:59.454862 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:59.454917 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:59.492890 1131323 cri.go:89] found id: ""
	I0328 01:05:59.492924 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.492936 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:59.492946 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:59.493043 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:59.529412 1131323 cri.go:89] found id: ""
	I0328 01:05:59.529443 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.529454 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:59.529464 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:59.529521 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:59.568620 1131323 cri.go:89] found id: ""
	I0328 01:05:59.568655 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.568664 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:59.568671 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:59.568731 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:59.605826 1131323 cri.go:89] found id: ""
	I0328 01:05:59.605861 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.605874 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:59.605883 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:59.605955 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:59.645799 1131323 cri.go:89] found id: ""
	I0328 01:05:59.645833 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.645847 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:59.645856 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:59.645931 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:59.683866 1131323 cri.go:89] found id: ""
	I0328 01:05:59.683903 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.683916 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:59.683929 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:59.683953 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:59.726678 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:59.726711 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:59.779910 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:59.779954 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:59.795743 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:59.795774 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:59.875137 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:59.875162 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:59.875174 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:56.542345 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:58.542599 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:00.543094 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:57.372258 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:59.872483 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:59.912150 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:01.913694 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:02.455212 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:02.468850 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:02.468945 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:02.506347 1131323 cri.go:89] found id: ""
	I0328 01:06:02.506385 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.506397 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:02.506406 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:02.506484 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:02.546056 1131323 cri.go:89] found id: ""
	I0328 01:06:02.546085 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.546096 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:02.546103 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:02.546173 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:02.585343 1131323 cri.go:89] found id: ""
	I0328 01:06:02.585385 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.585398 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:02.585407 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:02.585563 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:02.625380 1131323 cri.go:89] found id: ""
	I0328 01:06:02.625414 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.625423 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:02.625429 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:02.625486 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:02.664653 1131323 cri.go:89] found id: ""
	I0328 01:06:02.664687 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.664701 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:02.664708 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:02.664764 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:02.704468 1131323 cri.go:89] found id: ""
	I0328 01:06:02.704498 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.704511 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:02.704519 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:02.704595 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:02.740969 1131323 cri.go:89] found id: ""
	I0328 01:06:02.740997 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.741007 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:02.741014 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:02.741102 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:02.782113 1131323 cri.go:89] found id: ""
	I0328 01:06:02.782150 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.782163 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:02.782185 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:02.782203 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:02.836804 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:02.836848 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:02.852266 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:02.852299 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:02.929441 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:02.929467 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:02.929484 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:03.008114 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:03.008156 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:03.041919 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:05.542209 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:02.372332 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:04.871689 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:04.413251 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:06.912348 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:05.554291 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:05.570208 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:05.570304 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:05.610887 1131323 cri.go:89] found id: ""
	I0328 01:06:05.610916 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.610926 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:05.610932 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:05.610991 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:05.651561 1131323 cri.go:89] found id: ""
	I0328 01:06:05.651600 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.651610 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:05.651616 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:05.651681 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:05.690801 1131323 cri.go:89] found id: ""
	I0328 01:06:05.690830 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.690843 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:05.690851 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:05.690920 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:05.729098 1131323 cri.go:89] found id: ""
	I0328 01:06:05.729136 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.729146 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:05.729153 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:05.729225 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:05.774461 1131323 cri.go:89] found id: ""
	I0328 01:06:05.774499 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.774520 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:05.774530 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:05.774602 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:05.812135 1131323 cri.go:89] found id: ""
	I0328 01:06:05.812166 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.812180 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:05.812188 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:05.812255 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:05.847744 1131323 cri.go:89] found id: ""
	I0328 01:06:05.847775 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.847786 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:05.847796 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:05.847863 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:05.885600 1131323 cri.go:89] found id: ""
	I0328 01:06:05.885641 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.885656 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:05.885669 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:05.885684 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:05.963837 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:05.963879 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:06.007342 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:06.007381 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:06.062798 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:06.062843 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:06.077547 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:06.077599 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:06.148373 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:08.648791 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:08.664082 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:08.664154 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:08.701746 1131323 cri.go:89] found id: ""
	I0328 01:06:08.701776 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.701789 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:08.701797 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:08.701855 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:08.739035 1131323 cri.go:89] found id: ""
	I0328 01:06:08.739066 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.739076 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:08.739083 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:08.739136 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:08.776128 1131323 cri.go:89] found id: ""
	I0328 01:06:08.776166 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.776180 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:08.776189 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:08.776255 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:08.816136 1131323 cri.go:89] found id: ""
	I0328 01:06:08.816172 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.816187 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:08.816196 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:08.816271 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:08.855675 1131323 cri.go:89] found id: ""
	I0328 01:06:08.855709 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.855722 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:08.855730 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:08.855802 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:08.893161 1131323 cri.go:89] found id: ""
	I0328 01:06:08.893198 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.893212 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:08.893221 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:08.893297 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:08.935498 1131323 cri.go:89] found id: ""
	I0328 01:06:08.935527 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.935540 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:08.935548 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:08.935622 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:08.971622 1131323 cri.go:89] found id: ""
	I0328 01:06:08.971657 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.971668 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:08.971679 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:08.971696 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:09.039975 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:09.040036 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:09.057877 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:09.057920 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:09.130093 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:09.130119 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:09.130135 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:09.217177 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:09.217228 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:08.040921 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:10.042895 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:06.872367 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:08.873187 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:08.914313 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:11.412330 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:11.762393 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:11.776356 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:11.776424 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:11.811982 1131323 cri.go:89] found id: ""
	I0328 01:06:11.812017 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.812030 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:11.812038 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:11.812103 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:11.849789 1131323 cri.go:89] found id: ""
	I0328 01:06:11.849817 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.849826 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:11.849833 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:11.849884 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:11.890455 1131323 cri.go:89] found id: ""
	I0328 01:06:11.890488 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.890497 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:11.890503 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:11.890559 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:11.929047 1131323 cri.go:89] found id: ""
	I0328 01:06:11.929093 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.929102 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:11.929108 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:11.929164 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:11.969536 1131323 cri.go:89] found id: ""
	I0328 01:06:11.969566 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.969576 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:11.969583 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:11.969641 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:12.008779 1131323 cri.go:89] found id: ""
	I0328 01:06:12.008811 1131323 logs.go:276] 0 containers: []
	W0328 01:06:12.008821 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:12.008828 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:12.008890 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:12.044061 1131323 cri.go:89] found id: ""
	I0328 01:06:12.044091 1131323 logs.go:276] 0 containers: []
	W0328 01:06:12.044104 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:12.044112 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:12.044176 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:12.082307 1131323 cri.go:89] found id: ""
	I0328 01:06:12.082336 1131323 logs.go:276] 0 containers: []
	W0328 01:06:12.082346 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:12.082357 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:12.082369 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:12.133044 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:12.133091 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:12.148584 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:12.148624 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:12.218799 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:12.218834 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:12.218852 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:12.295580 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:12.295623 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:14.842815 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:14.856385 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:14.856456 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:14.895351 1131323 cri.go:89] found id: ""
	I0328 01:06:14.895409 1131323 logs.go:276] 0 containers: []
	W0328 01:06:14.895418 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:14.895424 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:14.895476 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:14.930333 1131323 cri.go:89] found id: ""
	I0328 01:06:14.930366 1131323 logs.go:276] 0 containers: []
	W0328 01:06:14.930380 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:14.930389 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:14.930461 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:14.968701 1131323 cri.go:89] found id: ""
	I0328 01:06:14.968742 1131323 logs.go:276] 0 containers: []
	W0328 01:06:14.968754 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:14.968767 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:14.968867 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:15.004580 1131323 cri.go:89] found id: ""
	I0328 01:06:15.004613 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.004626 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:15.004634 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:15.004700 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:15.046702 1131323 cri.go:89] found id: ""
	I0328 01:06:15.046726 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.046736 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:15.046742 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:15.046795 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:15.088693 1131323 cri.go:89] found id: ""
	I0328 01:06:15.088725 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.088734 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:15.088741 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:15.088797 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:15.130293 1131323 cri.go:89] found id: ""
	I0328 01:06:15.130324 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.130333 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:15.130339 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:15.130394 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:15.172381 1131323 cri.go:89] found id: ""
	I0328 01:06:15.172408 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.172417 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:15.172427 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:15.172440 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:15.225631 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:15.225674 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:15.241251 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:15.241294 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:15.319701 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:15.319731 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:15.319747 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:12.540755 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:14.541618 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:11.371580 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:13.371640 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:15.373147 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:13.911792 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:15.912479 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:17.913926 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:15.406813 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:15.406853 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:17.993893 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:18.007755 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:18.007843 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:18.047750 1131323 cri.go:89] found id: ""
	I0328 01:06:18.047777 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.047786 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:18.047797 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:18.047855 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:18.088264 1131323 cri.go:89] found id: ""
	I0328 01:06:18.088291 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.088303 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:18.088311 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:18.088369 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:18.127485 1131323 cri.go:89] found id: ""
	I0328 01:06:18.127514 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.127523 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:18.127530 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:18.127581 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:18.167462 1131323 cri.go:89] found id: ""
	I0328 01:06:18.167496 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.167510 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:18.167516 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:18.167571 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:18.209536 1131323 cri.go:89] found id: ""
	I0328 01:06:18.209571 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.209583 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:18.209591 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:18.209662 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:18.247565 1131323 cri.go:89] found id: ""
	I0328 01:06:18.247601 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.247614 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:18.247623 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:18.247701 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:18.288123 1131323 cri.go:89] found id: ""
	I0328 01:06:18.288162 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.288172 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:18.288179 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:18.288242 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:18.328132 1131323 cri.go:89] found id: ""
	I0328 01:06:18.328161 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.328170 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:18.328181 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:18.328193 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:18.403245 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:18.403287 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:18.403305 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:18.483446 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:18.483500 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:18.527357 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:18.527392 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:18.588402 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:18.588463 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:16.542137 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:18.542554 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:20.546396 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:17.872147 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:20.373000 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:20.412369 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:22.412661 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:21.103566 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:21.117538 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:21.117616 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:21.174215 1131323 cri.go:89] found id: ""
	I0328 01:06:21.174270 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.174284 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:21.174293 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:21.174364 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:21.238666 1131323 cri.go:89] found id: ""
	I0328 01:06:21.238707 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.238722 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:21.238730 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:21.238803 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:21.303510 1131323 cri.go:89] found id: ""
	I0328 01:06:21.303543 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.303553 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:21.303559 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:21.303614 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:21.345823 1131323 cri.go:89] found id: ""
	I0328 01:06:21.345853 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.345862 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:21.345870 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:21.345940 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:21.386205 1131323 cri.go:89] found id: ""
	I0328 01:06:21.386248 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.386261 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:21.386269 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:21.386335 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:21.427424 1131323 cri.go:89] found id: ""
	I0328 01:06:21.427457 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.427470 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:21.427478 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:21.427546 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:21.465054 1131323 cri.go:89] found id: ""
	I0328 01:06:21.465087 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.465099 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:21.465107 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:21.465177 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:21.507197 1131323 cri.go:89] found id: ""
	I0328 01:06:21.507229 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.507238 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:21.507248 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:21.507263 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:21.586657 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:21.586709 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:21.633702 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:21.633739 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:21.688960 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:21.688999 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:21.704675 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:21.704714 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:21.781612 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:24.282521 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:24.297096 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:24.297185 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:24.338745 1131323 cri.go:89] found id: ""
	I0328 01:06:24.338780 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.338793 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:24.338802 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:24.338872 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:24.375499 1131323 cri.go:89] found id: ""
	I0328 01:06:24.375528 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.375540 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:24.375548 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:24.375616 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:24.410939 1131323 cri.go:89] found id: ""
	I0328 01:06:24.410966 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.410978 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:24.410986 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:24.411042 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:24.455316 1131323 cri.go:89] found id: ""
	I0328 01:06:24.455345 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.455354 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:24.455360 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:24.455427 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:24.493177 1131323 cri.go:89] found id: ""
	I0328 01:06:24.493206 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.493219 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:24.493228 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:24.493300 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:24.533612 1131323 cri.go:89] found id: ""
	I0328 01:06:24.533648 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.533659 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:24.533668 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:24.533743 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:24.573960 1131323 cri.go:89] found id: ""
	I0328 01:06:24.573998 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.574014 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:24.574020 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:24.574074 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:24.617282 1131323 cri.go:89] found id: ""
	I0328 01:06:24.617319 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.617333 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:24.617346 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:24.617364 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:24.691660 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:24.691688 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:24.691707 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:24.773138 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:24.773180 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:24.820408 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:24.820440 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:24.875901 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:24.875940 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:23.041030 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:25.041064 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:22.874513 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:25.378939 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:24.413732 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:26.912433 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:27.392663 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:27.407958 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:27.408046 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:27.446750 1131323 cri.go:89] found id: ""
	I0328 01:06:27.446782 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.446792 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:27.446799 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:27.446872 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:27.489199 1131323 cri.go:89] found id: ""
	I0328 01:06:27.489236 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.489249 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:27.489258 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:27.489316 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:27.525754 1131323 cri.go:89] found id: ""
	I0328 01:06:27.525787 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.525796 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:27.525803 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:27.525861 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:27.560817 1131323 cri.go:89] found id: ""
	I0328 01:06:27.560849 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.560858 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:27.560866 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:27.560930 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:27.597706 1131323 cri.go:89] found id: ""
	I0328 01:06:27.597736 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.597744 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:27.597750 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:27.597821 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:27.635170 1131323 cri.go:89] found id: ""
	I0328 01:06:27.635211 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.635223 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:27.635232 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:27.635299 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:27.672043 1131323 cri.go:89] found id: ""
	I0328 01:06:27.672079 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.672091 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:27.672099 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:27.672166 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:27.711401 1131323 cri.go:89] found id: ""
	I0328 01:06:27.711435 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.711448 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:27.711468 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:27.711488 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:27.755172 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:27.755211 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:27.807588 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:27.807632 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:27.823557 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:27.823589 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:27.905292 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:27.905316 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:27.905329 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:27.041105 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:29.041205 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:27.873797 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:30.374214 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:29.412378 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:31.413211 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:30.491565 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:30.505601 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:30.505667 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:30.541894 1131323 cri.go:89] found id: ""
	I0328 01:06:30.541929 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.541940 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:30.541949 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:30.542029 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:30.581484 1131323 cri.go:89] found id: ""
	I0328 01:06:30.581514 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.581532 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:30.581538 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:30.581613 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:30.624788 1131323 cri.go:89] found id: ""
	I0328 01:06:30.624830 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.624842 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:30.624850 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:30.624922 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:30.664373 1131323 cri.go:89] found id: ""
	I0328 01:06:30.664403 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.664413 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:30.664420 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:30.664489 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:30.702885 1131323 cri.go:89] found id: ""
	I0328 01:06:30.702917 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.702928 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:30.702934 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:30.703006 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:30.748170 1131323 cri.go:89] found id: ""
	I0328 01:06:30.748205 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.748217 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:30.748226 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:30.748316 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:30.785218 1131323 cri.go:89] found id: ""
	I0328 01:06:30.785255 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.785268 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:30.785276 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:30.785343 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:30.825529 1131323 cri.go:89] found id: ""
	I0328 01:06:30.825555 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.825565 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:30.825575 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:30.825589 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:30.881353 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:30.881391 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:30.896682 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:30.896718 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:30.973356 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:30.973386 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:30.973402 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:31.049014 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:31.049047 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:33.594365 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:33.609372 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:33.609460 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:33.648699 1131323 cri.go:89] found id: ""
	I0328 01:06:33.648728 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.648749 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:33.648757 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:33.648829 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:33.686707 1131323 cri.go:89] found id: ""
	I0328 01:06:33.686744 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.686758 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:33.686767 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:33.686832 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:33.723091 1131323 cri.go:89] found id: ""
	I0328 01:06:33.723121 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.723130 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:33.723136 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:33.723187 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:33.763439 1131323 cri.go:89] found id: ""
	I0328 01:06:33.763471 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.763481 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:33.763488 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:33.763544 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:33.812236 1131323 cri.go:89] found id: ""
	I0328 01:06:33.812271 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.812285 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:33.812294 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:33.812365 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:33.849421 1131323 cri.go:89] found id: ""
	I0328 01:06:33.849454 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.849465 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:33.849473 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:33.849528 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:33.888020 1131323 cri.go:89] found id: ""
	I0328 01:06:33.888051 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.888065 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:33.888078 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:33.888145 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:33.925952 1131323 cri.go:89] found id: ""
	I0328 01:06:33.925990 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.926003 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:33.926016 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:33.926034 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:33.976695 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:33.976734 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:33.991708 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:33.991752 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:34.068244 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:34.068276 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:34.068293 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:34.155843 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:34.155885 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:31.041375 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:33.041526 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:35.541169 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:32.872009 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:34.873043 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:33.913191 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:36.413213 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:36.697480 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:36.712322 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:36.712420 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:36.749541 1131323 cri.go:89] found id: ""
	I0328 01:06:36.749570 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.749579 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:36.749587 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:36.749655 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:36.788226 1131323 cri.go:89] found id: ""
	I0328 01:06:36.788255 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.788264 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:36.788270 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:36.788323 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:36.823824 1131323 cri.go:89] found id: ""
	I0328 01:06:36.823856 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.823866 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:36.823872 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:36.823927 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:36.869331 1131323 cri.go:89] found id: ""
	I0328 01:06:36.869362 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.869371 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:36.869378 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:36.869473 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:36.907918 1131323 cri.go:89] found id: ""
	I0328 01:06:36.907950 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.907960 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:36.907966 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:36.908028 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:36.947708 1131323 cri.go:89] found id: ""
	I0328 01:06:36.947738 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.947749 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:36.947757 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:36.947824 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:36.986200 1131323 cri.go:89] found id: ""
	I0328 01:06:36.986251 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.986266 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:36.986275 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:36.986350 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:37.026670 1131323 cri.go:89] found id: ""
	I0328 01:06:37.026698 1131323 logs.go:276] 0 containers: []
	W0328 01:06:37.026708 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:37.026718 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:37.026732 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:37.079891 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:37.079933 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:37.094347 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:37.094378 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:37.168653 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:37.168681 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:37.168695 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:37.247909 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:37.247949 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:39.791285 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:39.807921 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:39.808000 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:39.851460 1131323 cri.go:89] found id: ""
	I0328 01:06:39.851499 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.851512 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:39.851520 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:39.851593 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:39.889506 1131323 cri.go:89] found id: ""
	I0328 01:06:39.889541 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.889554 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:39.889564 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:39.889632 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:39.930291 1131323 cri.go:89] found id: ""
	I0328 01:06:39.930321 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.930331 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:39.930337 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:39.930400 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:39.965121 1131323 cri.go:89] found id: ""
	I0328 01:06:39.965160 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.965174 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:39.965183 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:39.965252 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:40.003217 1131323 cri.go:89] found id: ""
	I0328 01:06:40.003248 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.003258 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:40.003264 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:40.003319 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:40.042702 1131323 cri.go:89] found id: ""
	I0328 01:06:40.042737 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.042749 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:40.042759 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:40.042826 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:40.079733 1131323 cri.go:89] found id: ""
	I0328 01:06:40.079769 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.079780 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:40.079788 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:40.079852 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:40.117066 1131323 cri.go:89] found id: ""
	I0328 01:06:40.117098 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.117107 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:40.117117 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:40.117130 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:40.158589 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:40.158623 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:40.210997 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:40.211049 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:40.225419 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:40.225453 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:40.305356 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:40.305385 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:40.305401 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:37.541534 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:39.541905 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:36.874220 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:39.373763 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:38.413719 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:40.912939 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:42.913528 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:42.896394 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:42.912285 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:42.912355 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:42.949381 1131323 cri.go:89] found id: ""
	I0328 01:06:42.949411 1131323 logs.go:276] 0 containers: []
	W0328 01:06:42.949420 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:42.949427 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:42.949496 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:42.985325 1131323 cri.go:89] found id: ""
	I0328 01:06:42.985358 1131323 logs.go:276] 0 containers: []
	W0328 01:06:42.985371 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:42.985388 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:42.985456 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:43.023570 1131323 cri.go:89] found id: ""
	I0328 01:06:43.023616 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.023630 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:43.023638 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:43.023714 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:43.062995 1131323 cri.go:89] found id: ""
	I0328 01:06:43.063025 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.063036 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:43.063042 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:43.063111 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:43.101666 1131323 cri.go:89] found id: ""
	I0328 01:06:43.101704 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.101713 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:43.101720 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:43.101789 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:43.150713 1131323 cri.go:89] found id: ""
	I0328 01:06:43.150745 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.150757 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:43.150765 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:43.150830 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:43.193449 1131323 cri.go:89] found id: ""
	I0328 01:06:43.193479 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.193487 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:43.193495 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:43.193559 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:43.237641 1131323 cri.go:89] found id: ""
	I0328 01:06:43.237673 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.237682 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:43.237698 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:43.237714 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:43.287282 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:43.287320 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:43.303307 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:43.303343 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:43.383597 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:43.383619 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:43.383632 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:43.467874 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:43.467914 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:42.041406 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:44.540550 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:41.874286 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:44.372393 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:45.410973 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:47.412852 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:46.011081 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:46.025731 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:46.025824 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:46.064336 1131323 cri.go:89] found id: ""
	I0328 01:06:46.064371 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.064385 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:46.064394 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:46.064451 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:46.104493 1131323 cri.go:89] found id: ""
	I0328 01:06:46.104530 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.104550 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:46.104559 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:46.104636 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:46.147546 1131323 cri.go:89] found id: ""
	I0328 01:06:46.147582 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.147594 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:46.147602 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:46.147656 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:46.186162 1131323 cri.go:89] found id: ""
	I0328 01:06:46.186197 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.186207 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:46.186213 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:46.186296 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:46.230412 1131323 cri.go:89] found id: ""
	I0328 01:06:46.230450 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.230464 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:46.230473 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:46.230552 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:46.266000 1131323 cri.go:89] found id: ""
	I0328 01:06:46.266037 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.266050 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:46.266059 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:46.266126 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:46.301031 1131323 cri.go:89] found id: ""
	I0328 01:06:46.301065 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.301077 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:46.301084 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:46.301155 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:46.339222 1131323 cri.go:89] found id: ""
	I0328 01:06:46.339248 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.339258 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:46.339271 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:46.339290 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:46.352558 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:46.352595 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:46.427283 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:46.427308 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:46.427325 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:46.512134 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:46.512178 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:46.558276 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:46.558307 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:49.113455 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:49.127554 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:49.127645 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:49.169380 1131323 cri.go:89] found id: ""
	I0328 01:06:49.169421 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.169435 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:49.169444 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:49.169511 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:49.204540 1131323 cri.go:89] found id: ""
	I0328 01:06:49.204568 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.204579 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:49.204596 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:49.204664 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:49.243074 1131323 cri.go:89] found id: ""
	I0328 01:06:49.243102 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.243112 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:49.243119 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:49.243170 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:49.281264 1131323 cri.go:89] found id: ""
	I0328 01:06:49.281301 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.281314 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:49.281322 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:49.281391 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:49.320473 1131323 cri.go:89] found id: ""
	I0328 01:06:49.320505 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.320514 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:49.320521 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:49.320592 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:49.357715 1131323 cri.go:89] found id: ""
	I0328 01:06:49.357749 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.357759 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:49.357766 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:49.357823 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:49.398427 1131323 cri.go:89] found id: ""
	I0328 01:06:49.398464 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.398477 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:49.398498 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:49.398576 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:49.439921 1131323 cri.go:89] found id: ""
	I0328 01:06:49.439956 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.439969 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:49.439982 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:49.440003 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:49.557260 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:49.557289 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:49.557312 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:49.640105 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:49.640169 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:49.683153 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:49.683185 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:49.737420 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:49.737463 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:46.541377 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:49.041761 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:46.374869 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:48.875897 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:49.912535 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:51.912893 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:52.253208 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:52.268572 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:52.268649 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:52.305136 1131323 cri.go:89] found id: ""
	I0328 01:06:52.305180 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.305193 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:52.305202 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:52.305273 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:52.344774 1131323 cri.go:89] found id: ""
	I0328 01:06:52.344806 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.344816 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:52.344823 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:52.344885 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:52.382127 1131323 cri.go:89] found id: ""
	I0328 01:06:52.382174 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.382185 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:52.382200 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:52.382280 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:52.421340 1131323 cri.go:89] found id: ""
	I0328 01:06:52.421368 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.421377 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:52.421383 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:52.421433 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:52.460046 1131323 cri.go:89] found id: ""
	I0328 01:06:52.460084 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.460100 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:52.460107 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:52.460164 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:52.500067 1131323 cri.go:89] found id: ""
	I0328 01:06:52.500094 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.500102 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:52.500109 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:52.500171 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:52.537614 1131323 cri.go:89] found id: ""
	I0328 01:06:52.537646 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.537671 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:52.537680 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:52.537745 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:52.577362 1131323 cri.go:89] found id: ""
	I0328 01:06:52.577392 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.577402 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:52.577417 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:52.577434 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:52.633638 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:52.633689 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:52.650762 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:52.650796 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:52.729436 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:52.729470 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:52.729484 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:52.818193 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:52.818248 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:51.540541 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:53.541340 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:55.542165 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:51.376916 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:53.872313 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:55.873335 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:54.411986 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:56.412892 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:55.362950 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:55.378461 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:55.378577 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:55.419968 1131323 cri.go:89] found id: ""
	I0328 01:06:55.419995 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.420005 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:55.420010 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:55.420072 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:55.464308 1131323 cri.go:89] found id: ""
	I0328 01:06:55.464341 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.464350 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:55.464357 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:55.464421 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:55.523059 1131323 cri.go:89] found id: ""
	I0328 01:06:55.523092 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.523106 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:55.523114 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:55.523186 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:55.570957 1131323 cri.go:89] found id: ""
	I0328 01:06:55.570990 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.571004 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:55.571013 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:55.571077 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:55.606712 1131323 cri.go:89] found id: ""
	I0328 01:06:55.606739 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.606749 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:55.606755 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:55.606817 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:55.646445 1131323 cri.go:89] found id: ""
	I0328 01:06:55.646477 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.646486 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:55.646493 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:55.646548 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:55.685176 1131323 cri.go:89] found id: ""
	I0328 01:06:55.685208 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.685217 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:55.685225 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:55.685289 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:55.722948 1131323 cri.go:89] found id: ""
	I0328 01:06:55.722984 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.722995 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:55.723006 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:55.723022 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:55.797332 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:55.797368 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:55.797385 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:55.877648 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:55.877688 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:55.918966 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:55.918997 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:55.971226 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:55.971272 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:58.488464 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:58.504999 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:58.505088 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:58.549290 1131323 cri.go:89] found id: ""
	I0328 01:06:58.549325 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.549338 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:58.549347 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:58.549414 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:58.589222 1131323 cri.go:89] found id: ""
	I0328 01:06:58.589252 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.589261 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:58.589271 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:58.589337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:58.626470 1131323 cri.go:89] found id: ""
	I0328 01:06:58.626499 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.626508 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:58.626514 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:58.626578 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:58.671634 1131323 cri.go:89] found id: ""
	I0328 01:06:58.671663 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.671674 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:58.671683 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:58.671744 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:58.707335 1131323 cri.go:89] found id: ""
	I0328 01:06:58.707370 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.707381 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:58.707390 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:58.707459 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:58.745635 1131323 cri.go:89] found id: ""
	I0328 01:06:58.745666 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.745679 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:58.745687 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:58.745752 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:58.792172 1131323 cri.go:89] found id: ""
	I0328 01:06:58.792205 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.792216 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:58.792225 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:58.792287 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:58.840027 1131323 cri.go:89] found id: ""
	I0328 01:06:58.840063 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.840075 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:58.840089 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:58.840108 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:58.921964 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:58.921988 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:58.922003 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:59.016935 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:59.016980 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:59.065747 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:59.065788 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:59.119189 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:59.119231 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:58.042362 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:00.544351 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:57.875649 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:00.371953 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:58.406154 1130949 pod_ready.go:81] duration metric: took 4m0.000981669s for pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace to be "Ready" ...
	E0328 01:06:58.406192 1130949 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0328 01:06:58.406218 1130949 pod_ready.go:38] duration metric: took 4m11.713667334s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:06:58.406275 1130949 kubeadm.go:591] duration metric: took 4m19.018883002s to restartPrimaryControlPlane
	W0328 01:06:58.406372 1130949 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:06:58.406432 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:07:01.637081 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:07:01.652557 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:07:01.652634 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:07:01.691795 1131323 cri.go:89] found id: ""
	I0328 01:07:01.691832 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.691846 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:07:01.691854 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:07:01.691927 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:07:01.732815 1131323 cri.go:89] found id: ""
	I0328 01:07:01.732850 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.732861 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:07:01.732868 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:07:01.732938 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:07:01.776370 1131323 cri.go:89] found id: ""
	I0328 01:07:01.776408 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.776422 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:07:01.776431 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:07:01.776501 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:07:01.821260 1131323 cri.go:89] found id: ""
	I0328 01:07:01.821290 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.821301 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:07:01.821308 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:07:01.821377 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:07:01.860666 1131323 cri.go:89] found id: ""
	I0328 01:07:01.860696 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.860708 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:07:01.860719 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:07:01.860787 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:07:01.898255 1131323 cri.go:89] found id: ""
	I0328 01:07:01.898291 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.898304 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:07:01.898314 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:07:01.898383 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:07:01.937770 1131323 cri.go:89] found id: ""
	I0328 01:07:01.937809 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.937822 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:07:01.937830 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:07:01.937901 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:07:01.976946 1131323 cri.go:89] found id: ""
	I0328 01:07:01.976981 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.976994 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:07:01.977008 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:07:01.977027 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:07:02.062804 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:07:02.062845 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:07:02.110750 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:07:02.110783 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:07:02.179633 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:07:02.179677 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:07:02.203131 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:07:02.203181 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:07:02.303281 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:07:04.804238 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:07:04.819654 1131323 kubeadm.go:591] duration metric: took 4m2.527630194s to restartPrimaryControlPlane
	W0328 01:07:04.819747 1131323 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:07:04.819787 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:07:03.041692 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:05.540478 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:02.372472 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:04.376413 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:07.322821 1131323 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.50300166s)
	I0328 01:07:07.322918 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:07:07.338692 1131323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:07:07.349812 1131323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:07:07.361566 1131323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:07:07.361597 1131323 kubeadm.go:156] found existing configuration files:
	
	I0328 01:07:07.361667 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:07:07.372926 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:07:07.373008 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:07:07.383770 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:07:07.394260 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:07:07.394332 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:07:07.405874 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:07:07.417177 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:07:07.417254 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:07:07.428589 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:07:07.438788 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:07:07.438845 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:07:07.449649 1131323 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:07:07.533886 1131323 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0328 01:07:07.533989 1131323 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:07:07.693599 1131323 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:07:07.693736 1131323 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:07:07.693852 1131323 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:07:07.910557 1131323 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:07:07.912634 1131323 out.go:204]   - Generating certificates and keys ...
	I0328 01:07:07.912743 1131323 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:07:07.912855 1131323 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:07:07.912984 1131323 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:07:07.913098 1131323 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:07:07.913212 1131323 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:07:07.913298 1131323 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:07:07.913384 1131323 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:07:07.913569 1131323 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:07:07.913947 1131323 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:07:07.914429 1131323 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:07:07.914649 1131323 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:07:07.914728 1131323 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:07:08.225778 1131323 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:07:08.353927 1131323 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:07:08.631240 1131323 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:07:08.824445 1131323 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:07:08.840240 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:07:08.841200 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:07:08.841315 1131323 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:07:08.997129 1131323 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:07:08.999073 1131323 out.go:204]   - Booting up control plane ...
	I0328 01:07:08.999224 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:07:09.014811 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:07:09.015898 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:07:09.016727 1131323 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:07:09.019426 1131323 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:07:07.541363 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:10.041094 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:06.874606 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:09.372537 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:12.540137 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:14.541608 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:11.372643 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:13.873029 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:16.541814 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:19.047225 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:16.372556 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:18.871954 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:20.872047 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:21.542880 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:24.041786 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:22.872845 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:24.873747 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:26.042186 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:28.541303 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:30.540610 1130949 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.134147754s)
	I0328 01:07:30.540688 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:07:30.558971 1130949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:07:30.570331 1130949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:07:30.581192 1130949 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:07:30.581246 1130949 kubeadm.go:156] found existing configuration files:
	
	I0328 01:07:30.581306 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:07:30.592337 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:07:30.592410 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:07:30.603288 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:07:30.613714 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:07:30.613776 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:07:30.624281 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:07:30.634569 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:07:30.634644 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:07:30.647279 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:07:30.658554 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:07:30.658646 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:07:30.670364 1130949 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:07:30.730349 1130949 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0328 01:07:30.730414 1130949 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:07:30.887056 1130949 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:07:30.887234 1130949 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:07:30.887385 1130949 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:07:31.104288 1130949 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:07:27.373135 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:29.373436 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:31.106496 1130949 out.go:204]   - Generating certificates and keys ...
	I0328 01:07:31.106628 1130949 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:07:31.106697 1130949 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:07:31.106765 1130949 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:07:31.106826 1130949 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:07:31.106892 1130949 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:07:31.107528 1130949 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:07:31.108302 1130949 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:07:31.112246 1130949 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:07:31.112762 1130949 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:07:31.113711 1130949 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:07:31.115230 1130949 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:07:31.115284 1130949 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:07:31.297632 1130949 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:07:32.446275 1130949 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:07:32.565869 1130949 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:07:32.641288 1130949 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:07:32.817229 1130949 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:07:32.817814 1130949 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:07:32.820366 1130949 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:07:32.822328 1130949 out.go:204]   - Booting up control plane ...
	I0328 01:07:32.822467 1130949 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:07:32.822550 1130949 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:07:32.822990 1130949 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:07:32.846800 1130949 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:07:32.847829 1130949 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:07:32.847902 1130949 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:07:31.044103 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:33.542106 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:35.542875 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:31.873591 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:33.875737 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:35.881819 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:32.992001 1130949 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:07:38.997010 1130949 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003888 seconds
	I0328 01:07:39.012971 1130949 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 01:07:39.036328 1130949 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 01:07:39.569806 1130949 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 01:07:39.570135 1130949 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-808809 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 01:07:40.085165 1130949 kubeadm.go:309] [bootstrap-token] Using token: 4zk5zi.uttj4zihedk5oj6k
	I0328 01:07:40.086719 1130949 out.go:204]   - Configuring RBAC rules ...
	I0328 01:07:40.086873 1130949 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 01:07:40.096373 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 01:07:40.106484 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 01:07:40.110525 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 01:07:40.120015 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 01:07:40.129060 1130949 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 01:07:40.141167 1130949 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 01:07:40.415429 1130949 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 01:07:40.507275 1130949 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 01:07:40.507333 1130949 kubeadm.go:309] 
	I0328 01:07:40.507551 1130949 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 01:07:40.507617 1130949 kubeadm.go:309] 
	I0328 01:07:40.507860 1130949 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 01:07:40.507891 1130949 kubeadm.go:309] 
	I0328 01:07:40.507947 1130949 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 01:07:40.508057 1130949 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 01:07:40.508140 1130949 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 01:07:40.508157 1130949 kubeadm.go:309] 
	I0328 01:07:40.508250 1130949 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 01:07:40.508264 1130949 kubeadm.go:309] 
	I0328 01:07:40.508329 1130949 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 01:07:40.508344 1130949 kubeadm.go:309] 
	I0328 01:07:40.508421 1130949 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 01:07:40.508539 1130949 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 01:07:40.508626 1130949 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 01:07:40.508632 1130949 kubeadm.go:309] 
	I0328 01:07:40.508804 1130949 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 01:07:40.508970 1130949 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 01:07:40.508990 1130949 kubeadm.go:309] 
	I0328 01:07:40.509155 1130949 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4zk5zi.uttj4zihedk5oj6k \
	I0328 01:07:40.509474 1130949 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 \
	I0328 01:07:40.509514 1130949 kubeadm.go:309] 	--control-plane 
	I0328 01:07:40.509524 1130949 kubeadm.go:309] 
	I0328 01:07:40.509641 1130949 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 01:07:40.509655 1130949 kubeadm.go:309] 
	I0328 01:07:40.509767 1130949 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4zk5zi.uttj4zihedk5oj6k \
	I0328 01:07:40.509932 1130949 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 
	I0328 01:07:40.510139 1130949 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:07:40.510157 1130949 cni.go:84] Creating CNI manager for ""
	I0328 01:07:40.510166 1130949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:07:40.512099 1130949 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:07:38.041290 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:40.041569 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:38.373789 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:40.374369 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:40.513314 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:07:40.563257 1130949 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
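The two lines above write the bridge CNI config (457 bytes) to /etc/cni/net.d/1-k8s.conflist on the node; the conflist contents themselves are not captured in this log. To inspect what was actually installed, something like the following would do (illustrative commands, using the profile name from this run):

    # Look at the CNI config the bridge setup just wrote inside the guest.
    minikube ssh -p embed-certs-808809 "sudo ls -la /etc/cni/net.d/"
    minikube ssh -p embed-certs-808809 "sudo cat /etc/cni/net.d/1-k8s.conflist"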
	I0328 01:07:40.627024 1130949 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 01:07:40.627097 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:40.627137 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-808809 minikube.k8s.io/updated_at=2024_03_28T01_07_40_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=embed-certs-808809 minikube.k8s.io/primary=true
	I0328 01:07:40.928916 1130949 ops.go:34] apiserver oom_adj: -16
	I0328 01:07:40.929138 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:41.429797 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:41.930103 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:42.429366 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:42.540932 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:44.035055 1131600 pod_ready.go:81] duration metric: took 4m0.000860608s for pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace to be "Ready" ...
	E0328 01:07:44.035094 1131600 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0328 01:07:44.035124 1131600 pod_ready.go:38] duration metric: took 4m14.608998431s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:07:44.035180 1131600 kubeadm.go:591] duration metric: took 4m23.470228903s to restartPrimaryControlPlane
	W0328 01:07:44.035292 1131600 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:07:44.035344 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
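The wait above gives up after 4m0s because the metrics-server pod never reports Ready, and the control-plane restart falls back to a full kubeadm reset. The usual way to see why the pod is stuck (illustrative follow-up commands against whichever profile's kubeconfig this run uses; the pod name is taken from the log):

    # Pod events usually show the reason (image pull, scheduling, failing probes, ...).
    kubectl -n kube-system describe pod metrics-server-57f55c9bc5-w4ww4
    kubectl -n kube-system get pods -o wide
    # In this suite the metrics-server addon is pointed at fake.domain/... (see the
    # addon install later in this log), so an image pull failure is the likely cause.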
	I0328 01:07:42.375179 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:44.876120 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:42.929464 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:43.429369 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:43.929241 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:44.429904 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:44.930251 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:45.429816 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:45.930177 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:46.429416 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:46.929152 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:47.429708 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:49.021732 1131323 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0328 01:07:49.021890 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:07:49.022195 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
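The kubelet-check messages above mean nothing is answering on the kubelet's health port inside that guest, so this v1.20.0 control plane cannot come up. Typical follow-up on the node looks like this (a sketch; the profile for this cluster is not named in this excerpt):

    # Is the kubelet running at all, and what is it logging?
    sudo systemctl status kubelet
    sudo journalctl -u kubelet --no-pager | tail -n 50
    # The same probe kubeadm's kubelet-check performs:
    curl -sSL http://localhost:10248/healthz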
	I0328 01:07:47.373358 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:49.872482 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:47.929139 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:48.429732 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:48.930207 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:49.429230 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:49.929298 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:50.429919 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:50.929364 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:51.429403 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:51.929356 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:52.429410 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:52.929894 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:53.043365 1130949 kubeadm.go:1107] duration metric: took 12.416334145s to wait for elevateKubeSystemPrivileges
	W0328 01:07:53.043410 1130949 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 01:07:53.043419 1130949 kubeadm.go:393] duration metric: took 5m13.709259014s to StartCluster
	I0328 01:07:53.043445 1130949 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:07:53.043560 1130949 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:07:53.045798 1130949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:07:53.046158 1130949 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 01:07:53.047867 1130949 out.go:177] * Verifying Kubernetes components...
	I0328 01:07:53.046201 1130949 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 01:07:53.046412 1130949 config.go:182] Loaded profile config "embed-certs-808809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:07:53.049163 1130949 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-808809"
	I0328 01:07:53.049175 1130949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:07:53.049195 1130949 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-808809"
	W0328 01:07:53.049204 1130949 addons.go:243] addon storage-provisioner should already be in state true
	I0328 01:07:53.049230 1130949 host.go:66] Checking if "embed-certs-808809" exists ...
	I0328 01:07:53.049205 1130949 addons.go:69] Setting default-storageclass=true in profile "embed-certs-808809"
	I0328 01:07:53.049250 1130949 addons.go:69] Setting metrics-server=true in profile "embed-certs-808809"
	I0328 01:07:53.049271 1130949 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-808809"
	I0328 01:07:53.049309 1130949 addons.go:234] Setting addon metrics-server=true in "embed-certs-808809"
	W0328 01:07:53.049327 1130949 addons.go:243] addon metrics-server should already be in state true
	I0328 01:07:53.049371 1130949 host.go:66] Checking if "embed-certs-808809" exists ...
	I0328 01:07:53.049530 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.049569 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.049696 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.049729 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.049795 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.049838 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.067042 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36143
	I0328 01:07:53.067078 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33653
	I0328 01:07:53.067536 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.067599 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.068156 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.068184 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.068289 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.068315 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.068583 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.068669 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.069095 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.069121 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.069245 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.069276 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.069991 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43737
	I0328 01:07:53.070509 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.071078 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.071103 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.071480 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.071705 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.075617 1130949 addons.go:234] Setting addon default-storageclass=true in "embed-certs-808809"
	W0328 01:07:53.075659 1130949 addons.go:243] addon default-storageclass should already be in state true
	I0328 01:07:53.075703 1130949 host.go:66] Checking if "embed-certs-808809" exists ...
	I0328 01:07:53.075982 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.076011 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.085991 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I0328 01:07:53.086508 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.086724 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36033
	I0328 01:07:53.087105 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.087122 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.087158 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.087646 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.087667 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.087706 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.087922 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.088031 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.088225 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.089941 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:07:53.090168 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:07:53.091945 1130949 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:07:53.093023 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41613
	I0328 01:07:53.093537 1130949 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:07:53.093553 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 01:07:53.093563 1130949 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 01:07:53.095147 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 01:07:53.095165 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 01:07:53.093574 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:07:53.095185 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:07:53.093939 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.096301 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.096322 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.096662 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.097251 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.097306 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.098907 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.099014 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.099513 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:07:53.099546 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.099996 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:07:53.100126 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:07:53.100177 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:07:53.100187 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.100287 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:07:53.100392 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:07:53.100470 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:07:53.100576 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:07:53.100709 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:07:53.100796 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:07:53.114056 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41121
	I0328 01:07:53.114680 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.115279 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.115313 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.115721 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.116061 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.118022 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:07:53.118348 1130949 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 01:07:53.118370 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 01:07:53.118391 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:07:53.121337 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.121699 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:07:53.121728 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.121906 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:07:53.122084 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:07:53.122266 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:07:53.122414 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:07:53.242121 1130949 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:07:53.267118 1130949 node_ready.go:35] waiting up to 6m0s for node "embed-certs-808809" to be "Ready" ...
	I0328 01:07:53.276640 1130949 node_ready.go:49] node "embed-certs-808809" has status "Ready":"True"
	I0328 01:07:53.276670 1130949 node_ready.go:38] duration metric: took 9.513599ms for node "embed-certs-808809" to be "Ready" ...
	I0328 01:07:53.276683 1130949 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:07:53.283091 1130949 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-2rn6k" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:53.325201 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 01:07:53.325234 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 01:07:53.341335 1130949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 01:07:53.361084 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 01:07:53.361109 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 01:07:53.393089 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:07:53.393116 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 01:07:53.419245 1130949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:07:53.445663 1130949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:07:53.515515 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:53.515555 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:53.515871 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:53.515891 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:53.515901 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:53.515910 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:53.516173 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:53.516253 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:53.516212 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:53.527854 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:53.527882 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:53.528152 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:53.528173 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:53.528220 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.159164 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159192 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159264 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159292 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159523 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.159597 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.159619 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.159637 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.159648 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159658 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159660 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.159667 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.159688 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159696 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159981 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.160037 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.160056 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.160062 1130949 addons.go:470] Verifying addon metrics-server=true in "embed-certs-808809"
	I0328 01:07:54.160088 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.160090 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.160106 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.162879 1130949 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0328 01:07:54.022449 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:07:54.022704 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:07:52.372314 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:54.372913 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:54.164263 1130949 addons.go:505] duration metric: took 1.11806212s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
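With default-storageclass, metrics-server and storage-provisioner enabled, the addons can be checked against the freshly written kubeconfig (illustrative commands; the context name comes from this run):

    # Addon workloads land in kube-system; the default StorageClass is cluster-scoped.
    kubectl --context embed-certs-808809 -n kube-system get pods
    kubectl --context embed-certs-808809 get storageclass
    # metrics-server is still Pending here (its image points at fake.domain),
    # so 'kubectl top nodes' would not return data yet.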
	I0328 01:07:55.294728 1130949 pod_ready.go:102] pod "coredns-76f75df574-2rn6k" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:55.790690 1130949 pod_ready.go:92] pod "coredns-76f75df574-2rn6k" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.790717 1130949 pod_ready.go:81] duration metric: took 2.50759161s for pod "coredns-76f75df574-2rn6k" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.790726 1130949 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-pgcdh" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.796249 1130949 pod_ready.go:92] pod "coredns-76f75df574-pgcdh" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.796279 1130949 pod_ready.go:81] duration metric: took 5.54233ms for pod "coredns-76f75df574-pgcdh" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.796291 1130949 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.801226 1130949 pod_ready.go:92] pod "etcd-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.801254 1130949 pod_ready.go:81] duration metric: took 4.956106ms for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.801263 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.814571 1130949 pod_ready.go:92] pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.814599 1130949 pod_ready.go:81] duration metric: took 13.328662ms for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.814613 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.825995 1130949 pod_ready.go:92] pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.826022 1130949 pod_ready.go:81] duration metric: took 11.401096ms for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.826035 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tjbhs" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.188116 1130949 pod_ready.go:92] pod "kube-proxy-tjbhs" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:56.188147 1130949 pod_ready.go:81] duration metric: took 362.103962ms for pod "kube-proxy-tjbhs" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.188161 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.588294 1130949 pod_ready.go:92] pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:56.588334 1130949 pod_ready.go:81] duration metric: took 400.16517ms for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.588347 1130949 pod_ready.go:38] duration metric: took 3.311651338s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:07:56.588369 1130949 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:07:56.588445 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:07:56.606404 1130949 api_server.go:72] duration metric: took 3.560197315s to wait for apiserver process to appear ...
	I0328 01:07:56.606435 1130949 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:07:56.606460 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:07:56.612218 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 200:
	ok
	I0328 01:07:56.613459 1130949 api_server.go:141] control plane version: v1.29.3
	I0328 01:07:56.613481 1130949 api_server.go:131] duration metric: took 7.039378ms to wait for apiserver health ...
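The healthz probe above talks to the apiserver endpoint directly. Reproducing it by hand looks roughly like this (illustrative; the address comes from the log, and -k skips verification of the cluster CA):

    # Same endpoint minikube polls; a healthy apiserver answers with a plain "ok".
    curl -k https://192.168.72.210:8443/healthz
    # Or let kubectl reuse the CA and client certs from the kubeconfig:
    kubectl --context embed-certs-808809 get --raw /healthz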
	I0328 01:07:56.613490 1130949 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:07:56.793192 1130949 system_pods.go:59] 9 kube-system pods found
	I0328 01:07:56.793227 1130949 system_pods.go:61] "coredns-76f75df574-2rn6k" [2a77c778-dd83-4e2e-b45a-ca16e3922b45] Running
	I0328 01:07:56.793232 1130949 system_pods.go:61] "coredns-76f75df574-pgcdh" [52452b24-490e-4999-b700-198c6f9b2fa1] Running
	I0328 01:07:56.793236 1130949 system_pods.go:61] "etcd-embed-certs-808809" [cba526ab-c8ca-4b90-abb8-533d1bf6cb4f] Running
	I0328 01:07:56.793239 1130949 system_pods.go:61] "kube-apiserver-embed-certs-808809" [02934ac6-1e07-4cbe-ba2f-4149e59d6044] Running
	I0328 01:07:56.793243 1130949 system_pods.go:61] "kube-controller-manager-embed-certs-808809" [b3350ff9-7abb-407e-939c-fcde61186c17] Running
	I0328 01:07:56.793246 1130949 system_pods.go:61] "kube-proxy-tjbhs" [cdb30ca1-5165-4e24-888a-df79af7987d0] Running
	I0328 01:07:56.793249 1130949 system_pods.go:61] "kube-scheduler-embed-certs-808809" [5b52a22d-6ffb-468b-b7b9-8f89b0dee3b8] Running
	I0328 01:07:56.793255 1130949 system_pods.go:61] "metrics-server-57f55c9bc5-bqbfl" [8434fd7d-838b-4cf2-96a3-e4d613633871] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:07:56.793260 1130949 system_pods.go:61] "storage-provisioner" [20c1951e-7da8-4025-bbcf-2da60f87f3ab] Running
	I0328 01:07:56.793268 1130949 system_pods.go:74] duration metric: took 179.77213ms to wait for pod list to return data ...
	I0328 01:07:56.793275 1130949 default_sa.go:34] waiting for default service account to be created ...
	I0328 01:07:56.988234 1130949 default_sa.go:45] found service account: "default"
	I0328 01:07:56.988274 1130949 default_sa.go:55] duration metric: took 194.984089ms for default service account to be created ...
	I0328 01:07:56.988288 1130949 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 01:07:57.192153 1130949 system_pods.go:86] 9 kube-system pods found
	I0328 01:07:57.192188 1130949 system_pods.go:89] "coredns-76f75df574-2rn6k" [2a77c778-dd83-4e2e-b45a-ca16e3922b45] Running
	I0328 01:07:57.192194 1130949 system_pods.go:89] "coredns-76f75df574-pgcdh" [52452b24-490e-4999-b700-198c6f9b2fa1] Running
	I0328 01:07:57.192200 1130949 system_pods.go:89] "etcd-embed-certs-808809" [cba526ab-c8ca-4b90-abb8-533d1bf6cb4f] Running
	I0328 01:07:57.192205 1130949 system_pods.go:89] "kube-apiserver-embed-certs-808809" [02934ac6-1e07-4cbe-ba2f-4149e59d6044] Running
	I0328 01:07:57.192210 1130949 system_pods.go:89] "kube-controller-manager-embed-certs-808809" [b3350ff9-7abb-407e-939c-fcde61186c17] Running
	I0328 01:07:57.192214 1130949 system_pods.go:89] "kube-proxy-tjbhs" [cdb30ca1-5165-4e24-888a-df79af7987d0] Running
	I0328 01:07:57.192218 1130949 system_pods.go:89] "kube-scheduler-embed-certs-808809" [5b52a22d-6ffb-468b-b7b9-8f89b0dee3b8] Running
	I0328 01:07:57.192225 1130949 system_pods.go:89] "metrics-server-57f55c9bc5-bqbfl" [8434fd7d-838b-4cf2-96a3-e4d613633871] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:07:57.192230 1130949 system_pods.go:89] "storage-provisioner" [20c1951e-7da8-4025-bbcf-2da60f87f3ab] Running
	I0328 01:07:57.192239 1130949 system_pods.go:126] duration metric: took 203.942878ms to wait for k8s-apps to be running ...
	I0328 01:07:57.192249 1130949 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:07:57.192301 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:07:57.209840 1130949 system_svc.go:56] duration metric: took 17.576605ms WaitForService to wait for kubelet
	I0328 01:07:57.209883 1130949 kubeadm.go:576] duration metric: took 4.163683877s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:07:57.209918 1130949 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:07:57.388321 1130949 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:07:57.388347 1130949 node_conditions.go:123] node cpu capacity is 2
	I0328 01:07:57.388357 1130949 node_conditions.go:105] duration metric: took 178.433633ms to run NodePressure ...
	I0328 01:07:57.388370 1130949 start.go:240] waiting for startup goroutines ...
	I0328 01:07:57.388377 1130949 start.go:245] waiting for cluster config update ...
	I0328 01:07:57.388387 1130949 start.go:254] writing updated cluster config ...
	I0328 01:07:57.388784 1130949 ssh_runner.go:195] Run: rm -f paused
	I0328 01:07:57.446699 1130949 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0328 01:07:57.448951 1130949 out.go:177] * Done! kubectl is now configured to use "embed-certs-808809" cluster and "default" namespace by default
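At this point the embed-certs-808809 profile is up and its context has been written to kubeconfig. A minimal sanity check from the host, assuming the kubeconfig minikube just wrote is the active one, would be:

    kubectl --context embed-certs-808809 get nodes
    kubectl --context embed-certs-808809 -n kube-system get pods -o wide

Both are standard kubectl commands; the context name is taken directly from the log line above.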
	I0328 01:07:56.373123 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:58.872454 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:04.023273 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:08:04.023535 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:08:01.372711 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:03.877734 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:06.374031 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:07.366164 1130827 pod_ready.go:81] duration metric: took 4m0.000887668s for pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace to be "Ready" ...
	E0328 01:08:07.366245 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0328 01:08:07.366271 1130827 pod_ready.go:38] duration metric: took 4m7.906522585s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:08:07.366301 1130827 kubeadm.go:591] duration metric: took 4m15.27169704s to restartPrimaryControlPlane
	W0328 01:08:07.366368 1130827 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:08:07.366406 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
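The 4m0s timeout above applies only to the extra wait for metrics-server-569cc877fc-cvnrj to report Ready; since the control plane itself did not come back cleanly, minikube falls back to a full kubeadm reset. To see why a metrics-server pod stays NotReady one would typically inspect it directly (the --context value below is a placeholder, as this goroutine's profile name is not shown in this excerpt):

    kubectl --context <profile> -n kube-system describe pod metrics-server-569cc877fc-cvnrj
    kubectl --context <profile> -n kube-system logs metrics-server-569cc877fc-cvnrj

Note that the addon image configured later in this log is fake.domain/registry.k8s.io/echoserver:1.4, which cannot be pulled, so a pull failure in the describe output is the likely reason the pod never becomes Ready.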
	I0328 01:08:16.281280 1131600 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.245904746s)
	I0328 01:08:16.281365 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:08:16.298463 1131600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:08:16.310406 1131600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:08:16.321387 1131600 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:08:16.321415 1131600 kubeadm.go:156] found existing configuration files:
	
	I0328 01:08:16.321475 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0328 01:08:16.331965 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:08:16.332033 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:08:16.343030 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0328 01:08:16.353193 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:08:16.353254 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:08:16.363865 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0328 01:08:16.374276 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:08:16.374346 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:08:16.385300 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0328 01:08:16.396118 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:08:16.396181 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
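The grep/rm pairs above are minikube's stale-config check: for each kubeconfig under /etc/kubernetes it looks for the expected control-plane endpoint (here port 8444) and removes the file when the endpoint is absent or the file does not exist. A rough shell equivalent of what this goroutine just did (a sketch, not minikube's actual code):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already points at the expected endpoint
      sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done

Because the earlier kubeadm reset already deleted all four files, every grep exits with status 2 and the rm calls are effectively no-ops before the fresh kubeadm init that follows.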
	I0328 01:08:16.406896 1131600 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:08:16.626615 1131600 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:08:24.024091 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:08:24.024388 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
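Process 1131323 (a different profile in the same batch) is stuck in kubeadm's kubelet-check: the kubelet's local health endpoint on port 10248 is refusing connections. The usual way to investigate that state on the node, for example after minikube ssh -p <profile> (profile name not shown in this excerpt), is:

    sudo systemctl status kubelet
    sudo journalctl -u kubelet --no-pager | tail -n 50
    curl -s http://localhost:10248/healthz

These are generic systemd/kubelet commands, not something this log ran.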
	I0328 01:08:25.420974 1131600 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0328 01:08:25.421059 1131600 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:08:25.421154 1131600 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:08:25.421300 1131600 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:08:25.421547 1131600 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:08:25.421649 1131600 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:08:25.423435 1131600 out.go:204]   - Generating certificates and keys ...
	I0328 01:08:25.423549 1131600 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:08:25.423630 1131600 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:08:25.423749 1131600 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:08:25.423844 1131600 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:08:25.423956 1131600 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:08:25.424058 1131600 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:08:25.424166 1131600 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:08:25.424260 1131600 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:08:25.424375 1131600 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:08:25.424489 1131600 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:08:25.424552 1131600 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:08:25.424642 1131600 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:08:25.424700 1131600 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:08:25.424765 1131600 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:08:25.424832 1131600 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:08:25.424920 1131600 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:08:25.424982 1131600 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:08:25.425106 1131600 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:08:25.425207 1131600 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:08:25.426863 1131600 out.go:204]   - Booting up control plane ...
	I0328 01:08:25.427001 1131600 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:08:25.427108 1131600 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:08:25.427205 1131600 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:08:25.427327 1131600 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:08:25.427431 1131600 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:08:25.427491 1131600 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:08:25.427686 1131600 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:08:25.427784 1131600 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003000 seconds
	I0328 01:08:25.427897 1131600 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 01:08:25.428032 1131600 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 01:08:25.428109 1131600 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 01:08:25.428325 1131600 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-283961 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 01:08:25.428408 1131600 kubeadm.go:309] [bootstrap-token] Using token: g6jusr.8nbqw788gjbu8fwz
	I0328 01:08:25.430595 1131600 out.go:204]   - Configuring RBAC rules ...
	I0328 01:08:25.430734 1131600 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 01:08:25.430837 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 01:08:25.430981 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 01:08:25.431163 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 01:08:25.431357 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 01:08:25.431481 1131600 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 01:08:25.431670 1131600 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 01:08:25.431726 1131600 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 01:08:25.431767 1131600 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 01:08:25.431774 1131600 kubeadm.go:309] 
	I0328 01:08:25.431819 1131600 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 01:08:25.431829 1131600 kubeadm.go:309] 
	I0328 01:08:25.431893 1131600 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 01:08:25.431900 1131600 kubeadm.go:309] 
	I0328 01:08:25.431934 1131600 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 01:08:25.432028 1131600 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 01:08:25.432089 1131600 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 01:08:25.432114 1131600 kubeadm.go:309] 
	I0328 01:08:25.432178 1131600 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 01:08:25.432186 1131600 kubeadm.go:309] 
	I0328 01:08:25.432245 1131600 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 01:08:25.432255 1131600 kubeadm.go:309] 
	I0328 01:08:25.432342 1131600 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 01:08:25.432454 1131600 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 01:08:25.432566 1131600 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 01:08:25.432576 1131600 kubeadm.go:309] 
	I0328 01:08:25.432719 1131600 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 01:08:25.432812 1131600 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 01:08:25.432825 1131600 kubeadm.go:309] 
	I0328 01:08:25.432914 1131600 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token g6jusr.8nbqw788gjbu8fwz \
	I0328 01:08:25.433018 1131600 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 \
	I0328 01:08:25.433052 1131600 kubeadm.go:309] 	--control-plane 
	I0328 01:08:25.433058 1131600 kubeadm.go:309] 
	I0328 01:08:25.433135 1131600 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 01:08:25.433143 1131600 kubeadm.go:309] 
	I0328 01:08:25.433222 1131600 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token g6jusr.8nbqw788gjbu8fwz \
	I0328 01:08:25.433318 1131600 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 
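The join command printed by kubeadm can be reconstructed later if the token or hash is lost. On this control plane, where kubeadm was pointed at certificateDir /var/lib/minikube/certs, the standard recipe is roughly:

    # list bootstrap tokens that are still valid
    sudo kubeadm token list
    # recompute the discovery-token-ca-cert-hash from the cluster CA
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'

This is the documented kubeadm procedure; only the certificate path is specific to this minikube layout.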
	I0328 01:08:25.433337 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:08:25.433346 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:08:25.434943 1131600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:08:25.436103 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:08:25.483149 1131600 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
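The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI config announced two lines earlier. For orientation, a representative bridge conflist looks like the sketch below; the exact fields and subnet minikube writes are not reproduced in this log, so treat this as illustrative only:

    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF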
	I0328 01:08:25.508422 1131600 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 01:08:25.508514 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:25.508518 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-283961 minikube.k8s.io/updated_at=2024_03_28T01_08_25_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=default-k8s-diff-port-283961 minikube.k8s.io/primary=true
	I0328 01:08:25.537955 1131600 ops.go:34] apiserver oom_adj: -16
	I0328 01:08:25.738462 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:26.239473 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:26.739478 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:27.238883 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:27.738830 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:28.239281 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:28.738643 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:29.238703 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:29.739025 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:30.239127 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:30.739473 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:31.239461 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:31.739480 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:32.239525 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:32.738543 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:33.239468 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:33.739475 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:34.238558 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:34.739550 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:35.239400 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:35.738766 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:36.239384 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:36.738797 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:37.238736 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:37.739543 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:37.850963 1131600 kubeadm.go:1107] duration metric: took 12.342521507s to wait for elevateKubeSystemPrivileges
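The block of repeated "kubectl get sa default" runs above is a simple poll: minikube keeps asking the new API server for the default ServiceAccount until it exists, at which point the minikube-rbac ClusterRoleBinding created earlier can take effect. As a sketch (not minikube's actual implementation), the loop amounts to the following, with roughly half a second between attempts as in the timestamps above:

    until sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done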
	W0328 01:08:37.851011 1131600 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 01:08:37.851024 1131600 kubeadm.go:393] duration metric: took 5m17.339661641s to StartCluster
	I0328 01:08:37.851048 1131600 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:08:37.851164 1131600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:08:37.853862 1131600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:08:37.854264 1131600 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 01:08:37.856170 1131600 out.go:177] * Verifying Kubernetes components...
	I0328 01:08:37.854341 1131600 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 01:08:37.854447 1131600 config.go:182] Loaded profile config "default-k8s-diff-port-283961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:08:37.857860 1131600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:08:37.857864 1131600 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-283961"
	I0328 01:08:37.857878 1131600 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-283961"
	I0328 01:08:37.857885 1131600 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-283961"
	I0328 01:08:37.857909 1131600 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-283961"
	I0328 01:08:37.857912 1131600 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-283961"
	W0328 01:08:37.857923 1131600 addons.go:243] addon storage-provisioner should already be in state true
	I0328 01:08:37.857928 1131600 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-283961"
	W0328 01:08:37.857941 1131600 addons.go:243] addon metrics-server should already be in state true
	I0328 01:08:37.857970 1131600 host.go:66] Checking if "default-k8s-diff-port-283961" exists ...
	I0328 01:08:37.857983 1131600 host.go:66] Checking if "default-k8s-diff-port-283961" exists ...
	I0328 01:08:37.858330 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.858363 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.858403 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.858436 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.858335 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.858509 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.881197 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32799
	I0328 01:08:37.881230 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35101
	I0328 01:08:37.881244 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0328 01:08:37.881857 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.881882 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.882021 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.882460 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.882482 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.882523 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.882540 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.882585 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.882601 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.882934 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.882992 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.883007 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.883239 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.883592 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.883620 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.883625 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.883644 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.887335 1131600 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-283961"
	W0328 01:08:37.887359 1131600 addons.go:243] addon default-storageclass should already be in state true
	I0328 01:08:37.887390 1131600 host.go:66] Checking if "default-k8s-diff-port-283961" exists ...
	I0328 01:08:37.887745 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.887779 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.901416 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37383
	I0328 01:08:37.901909 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.902530 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.902559 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.902967 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.903211 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.904529 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36633
	I0328 01:08:37.905034 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.905268 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:08:37.907486 1131600 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:08:37.905802 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.909062 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.909180 1131600 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:08:37.909196 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 01:08:37.909218 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:08:37.909555 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.909794 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.911251 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33539
	I0328 01:08:37.911845 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.911995 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:08:37.913838 1131600 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 01:08:37.912457 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.913039 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.913804 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:08:37.915256 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.915268 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:08:37.915288 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 01:08:37.915297 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.915303 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 01:08:37.915321 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:08:37.915492 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:08:37.915674 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:08:37.915894 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:08:37.916689 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.917364 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.917410 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.918302 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.918651 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:08:37.918678 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.918944 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:08:37.919117 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:08:37.919267 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:08:37.919386 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:08:37.935233 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45659
	I0328 01:08:37.935750 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.936283 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.936301 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.936691 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.936872 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.938736 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:08:37.939016 1131600 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 01:08:37.939042 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 01:08:37.939065 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:08:37.941653 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.941967 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:08:37.941991 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.942199 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:08:37.942405 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:08:37.942575 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:08:37.942761 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:08:38.109817 1131600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:08:38.134996 1131600 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-283961" to be "Ready" ...
	I0328 01:08:38.158252 1131600 node_ready.go:49] node "default-k8s-diff-port-283961" has status "Ready":"True"
	I0328 01:08:38.158286 1131600 node_ready.go:38] duration metric: took 23.249221ms for node "default-k8s-diff-port-283961" to be "Ready" ...
	I0328 01:08:38.158305 1131600 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:08:38.170391 1131600 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-gdv5x" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:38.277223 1131600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:08:38.299923 1131600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 01:08:38.300686 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 01:08:38.300707 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 01:08:38.355800 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 01:08:38.355837 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 01:08:38.464742 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:08:38.464769 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 01:08:38.542696 1131600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
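The metrics-server addon is applied as four manifests: the APIService registration, the Deployment, its RBAC, and the Service. Once the apply completes, the registration and rollout can be checked with ordinary kubectl against this profile:

    kubectl --context default-k8s-diff-port-283961 get apiservice v1beta1.metrics.k8s.io
    kubectl --context default-k8s-diff-port-283961 -n kube-system get deploy metrics-server

The APIService name v1beta1.metrics.k8s.io is the upstream metrics-server default; the log does not print the manifest contents, so that name is an assumption here.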
	I0328 01:08:39.644116 1131600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.344141889s)
	I0328 01:08:39.644184 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644189 1131600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.366934481s)
	I0328 01:08:39.644197 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644210 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644219 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644620 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.644644 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.644654 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644664 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644846 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.644865 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.644890 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644905 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644987 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.645004 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.645154 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.645171 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.708104 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.708143 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.708543 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.708567 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.739487 1131600 pod_ready.go:92] pod "coredns-76f75df574-gdv5x" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.739515 1131600 pod_ready.go:81] duration metric: took 1.569088177s for pod "coredns-76f75df574-gdv5x" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.739526 1131600 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-qzcfp" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.797314 1131600 pod_ready.go:92] pod "coredns-76f75df574-qzcfp" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.797347 1131600 pod_ready.go:81] duration metric: took 57.813218ms for pod "coredns-76f75df574-qzcfp" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.797366 1131600 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.830784 1131600 pod_ready.go:92] pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.830865 1131600 pod_ready.go:81] duration metric: took 33.488753ms for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.830886 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.852459 1131600 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.852489 1131600 pod_ready.go:81] duration metric: took 21.594748ms for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.852501 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.862630 1131600 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.862658 1131600 pod_ready.go:81] duration metric: took 10.149867ms for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.862674 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-js7j2" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.893124 1131600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.350363727s)
	I0328 01:08:39.893191 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.893216 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.893559 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Closing plugin on server side
	I0328 01:08:39.893568 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.893617 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.893634 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.893646 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.893985 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Closing plugin on server side
	I0328 01:08:39.893985 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.894013 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.894031 1131600 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-283961"
	I0328 01:08:39.896978 1131600 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0328 01:08:39.898636 1131600 addons.go:505] duration metric: took 2.044292782s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
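A quick way to confirm the same addon state from the host is minikube's own listing for this profile:

    minikube -p default-k8s-diff-port-283961 addons list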
	I0328 01:08:40.138962 1131600 pod_ready.go:92] pod "kube-proxy-js7j2" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:40.138994 1131600 pod_ready.go:81] duration metric: took 276.313147ms for pod "kube-proxy-js7j2" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:40.139006 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:40.538892 1131600 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:40.538917 1131600 pod_ready.go:81] duration metric: took 399.903327ms for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:40.538925 1131600 pod_ready.go:38] duration metric: took 2.380606168s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:08:40.538943 1131600 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:08:40.539009 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:08:40.561639 1131600 api_server.go:72] duration metric: took 2.707321816s to wait for apiserver process to appear ...
	I0328 01:08:40.561681 1131600 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:08:40.561709 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:08:40.568521 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 200:
	ok
	I0328 01:08:40.570016 1131600 api_server.go:141] control plane version: v1.29.3
	I0328 01:08:40.570060 1131600 api_server.go:131] duration metric: took 8.369036ms to wait for apiserver health ...
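The healthz probe above goes straight at the API server on the non-default port 8444. The equivalent manual check, either skipping TLS verification or trusting the minikube CA explicitly, would be (assuming anonymous access to /healthz and /readyz is still allowed by the default system:public-info-viewer binding):

    curl -k https://192.168.39.224:8444/healthz
    curl --cacert /var/lib/minikube/certs/ca.crt 'https://192.168.39.224:8444/readyz?verbose'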
	I0328 01:08:40.570071 1131600 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:08:39.696094 1130827 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.32965227s)
	I0328 01:08:39.696193 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:08:39.717556 1130827 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:08:39.730434 1130827 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:08:39.746521 1130827 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:08:39.746567 1130827 kubeadm.go:156] found existing configuration files:
	
	I0328 01:08:39.746644 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:08:39.758252 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:08:39.758352 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:08:39.771929 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:08:39.785312 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:08:39.785400 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:08:39.800685 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:08:39.814982 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:08:39.815073 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:08:39.828804 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:08:39.841984 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:08:39.842074 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:08:39.854502 1130827 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:08:40.089742 1130827 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:08:40.742900 1131600 system_pods.go:59] 9 kube-system pods found
	I0328 01:08:40.742938 1131600 system_pods.go:61] "coredns-76f75df574-gdv5x" [5b4b835c-ae9d-4eff-ab37-6ccb7e36a748] Running
	I0328 01:08:40.742945 1131600 system_pods.go:61] "coredns-76f75df574-qzcfp" [8e7bfa94-f249-4f7a-be7b-9a615810c956] Running
	I0328 01:08:40.742951 1131600 system_pods.go:61] "etcd-default-k8s-diff-port-283961" [eb26a2e2-882b-4bc0-9ca1-7fe88b1c6c7e] Running
	I0328 01:08:40.742958 1131600 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-283961" [1c31e7a3-507e-43ea-96cf-1d182a1e0875] Running
	I0328 01:08:40.742964 1131600 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-283961" [5bbc6650-b37c-4b10-bc6e-cceb214f8881] Running
	I0328 01:08:40.742968 1131600 system_pods.go:61] "kube-proxy-js7j2" [1f31a6ee-9417-4f4a-ba0b-fdbab6a9169d] Running
	I0328 01:08:40.742972 1131600 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-283961" [3a5691f7-d6d4-45e7-aea7-94fe1cfa541a] Running
	I0328 01:08:40.742980 1131600 system_pods.go:61] "metrics-server-57f55c9bc5-gkv67" [7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:08:40.742986 1131600 system_pods.go:61] "storage-provisioner" [cb80efe2-521f-45d5-84e7-f6dc216b4c6d] Running
	I0328 01:08:40.742998 1131600 system_pods.go:74] duration metric: took 172.918886ms to wait for pod list to return data ...
	I0328 01:08:40.743010 1131600 default_sa.go:34] waiting for default service account to be created ...
	I0328 01:08:40.939208 1131600 default_sa.go:45] found service account: "default"
	I0328 01:08:40.939255 1131600 default_sa.go:55] duration metric: took 196.220048ms for default service account to be created ...
	I0328 01:08:40.939266 1131600 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 01:08:41.144986 1131600 system_pods.go:86] 9 kube-system pods found
	I0328 01:08:41.145023 1131600 system_pods.go:89] "coredns-76f75df574-gdv5x" [5b4b835c-ae9d-4eff-ab37-6ccb7e36a748] Running
	I0328 01:08:41.145030 1131600 system_pods.go:89] "coredns-76f75df574-qzcfp" [8e7bfa94-f249-4f7a-be7b-9a615810c956] Running
	I0328 01:08:41.145034 1131600 system_pods.go:89] "etcd-default-k8s-diff-port-283961" [eb26a2e2-882b-4bc0-9ca1-7fe88b1c6c7e] Running
	I0328 01:08:41.145039 1131600 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-283961" [1c31e7a3-507e-43ea-96cf-1d182a1e0875] Running
	I0328 01:08:41.145043 1131600 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-283961" [5bbc6650-b37c-4b10-bc6e-cceb214f8881] Running
	I0328 01:08:41.145047 1131600 system_pods.go:89] "kube-proxy-js7j2" [1f31a6ee-9417-4f4a-ba0b-fdbab6a9169d] Running
	I0328 01:08:41.145051 1131600 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-283961" [3a5691f7-d6d4-45e7-aea7-94fe1cfa541a] Running
	I0328 01:08:41.145058 1131600 system_pods.go:89] "metrics-server-57f55c9bc5-gkv67" [7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:08:41.145062 1131600 system_pods.go:89] "storage-provisioner" [cb80efe2-521f-45d5-84e7-f6dc216b4c6d] Running
	I0328 01:08:41.145072 1131600 system_pods.go:126] duration metric: took 205.800485ms to wait for k8s-apps to be running ...
	I0328 01:08:41.145083 1131600 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:08:41.145131 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:08:41.163220 1131600 system_svc.go:56] duration metric: took 18.120266ms WaitForService to wait for kubelet
	I0328 01:08:41.163255 1131600 kubeadm.go:576] duration metric: took 3.308947131s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:08:41.163280 1131600 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:08:41.339219 1131600 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:08:41.339247 1131600 node_conditions.go:123] node cpu capacity is 2
	I0328 01:08:41.339292 1131600 node_conditions.go:105] duration metric: took 176.004328ms to run NodePressure ...
	I0328 01:08:41.339306 1131600 start.go:240] waiting for startup goroutines ...
	I0328 01:08:41.339317 1131600 start.go:245] waiting for cluster config update ...
	I0328 01:08:41.339334 1131600 start.go:254] writing updated cluster config ...
	I0328 01:08:41.339656 1131600 ssh_runner.go:195] Run: rm -f paused
	I0328 01:08:41.399111 1131600 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0328 01:08:41.401360 1131600 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-283961" cluster and "default" namespace by default
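With two profiles reporting Done in this window, kubectl's current context is whichever finished last. Switching between them is just:

    kubectl config get-contexts
    kubectl config use-context embed-certs-808809      # or default-k8s-diff-port-283961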
	I0328 01:08:49.653091 1130827 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-beta.0
	I0328 01:08:49.653205 1130827 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:08:49.653327 1130827 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:08:49.653468 1130827 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:08:49.653576 1130827 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:08:49.653666 1130827 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:08:49.656419 1130827 out.go:204]   - Generating certificates and keys ...
	I0328 01:08:49.656503 1130827 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:08:49.656583 1130827 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:08:49.656669 1130827 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:08:49.656775 1130827 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:08:49.656903 1130827 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:08:49.656973 1130827 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:08:49.657057 1130827 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:08:49.657138 1130827 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:08:49.657246 1130827 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:08:49.657362 1130827 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:08:49.657415 1130827 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:08:49.657510 1130827 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:08:49.657601 1130827 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:08:49.657713 1130827 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:08:49.657811 1130827 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:08:49.657900 1130827 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:08:49.657980 1130827 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:08:49.658074 1130827 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:08:49.658160 1130827 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:08:49.659588 1130827 out.go:204]   - Booting up control plane ...
	I0328 01:08:49.659669 1130827 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:08:49.659771 1130827 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:08:49.659855 1130827 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:08:49.659962 1130827 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:08:49.660075 1130827 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:08:49.660139 1130827 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:08:49.660309 1130827 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0328 01:08:49.660426 1130827 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0328 01:08:49.660518 1130827 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.594495ms
	I0328 01:08:49.660610 1130827 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0328 01:08:49.660691 1130827 kubeadm.go:309] [api-check] The API server is healthy after 5.502996727s
	I0328 01:08:49.660830 1130827 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 01:08:49.660975 1130827 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 01:08:49.661028 1130827 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 01:08:49.661198 1130827 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-248059 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 01:08:49.661283 1130827 kubeadm.go:309] [bootstrap-token] Using token: 4jnfa0.q3dre6ogqbxtw8j0
	I0328 01:08:49.662907 1130827 out.go:204]   - Configuring RBAC rules ...
	I0328 01:08:49.663014 1130827 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 01:08:49.663090 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 01:08:49.663239 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 01:08:49.663379 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 01:08:49.663484 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 01:08:49.663576 1130827 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 01:08:49.663688 1130827 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 01:08:49.663750 1130827 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 01:08:49.663811 1130827 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 01:08:49.663820 1130827 kubeadm.go:309] 
	I0328 01:08:49.663871 1130827 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 01:08:49.663877 1130827 kubeadm.go:309] 
	I0328 01:08:49.663976 1130827 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 01:08:49.663984 1130827 kubeadm.go:309] 
	I0328 01:08:49.664004 1130827 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 01:08:49.664080 1130827 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 01:08:49.664144 1130827 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 01:08:49.664151 1130827 kubeadm.go:309] 
	I0328 01:08:49.664202 1130827 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 01:08:49.664209 1130827 kubeadm.go:309] 
	I0328 01:08:49.664246 1130827 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 01:08:49.664252 1130827 kubeadm.go:309] 
	I0328 01:08:49.664301 1130827 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 01:08:49.664370 1130827 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 01:08:49.664436 1130827 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 01:08:49.664444 1130827 kubeadm.go:309] 
	I0328 01:08:49.664515 1130827 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 01:08:49.664600 1130827 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 01:08:49.664607 1130827 kubeadm.go:309] 
	I0328 01:08:49.664678 1130827 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4jnfa0.q3dre6ogqbxtw8j0 \
	I0328 01:08:49.664764 1130827 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 \
	I0328 01:08:49.664783 1130827 kubeadm.go:309] 	--control-plane 
	I0328 01:08:49.664789 1130827 kubeadm.go:309] 
	I0328 01:08:49.664856 1130827 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 01:08:49.664863 1130827 kubeadm.go:309] 
	I0328 01:08:49.664938 1130827 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4jnfa0.q3dre6ogqbxtw8j0 \
	I0328 01:08:49.665073 1130827 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 
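	The join commands above carry a --discovery-token-ca-cert-hash of the form sha256:<hex>. kubeadm derives that value by hashing the DER-encoded public key (SubjectPublicKeyInfo) of the cluster CA certificate. Below is a sketch of recomputing it; the CA path reuses the certificateDir named earlier in this log, and the rest is illustrative rather than minikube code.

	// Sketch: recomputing a kubeadm-style discovery-token-ca-cert-hash from the
	// cluster CA certificate under the certificateDir shown in this log.
	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA public key.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		sum := sha256.Sum256(spki)
		fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
	}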
	I0328 01:08:49.665117 1130827 cni.go:84] Creating CNI manager for ""
	I0328 01:08:49.665130 1130827 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:08:49.667556 1130827 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:08:49.668776 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:08:49.680262 1130827 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
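	The two lines above show /etc/cni/net.d being created and a 457-byte 1-k8s.conflist being copied in for the bridge CNI. The sketch below writes a generic bridge+portmap conflist to the same path; the JSON content is illustrative only and not the exact file minikube ships.

	// Sketch: writing a minimal bridge CNI configuration to /etc/cni/net.d.
	// The JSON is a generic bridge+portmap chain, not the exact 457-byte
	// conflist referenced in the log.
	package main

	import "os"

	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`

	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}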
	I0328 01:08:49.701490 1130827 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 01:08:49.701557 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:49.701606 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-248059 minikube.k8s.io/updated_at=2024_03_28T01_08_49_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=no-preload-248059 minikube.k8s.io/primary=true
	I0328 01:08:49.734009 1130827 ops.go:34] apiserver oom_adj: -16
	I0328 01:08:49.901866 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:50.402635 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:50.902480 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:51.402417 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:51.902253 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:52.402411 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:52.901926 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:53.402394 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:53.902738 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:54.401878 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:54.901920 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:55.401878 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:55.902140 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:56.402863 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:56.901970 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:57.402088 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:57.901869 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:58.402056 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:58.902333 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:59.402753 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:59.902930 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:00.402623 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:00.901863 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:01.402264 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:01.902054 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:02.402212 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:02.503310 1130827 kubeadm.go:1107] duration metric: took 12.80181586s to wait for elevateKubeSystemPrivileges
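	The long run of identical "kubectl get sa default" invocations above is a simple poll: the same command is retried roughly every 500ms until the default service account exists, which here took about 12.8s. A sketch of that polling pattern follows; the command and flags are copied from the log, while the loop itself is illustrative, not minikube code.

	// Sketch of the polling pattern above: re-run "kubectl get sa default"
	// about every 500ms until it succeeds or a deadline passes.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.30.0-beta.0/kubectl"
		deadline := time.Now().Add(2 * time.Minute)

		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig")
			if err := cmd.Run(); err == nil {
				fmt.Println("default service account exists")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for default service account")
	}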
	W0328 01:09:02.503352 1130827 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 01:09:02.503362 1130827 kubeadm.go:393] duration metric: took 5m10.46697508s to StartCluster
	I0328 01:09:02.503380 1130827 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:09:02.503482 1130827 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:09:02.505909 1130827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:09:02.506302 1130827 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 01:09:02.508103 1130827 out.go:177] * Verifying Kubernetes components...
	I0328 01:09:02.506385 1130827 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 01:09:02.506502 1130827 config.go:182] Loaded profile config "no-preload-248059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0328 01:09:02.509509 1130827 addons.go:69] Setting default-storageclass=true in profile "no-preload-248059"
	I0328 01:09:02.509519 1130827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:09:02.509517 1130827 addons.go:69] Setting metrics-server=true in profile "no-preload-248059"
	I0328 01:09:02.509542 1130827 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-248059"
	I0328 01:09:02.509559 1130827 addons.go:234] Setting addon metrics-server=true in "no-preload-248059"
	W0328 01:09:02.509580 1130827 addons.go:243] addon metrics-server should already be in state true
	I0328 01:09:02.509509 1130827 addons.go:69] Setting storage-provisioner=true in profile "no-preload-248059"
	I0328 01:09:02.509623 1130827 host.go:66] Checking if "no-preload-248059" exists ...
	I0328 01:09:02.509636 1130827 addons.go:234] Setting addon storage-provisioner=true in "no-preload-248059"
	W0328 01:09:02.509690 1130827 addons.go:243] addon storage-provisioner should already be in state true
	I0328 01:09:02.509729 1130827 host.go:66] Checking if "no-preload-248059" exists ...
	I0328 01:09:02.510005 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.510009 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.510049 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.510050 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.510053 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.510085 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.528082 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42381
	I0328 01:09:02.528124 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43929
	I0328 01:09:02.528714 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.528738 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.529081 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34573
	I0328 01:09:02.529378 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.529397 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.529444 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.529464 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.529465 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.529791 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.529849 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.529948 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.529965 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.529950 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.530389 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.530437 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.530472 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.531004 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.531058 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.534108 1130827 addons.go:234] Setting addon default-storageclass=true in "no-preload-248059"
	W0328 01:09:02.534134 1130827 addons.go:243] addon default-storageclass should already be in state true
	I0328 01:09:02.534173 1130827 host.go:66] Checking if "no-preload-248059" exists ...
	I0328 01:09:02.534563 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.534592 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.546812 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0328 01:09:02.547478 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.547999 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.548031 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.548370 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.548616 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.549185 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37367
	I0328 01:09:02.549663 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.550365 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.550390 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.550772 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.550787 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:09:02.550977 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.553075 1130827 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 01:09:02.554750 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 01:09:02.554769 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 01:09:02.552577 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:09:02.554788 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:09:02.553550 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45153
	I0328 01:09:02.556534 1130827 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:09:02.555339 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.558480 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.563734 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:09:02.563773 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.563823 1130827 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:09:02.563846 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 01:09:02.563876 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:09:02.564584 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.564604 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.564633 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:09:02.564933 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:09:02.565025 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.565458 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:09:02.565593 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.565617 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.565745 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:09:02.569766 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.570083 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:09:02.570104 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.570413 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:09:02.570778 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:09:02.570975 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:09:02.571142 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:09:02.589503 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41085
	I0328 01:09:02.590061 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.590641 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.590661 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.591065 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.591310 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.593270 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:09:02.593665 1130827 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 01:09:02.593696 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 01:09:02.593717 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:09:02.596796 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.597270 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:09:02.597298 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.597460 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:09:02.597637 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:09:02.597807 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:09:02.597937 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:09:02.705837 1130827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:09:02.727955 1130827 node_ready.go:35] waiting up to 6m0s for node "no-preload-248059" to be "Ready" ...
	I0328 01:09:02.737291 1130827 node_ready.go:49] node "no-preload-248059" has status "Ready":"True"
	I0328 01:09:02.737325 1130827 node_ready.go:38] duration metric: took 9.337953ms for node "no-preload-248059" to be "Ready" ...
	I0328 01:09:02.737338 1130827 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:09:02.741939 1130827 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.749157 1130827 pod_ready.go:92] pod "etcd-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.749192 1130827 pod_ready.go:81] duration metric: took 7.224004ms for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.749205 1130827 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.755106 1130827 pod_ready.go:92] pod "kube-apiserver-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.755132 1130827 pod_ready.go:81] duration metric: took 5.919446ms for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.755144 1130827 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.761123 1130827 pod_ready.go:92] pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.761171 1130827 pod_ready.go:81] duration metric: took 6.017877ms for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.761187 1130827 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.773958 1130827 pod_ready.go:92] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.773983 1130827 pod_ready.go:81] duration metric: took 12.787671ms for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.773991 1130827 pod_ready.go:38] duration metric: took 36.637128ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:09:02.774008 1130827 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:09:02.774068 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:09:02.794342 1130827 api_server.go:72] duration metric: took 287.989042ms to wait for apiserver process to appear ...
	I0328 01:09:02.794376 1130827 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:09:02.794408 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:09:02.826957 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0328 01:09:02.830377 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 01:09:02.830399 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 01:09:02.837250 1130827 api_server.go:141] control plane version: v1.30.0-beta.0
	I0328 01:09:02.837284 1130827 api_server.go:131] duration metric: took 42.898933ms to wait for apiserver health ...
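	The healthz lines above show the apiserver being probed at https://192.168.61.107:8443/healthz until it answers 200/ok. A minimal probe of the same shape is sketched below; TLS verification is skipped only to keep the example self-contained, whereas a real client would verify against the cluster CA from the kubeconfig.

	// Minimal sketch of an apiserver /healthz probe like the one logged above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Skipping verification is a shortcut for this sketch only.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.61.107:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}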
	I0328 01:09:02.837295 1130827 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:09:02.838515 1130827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:09:02.865482 1130827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 01:09:02.880510 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 01:09:02.880544 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 01:09:02.933895 1130827 system_pods.go:59] 4 kube-system pods found
	I0328 01:09:02.933958 1130827 system_pods.go:61] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:02.933967 1130827 system_pods.go:61] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:02.933973 1130827 system_pods.go:61] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:02.933977 1130827 system_pods.go:61] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:02.933984 1130827 system_pods.go:74] duration metric: took 96.68223ms to wait for pod list to return data ...
	I0328 01:09:02.933994 1130827 default_sa.go:34] waiting for default service account to be created ...
	I0328 01:09:02.939507 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:09:02.939538 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 01:09:02.994042 1130827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:09:03.160934 1130827 default_sa.go:45] found service account: "default"
	I0328 01:09:03.160971 1130827 default_sa.go:55] duration metric: took 226.968222ms for default service account to be created ...
	I0328 01:09:03.160982 1130827 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 01:09:03.396511 1130827 system_pods.go:86] 7 kube-system pods found
	I0328 01:09:03.396549 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending
	I0328 01:09:03.396554 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending
	I0328 01:09:03.396558 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:03.396562 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:03.396567 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:03.396575 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:09:03.396580 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:03.396601 1130827 retry.go:31] will retry after 288.008379ms: missing components: kube-dns, kube-proxy
	I0328 01:09:03.697645 1130827 system_pods.go:86] 7 kube-system pods found
	I0328 01:09:03.697688 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:03.697697 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:03.697704 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:03.697710 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:03.697720 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:03.697726 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:09:03.697730 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:03.697750 1130827 retry.go:31] will retry after 356.016468ms: missing components: kube-dns, kube-proxy
	I0328 01:09:03.962535 1130827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.097008499s)
	I0328 01:09:03.962614 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.962633 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.963093 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.963119 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.963129 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.963139 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.963406 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.963424 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.964335 1130827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.125788348s)
	I0328 01:09:03.964375 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.964384 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.964712 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:03.964740 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.964763 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.964776 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.964785 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.965054 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.965125 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.965142 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:04.002303 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:04.002340 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:04.002744 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:04.002766 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:04.062017 1130827 system_pods.go:86] 8 kube-system pods found
	I0328 01:09:04.062096 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.062111 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.062121 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:04.062132 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:04.062158 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:04.062172 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:09:04.062180 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:04.062192 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:09:04.062220 1130827 retry.go:31] will retry after 477.684804ms: missing components: kube-dns, kube-proxy
	I0328 01:09:04.574661 1130827 system_pods.go:86] 9 kube-system pods found
	I0328 01:09:04.574716 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.574728 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.574740 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:04.574748 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:04.574754 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:04.574761 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Running
	I0328 01:09:04.574768 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:04.574778 1130827 system_pods.go:89] "metrics-server-569cc877fc-frc5k" [d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:09:04.574799 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:09:04.574821 1130827 retry.go:31] will retry after 460.13955ms: missing components: kube-dns
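	The retry.go lines above ("will retry after ...: missing components: ...") re-fetch the kube-system pod list after a short, growing delay until all required components are Running. The sketch below shows that retry-with-growing-delay shape; the condition being checked is a stand-in, not minikube's pod inspection.

	// Sketch of the retry pattern in the "will retry after ..." lines above:
	// re-check a condition after a growing delay until it passes or times out.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryUntil re-runs check with a delay that grows ~1.5x per attempt.
	func retryUntil(timeout time.Duration, check func() error) error {
		deadline := time.Now().Add(timeout)
		delay := 300 * time.Millisecond
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().Add(delay).After(deadline) {
				return fmt.Errorf("timed out: %w", err)
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			delay = delay * 3 / 2
		}
	}

	func main() {
		attempts := 0
		err := retryUntil(10*time.Second, func() error {
			attempts++
			if attempts < 4 {
				return errors.New("missing components: kube-dns, kube-proxy")
			}
			return nil
		})
		fmt.Println("result:", err)
	}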
	I0328 01:09:04.692708 1130827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.69861394s)
	I0328 01:09:04.692782 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:04.692798 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:04.693323 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:04.693366 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:04.693376 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:04.693384 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:04.693320 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:04.693818 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:04.693865 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:04.693879 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:04.693895 1130827 addons.go:470] Verifying addon metrics-server=true in "no-preload-248059"
	I0328 01:09:04.696310 1130827 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
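	After the "Enabled addons" line above, one way to confirm metrics-server is actually serving is to check the availability of its APIService registration. The sketch below shells out to the same kubectl and kubeconfig paths used throughout this log; the APIService name is the one metrics-server conventionally registers, and the check as a whole is an illustration, not part of the test.

	// Sketch: follow-up check that the metrics-server addon enabled above is
	// registered and Available via its v1beta1.metrics.k8s.io APIService.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.30.0-beta.0/kubectl",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"get", "apiservice", "v1beta1.metrics.k8s.io",
			"-o", "jsonpath={.status.conditions[?(@.type==\"Available\")].status}").CombinedOutput()
		if err != nil {
			fmt.Printf("check failed: %v\n%s\n", err, out)
			return
		}
		fmt.Printf("v1beta1.metrics.k8s.io Available=%s\n", out)
	}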
	I0328 01:09:04.025791 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:09:04.026055 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:09:04.026065 1131323 kubeadm.go:309] 
	I0328 01:09:04.026124 1131323 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0328 01:09:04.026172 1131323 kubeadm.go:309] 		timed out waiting for the condition
	I0328 01:09:04.026181 1131323 kubeadm.go:309] 
	I0328 01:09:04.026221 1131323 kubeadm.go:309] 	This error is likely caused by:
	I0328 01:09:04.026279 1131323 kubeadm.go:309] 		- The kubelet is not running
	I0328 01:09:04.026401 1131323 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0328 01:09:04.026411 1131323 kubeadm.go:309] 
	I0328 01:09:04.026529 1131323 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0328 01:09:04.026586 1131323 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0328 01:09:04.026632 1131323 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0328 01:09:04.026640 1131323 kubeadm.go:309] 
	I0328 01:09:04.026758 1131323 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0328 01:09:04.026884 1131323 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0328 01:09:04.026902 1131323 kubeadm.go:309] 
	I0328 01:09:04.027061 1131323 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0328 01:09:04.027222 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0328 01:09:04.027335 1131323 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0328 01:09:04.027429 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0328 01:09:04.027537 1131323 kubeadm.go:309] 
	I0328 01:09:04.029027 1131323 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:09:04.029164 1131323 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0328 01:09:04.029284 1131323 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0328 01:09:04.029477 1131323 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
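	The failed init above repeats the same kubelet-check probe: a GET against http://localhost:10248/healthz that keeps returning "connection refused" because the kubelet never came up. A tiny sketch of that probe follows, useful on the node alongside the 'systemctl status kubelet' and 'journalctl -xeu kubelet' steps the output suggests.

	// Sketch of the kubelet health probe that keeps failing in the output above:
	// GET http://localhost:10248/healthz on the node. "connection refused"
	// means the kubelet is not listening at all.
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 2 * time.Second}
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			fmt.Println("kubelet healthz failed:", err) // e.g. connect: connection refused
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("kubelet healthz returned %d: %s\n", resp.StatusCode, body)
	}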
	
	I0328 01:09:04.029545 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:09:04.543275 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:09:04.562572 1131323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:09:04.577013 1131323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:09:04.577040 1131323 kubeadm.go:156] found existing configuration files:
	
	I0328 01:09:04.577102 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:09:04.590795 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:09:04.590885 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:09:04.604227 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:09:04.616720 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:09:04.616818 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:09:04.630095 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:09:04.643166 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:09:04.643259 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:09:04.658084 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:09:04.671786 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:09:04.671874 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:09:04.685852 1131323 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:09:04.779013 1131323 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0328 01:09:04.779113 1131323 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:09:04.964178 1131323 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:09:04.964317 1131323 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:09:04.964463 1131323 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:09:05.181712 1131323 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:09:05.183644 1131323 out.go:204]   - Generating certificates and keys ...
	I0328 01:09:05.183759 1131323 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:09:05.183851 1131323 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:09:05.183962 1131323 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:09:05.184042 1131323 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:09:05.184156 1131323 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:09:05.184244 1131323 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:09:05.184337 1131323 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:09:05.184424 1131323 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:09:05.184535 1131323 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:09:05.184633 1131323 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:09:05.184683 1131323 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:09:05.184758 1131323 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:09:04.698039 1130827 addons.go:505] duration metric: took 2.191652421s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0328 01:09:05.044303 1130827 system_pods.go:86] 9 kube-system pods found
	I0328 01:09:05.044340 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:05.044348 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:05.044354 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:05.044360 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:05.044366 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:05.044369 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Running
	I0328 01:09:05.044373 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:05.044378 1130827 system_pods.go:89] "metrics-server-569cc877fc-frc5k" [d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:09:05.044387 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:09:05.044406 1130827 retry.go:31] will retry after 486.01075ms: missing components: kube-dns
	I0328 01:09:05.539158 1130827 system_pods.go:86] 9 kube-system pods found
	I0328 01:09:05.539204 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Running
	I0328 01:09:05.539213 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Running
	I0328 01:09:05.539219 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:05.539226 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:05.539232 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:05.539238 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Running
	I0328 01:09:05.539244 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:05.539255 1130827 system_pods.go:89] "metrics-server-569cc877fc-frc5k" [d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:09:05.539260 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Running
	I0328 01:09:05.539274 1130827 system_pods.go:126] duration metric: took 2.37828469s to wait for k8s-apps to be running ...
	I0328 01:09:05.539292 1130827 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:09:05.539362 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:09:05.560593 1130827 system_svc.go:56] duration metric: took 21.288819ms WaitForService to wait for kubelet
	I0328 01:09:05.560628 1130827 kubeadm.go:576] duration metric: took 3.054281955s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:09:05.560657 1130827 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:09:05.564453 1130827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:09:05.564489 1130827 node_conditions.go:123] node cpu capacity is 2
	I0328 01:09:05.564502 1130827 node_conditions.go:105] duration metric: took 3.837449ms to run NodePressure ...
	I0328 01:09:05.564517 1130827 start.go:240] waiting for startup goroutines ...
	I0328 01:09:05.564527 1130827 start.go:245] waiting for cluster config update ...
	I0328 01:09:05.564542 1130827 start.go:254] writing updated cluster config ...
	I0328 01:09:05.564843 1130827 ssh_runner.go:195] Run: rm -f paused
	I0328 01:09:05.623218 1130827 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-beta.0 (minor skew: 1)
	I0328 01:09:05.625408 1130827 out.go:177] * Done! kubectl is now configured to use "no-preload-248059" cluster and "default" namespace by default
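	The startup above only declares the profile ready once every kube-system pod reports Running and the node passes the NodePressure check. As a rough manual cross-check of that same state (a sketch only; it assumes the kubeconfig context "no-preload-248059" written by the run above), the following commands mirror what the log polls:
	
	# list kube-system pods and confirm none remain Pending (manual check, not part of the test run)
	kubectl --context no-preload-248059 -n kube-system get pods -o wide
	# show node conditions and capacity, as in the NodePressure verification above
	kubectl --context no-preload-248059 describe nodes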
	I0328 01:09:05.587190 1131323 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:09:05.923219 1131323 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:09:06.087945 1131323 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:09:06.245638 1131323 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:09:06.266195 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:09:06.267461 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:09:06.267551 1131323 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:09:06.434155 1131323 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:09:06.436300 1131323 out.go:204]   - Booting up control plane ...
	I0328 01:09:06.436447 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:09:06.446573 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:09:06.447461 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:09:06.448313 1131323 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:09:06.450917 1131323 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:09:46.453199 1131323 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0328 01:09:46.453386 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:09:46.453643 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:09:51.454402 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:09:51.454665 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:10:01.455189 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:10:01.455417 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:10:21.456491 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:10:21.456726 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:11:01.456972 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:11:01.457256 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:11:01.457269 1131323 kubeadm.go:309] 
	I0328 01:11:01.457310 1131323 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0328 01:11:01.457404 1131323 kubeadm.go:309] 		timed out waiting for the condition
	I0328 01:11:01.457441 1131323 kubeadm.go:309] 
	I0328 01:11:01.457492 1131323 kubeadm.go:309] 	This error is likely caused by:
	I0328 01:11:01.457550 1131323 kubeadm.go:309] 		- The kubelet is not running
	I0328 01:11:01.457696 1131323 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0328 01:11:01.457708 1131323 kubeadm.go:309] 
	I0328 01:11:01.457856 1131323 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0328 01:11:01.457906 1131323 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0328 01:11:01.457935 1131323 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0328 01:11:01.457943 1131323 kubeadm.go:309] 
	I0328 01:11:01.458033 1131323 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0328 01:11:01.458139 1131323 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0328 01:11:01.458155 1131323 kubeadm.go:309] 
	I0328 01:11:01.458331 1131323 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0328 01:11:01.458483 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0328 01:11:01.458594 1131323 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0328 01:11:01.458707 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0328 01:11:01.458718 1131323 kubeadm.go:309] 
	I0328 01:11:01.459597 1131323 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:11:01.459737 1131323 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0328 01:11:01.459822 1131323 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0328 01:11:01.459962 1131323 kubeadm.go:393] duration metric: took 7m59.227261729s to StartCluster
	I0328 01:11:01.460023 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:11:01.460167 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:11:01.522644 1131323 cri.go:89] found id: ""
	I0328 01:11:01.522687 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.522700 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:11:01.522710 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:11:01.522782 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:11:01.567898 1131323 cri.go:89] found id: ""
	I0328 01:11:01.567928 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.567937 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:11:01.567945 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:11:01.568005 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:11:01.604782 1131323 cri.go:89] found id: ""
	I0328 01:11:01.604810 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.604819 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:11:01.604825 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:11:01.604935 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:11:01.642875 1131323 cri.go:89] found id: ""
	I0328 01:11:01.642908 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.642920 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:11:01.642929 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:11:01.642993 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:11:01.682186 1131323 cri.go:89] found id: ""
	I0328 01:11:01.682216 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.682223 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:11:01.682241 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:11:01.682312 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:11:01.720654 1131323 cri.go:89] found id: ""
	I0328 01:11:01.720689 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.720697 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:11:01.720704 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:11:01.720759 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:11:01.757340 1131323 cri.go:89] found id: ""
	I0328 01:11:01.757372 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.757383 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:11:01.757392 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:11:01.757462 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:11:01.797426 1131323 cri.go:89] found id: ""
	I0328 01:11:01.797462 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.797473 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:11:01.797488 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:11:01.797506 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:11:01.859582 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:11:01.859623 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:11:01.876027 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:11:01.876073 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:11:01.966513 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:11:01.966539 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:11:01.966557 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:11:02.084853 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:11:02.084894 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0328 01:11:02.127221 1131323 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0328 01:11:02.127288 1131323 out.go:239] * 
	W0328 01:11:02.127417 1131323 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0328 01:11:02.127456 1131323 out.go:239] * 
	W0328 01:11:02.128313 1131323 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 01:11:02.131916 1131323 out.go:177] 
	W0328 01:11:02.133288 1131323 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0328 01:11:02.133351 1131323 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0328 01:11:02.133381 1131323 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0328 01:11:02.134991 1131323 out.go:177] 
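	The failure above is the kubelet never answering its health check on port 10248, so kubeadm times out waiting for the control plane. The troubleshooting steps that minikube and kubeadm themselves print (systemctl/journalctl, the crictl listing, and the cgroup-driver hint) can be exercised from the host roughly as sketched below; the profile name is a placeholder, since this run's profile is not shown in this excerpt, and the commands only restate what the output already recommends:
	
	# inspect kubelet state on the node (commands suggested in the kubeadm output above)
	out/minikube-linux-amd64 -p <profile> ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p <profile> ssh "sudo journalctl -xeu kubelet | tail -n 100"
	# list any control-plane containers CRI-O managed to start
	out/minikube-linux-amd64 -p <profile> ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"
	# retry with the workaround minikube itself suggests for this failure mode
	out/minikube-linux-amd64 start -p <profile> --extra-config=kubelet.cgroup-driver=systemd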
	
	
	==> CRI-O <==
	Mar 28 01:16:59 embed-certs-808809 crio[700]: time="2024-03-28 01:16:59.667730369Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d13be47cdfe2d55cfc468405cc79fffada20ae6e4ac957e9096e5f1a7cb8ed43,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-pgcdh,Uid:52452b24-490e-4999-b700-198c6f9b2fa1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711588075021193455,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-pgcdh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52452b24-490e-4999-b700-198c6f9b2fa1,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-28T01:07:52.907041512Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fad0a5eeca45eab29bdac781c0dc7c18488da0d3f13f62e2f5cd1835585a98b0,Metadata:&PodSandboxMetadata{Name:kube-proxy-tjbhs,Uid:cdb30ca1-5165-4e24-888a-df79af7987d0,Namespace:kube-system,At
tempt:0,},State:SANDBOX_READY,CreatedAt:1711588074826056782,Labels:map[string]string{controller-revision-hash: 7659797656,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-tjbhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb30ca1-5165-4e24-888a-df79af7987d0,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-28T01:07:52.716264325Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d8ec51bb0fc8c97f17b2d0ad68ec94a6b2408e9dd2fda55f2c1987a82b0ce31d,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-2rn6k,Uid:2a77c778-dd83-4e2e-b45a-ca16e3922b45,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711588074639563931,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-2rn6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a77c778-dd83-4e2e-b45a-ca16e3922b45,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:m
ap[string]string{kubernetes.io/config.seen: 2024-03-28T01:07:52.832558188Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e8a102066890c532311038e6c72e9556f225349f7841774a6b878e24bc779ca9,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-bqbfl,Uid:8434fd7d-838b-4cf2-96a3-e4d613633871,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711588074352797283,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-bqbfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8434fd7d-838b-4cf2-96a3-e4d613633871,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-28T01:07:54.035391793Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:44b0cdf3876f13c6e331f982b09d269acd6f6da9d1d02e4a173f8848430acff9,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:20c1951e-7da8-4025-bbcf-2da60f87f3ab,Namespace:kube-system,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1711588074346768287,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20c1951e-7da8-4025-bbcf-2da60f87f3ab,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/t
mp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-28T01:07:54.036342678Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8cb6101361b286ade5c77b27888d852fef11b2154f3493ef7c329a8a2497f761,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-808809,Uid:415bbf6af6af03844395934967f1d53e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711588053967485273,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415bbf6af6af03844395934967f1d53e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 415bbf6af6af03844395934967f1d53e,kubernetes.io/config.seen: 2024-03-28T01:07:33.511636569Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fa06dc12c22a31cdfde9b37c5f79193892430055c56ff9b94ada4e5d0a5060bb,Metadata:&PodSandboxMetadata{Name:kube-controlle
r-manager-embed-certs-808809,Uid:b27a7f528d676bb567a98dd9c93ba802,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711588053957310147,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27a7f528d676bb567a98dd9c93ba802,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b27a7f528d676bb567a98dd9c93ba802,kubernetes.io/config.seen: 2024-03-28T01:07:33.511631484Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dae6f2944cfe146fe51c15d22ea9ac3564c7566a59e12e80fb7f005ec86f6908,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-808809,Uid:82aa56ffa6fd4273e5fcfbb8ee4837e3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711588053952795223,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver
-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82aa56ffa6fd4273e5fcfbb8ee4837e3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.210:8443,kubernetes.io/config.hash: 82aa56ffa6fd4273e5fcfbb8ee4837e3,kubernetes.io/config.seen: 2024-03-28T01:07:33.511641011Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ee08dd080a76fdba4eeaa1378d59c389b012222d8d48ce7cf0a50226bbbc375e,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-808809,Uid:23726df11311a725c5c2cea5aa7bbf82,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711588053948496234,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23726df11311a725c5c2cea5aa7bbf82,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.7
2.210:2379,kubernetes.io/config.hash: 23726df11311a725c5c2cea5aa7bbf82,kubernetes.io/config.seen: 2024-03-28T01:07:33.511638119Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=2f8e223b-d8a0-4d35-bcb7-fa547a0d55c3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 28 01:16:59 embed-certs-808809 crio[700]: time="2024-03-28 01:16:59.669034270Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94fd72e4-b145-4d8c-89f1-f7fd56babd8a name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:16:59 embed-certs-808809 crio[700]: time="2024-03-28 01:16:59.669094712Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94fd72e4-b145-4d8c-89f1-f7fd56babd8a name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:16:59 embed-certs-808809 crio[700]: time="2024-03-28 01:16:59.669282428Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f76a2cc1195c1d8095c2eaf14403049b3091569f9fce2b5c62a421df809f99d6,PodSandboxId:d13be47cdfe2d55cfc468405cc79fffada20ae6e4ac957e9096e5f1a7cb8ed43,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588075222203315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-pgcdh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52452b24-490e-4999-b700-198c6f9b2fa1,},Annotations:map[string]string{io.kubernetes.container.hash: b98b61cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0fd0f91f9b4fa1ab25e032a39edcd5bc44b739fe492a9a2637fe447630c6048,PodSandboxId:fad0a5eeca45eab29bdac781c0dc7c18488da0d3f13f62e2f5cd1835585a98b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711588074985489557,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tjbhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: cdb30ca1-5165-4e24-888a-df79af7987d0,},Annotations:map[string]string{io.kubernetes.container.hash: b2fb58c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77d15563a8b85fedf40c66723f296b06f62de317f6266c0b3d4af970cfb7e7fd,PodSandboxId:d8ec51bb0fc8c97f17b2d0ad68ec94a6b2408e9dd2fda55f2c1987a82b0ce31d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588074840305163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2rn6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a77c778-dd83-4e2e-b45
a-ca16e3922b45,},Annotations:map[string]string{io.kubernetes.container.hash: 9855ca01,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e8068bfab6214b30b06e6ea1061ef0a1fdb69e653672c29bb9d353b14e1fc56,PodSandboxId:44b0cdf3876f13c6e331f982b09d269acd6f6da9d1d02e4a173f8848430acff9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17115880744
96417762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20c1951e-7da8-4025-bbcf-2da60f87f3ab,},Annotations:map[string]string{io.kubernetes.container.hash: f91dd921,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05add19d22fda2570b52ec7b279575972ce3d4e3ed1b15e14a842c35667338fe,PodSandboxId:8cb6101361b286ade5c77b27888d852fef11b2154f3493ef7c329a8a2497f761,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711588054214339857,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415bbf6af6af03844395934967f1d53e,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f806518515daaea31de47549c3df138890f62d699f2eb1ad54abc661c5c095,PodSandboxId:fa06dc12c22a31cdfde9b37c5f79193892430055c56ff9b94ada4e5d0a5060bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711588054250077399,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27a7f528d676bb567a98dd9c93ba802,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afb39c84cbe45f2c1cd49e7ca7da5be67d0236bc61f3b4a4a8f3209867d57b0a,PodSandboxId:ee08dd080a76fdba4eeaa1378d59c389b012222d8d48ce7cf0a50226bbbc375e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711588054211596130,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23726df11311a725c5c2cea5aa7bbf82,},Annotations:map[string]string{io.kubernetes.container.hash: f47fb476,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1473b87f5ec0a698f975e02b643017deca11b4bf6f0d78f062b12764558bc159,PodSandboxId:dae6f2944cfe146fe51c15d22ea9ac3564c7566a59e12e80fb7f005ec86f6908,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711588054187453180,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82aa56ffa6fd4273e5fcfbb8ee4837e3,},Annotations:map[string]string{io.kubernetes.container.hash: ebc61d60,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=94fd72e4-b145-4d8c-89f1-f7fd56babd8a name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:16:59 embed-certs-808809 crio[700]: time="2024-03-28 01:16:59.684084312Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7a3729d8-6b47-4531-8880-28a11a94443b name=/runtime.v1.RuntimeService/Version
	Mar 28 01:16:59 embed-certs-808809 crio[700]: time="2024-03-28 01:16:59.684175678Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7a3729d8-6b47-4531-8880-28a11a94443b name=/runtime.v1.RuntimeService/Version
	Mar 28 01:16:59 embed-certs-808809 crio[700]: time="2024-03-28 01:16:59.685915154Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7ed14190-751c-4608-93ea-61961258d510 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:16:59 embed-certs-808809 crio[700]: time="2024-03-28 01:16:59.686391758Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711588619686361434,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7ed14190-751c-4608-93ea-61961258d510 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:16:59 embed-certs-808809 crio[700]: time="2024-03-28 01:16:59.687121069Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bcef655d-0ee9-4b3b-865e-fdcf1338665e name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:16:59 embed-certs-808809 crio[700]: time="2024-03-28 01:16:59.687188730Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bcef655d-0ee9-4b3b-865e-fdcf1338665e name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:16:59 embed-certs-808809 crio[700]: time="2024-03-28 01:16:59.687365913Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f76a2cc1195c1d8095c2eaf14403049b3091569f9fce2b5c62a421df809f99d6,PodSandboxId:d13be47cdfe2d55cfc468405cc79fffada20ae6e4ac957e9096e5f1a7cb8ed43,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588075222203315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-pgcdh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52452b24-490e-4999-b700-198c6f9b2fa1,},Annotations:map[string]string{io.kubernetes.container.hash: b98b61cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0fd0f91f9b4fa1ab25e032a39edcd5bc44b739fe492a9a2637fe447630c6048,PodSandboxId:fad0a5eeca45eab29bdac781c0dc7c18488da0d3f13f62e2f5cd1835585a98b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711588074985489557,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tjbhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: cdb30ca1-5165-4e24-888a-df79af7987d0,},Annotations:map[string]string{io.kubernetes.container.hash: b2fb58c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77d15563a8b85fedf40c66723f296b06f62de317f6266c0b3d4af970cfb7e7fd,PodSandboxId:d8ec51bb0fc8c97f17b2d0ad68ec94a6b2408e9dd2fda55f2c1987a82b0ce31d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588074840305163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2rn6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a77c778-dd83-4e2e-b45
a-ca16e3922b45,},Annotations:map[string]string{io.kubernetes.container.hash: 9855ca01,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e8068bfab6214b30b06e6ea1061ef0a1fdb69e653672c29bb9d353b14e1fc56,PodSandboxId:44b0cdf3876f13c6e331f982b09d269acd6f6da9d1d02e4a173f8848430acff9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17115880744
96417762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20c1951e-7da8-4025-bbcf-2da60f87f3ab,},Annotations:map[string]string{io.kubernetes.container.hash: f91dd921,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05add19d22fda2570b52ec7b279575972ce3d4e3ed1b15e14a842c35667338fe,PodSandboxId:8cb6101361b286ade5c77b27888d852fef11b2154f3493ef7c329a8a2497f761,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711588054214339857,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415bbf6af6af03844395934967f1d53e,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f806518515daaea31de47549c3df138890f62d699f2eb1ad54abc661c5c095,PodSandboxId:fa06dc12c22a31cdfde9b37c5f79193892430055c56ff9b94ada4e5d0a5060bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711588054250077399,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27a7f528d676bb567a98dd9c93ba802,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afb39c84cbe45f2c1cd49e7ca7da5be67d0236bc61f3b4a4a8f3209867d57b0a,PodSandboxId:ee08dd080a76fdba4eeaa1378d59c389b012222d8d48ce7cf0a50226bbbc375e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711588054211596130,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23726df11311a725c5c2cea5aa7bbf82,},Annotations:map[string]string{io.kubernetes.container.hash: f47fb476,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1473b87f5ec0a698f975e02b643017deca11b4bf6f0d78f062b12764558bc159,PodSandboxId:dae6f2944cfe146fe51c15d22ea9ac3564c7566a59e12e80fb7f005ec86f6908,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711588054187453180,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82aa56ffa6fd4273e5fcfbb8ee4837e3,},Annotations:map[string]string{io.kubernetes.container.hash: ebc61d60,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bcef655d-0ee9-4b3b-865e-fdcf1338665e name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:16:59 embed-certs-808809 crio[700]: time="2024-03-28 01:16:59.725720950Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d61d6bec-6233-4437-941e-8718dc4798a8 name=/runtime.v1.RuntimeService/Version
	Mar 28 01:16:59 embed-certs-808809 crio[700]: time="2024-03-28 01:16:59.725795354Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d61d6bec-6233-4437-941e-8718dc4798a8 name=/runtime.v1.RuntimeService/Version
	Mar 28 01:16:59 embed-certs-808809 crio[700]: time="2024-03-28 01:16:59.727779048Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=30733947-4517-4d33-9911-b7855c7586b2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:16:59 embed-certs-808809 crio[700]: time="2024-03-28 01:16:59.728343128Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711588619728317779,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=30733947-4517-4d33-9911-b7855c7586b2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:16:59 embed-certs-808809 crio[700]: time="2024-03-28 01:16:59.728981405Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=807bd7bd-774a-42d4-b7a9-1985ed8d5a40 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:16:59 embed-certs-808809 crio[700]: time="2024-03-28 01:16:59.729037648Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=807bd7bd-774a-42d4-b7a9-1985ed8d5a40 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:16:59 embed-certs-808809 crio[700]: time="2024-03-28 01:16:59.729266045Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f76a2cc1195c1d8095c2eaf14403049b3091569f9fce2b5c62a421df809f99d6,PodSandboxId:d13be47cdfe2d55cfc468405cc79fffada20ae6e4ac957e9096e5f1a7cb8ed43,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588075222203315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-pgcdh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52452b24-490e-4999-b700-198c6f9b2fa1,},Annotations:map[string]string{io.kubernetes.container.hash: b98b61cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0fd0f91f9b4fa1ab25e032a39edcd5bc44b739fe492a9a2637fe447630c6048,PodSandboxId:fad0a5eeca45eab29bdac781c0dc7c18488da0d3f13f62e2f5cd1835585a98b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711588074985489557,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tjbhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: cdb30ca1-5165-4e24-888a-df79af7987d0,},Annotations:map[string]string{io.kubernetes.container.hash: b2fb58c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77d15563a8b85fedf40c66723f296b06f62de317f6266c0b3d4af970cfb7e7fd,PodSandboxId:d8ec51bb0fc8c97f17b2d0ad68ec94a6b2408e9dd2fda55f2c1987a82b0ce31d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588074840305163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2rn6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a77c778-dd83-4e2e-b45
a-ca16e3922b45,},Annotations:map[string]string{io.kubernetes.container.hash: 9855ca01,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e8068bfab6214b30b06e6ea1061ef0a1fdb69e653672c29bb9d353b14e1fc56,PodSandboxId:44b0cdf3876f13c6e331f982b09d269acd6f6da9d1d02e4a173f8848430acff9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17115880744
96417762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20c1951e-7da8-4025-bbcf-2da60f87f3ab,},Annotations:map[string]string{io.kubernetes.container.hash: f91dd921,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05add19d22fda2570b52ec7b279575972ce3d4e3ed1b15e14a842c35667338fe,PodSandboxId:8cb6101361b286ade5c77b27888d852fef11b2154f3493ef7c329a8a2497f761,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711588054214339857,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415bbf6af6af03844395934967f1d53e,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f806518515daaea31de47549c3df138890f62d699f2eb1ad54abc661c5c095,PodSandboxId:fa06dc12c22a31cdfde9b37c5f79193892430055c56ff9b94ada4e5d0a5060bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711588054250077399,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27a7f528d676bb567a98dd9c93ba802,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afb39c84cbe45f2c1cd49e7ca7da5be67d0236bc61f3b4a4a8f3209867d57b0a,PodSandboxId:ee08dd080a76fdba4eeaa1378d59c389b012222d8d48ce7cf0a50226bbbc375e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711588054211596130,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23726df11311a725c5c2cea5aa7bbf82,},Annotations:map[string]string{io.kubernetes.container.hash: f47fb476,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1473b87f5ec0a698f975e02b643017deca11b4bf6f0d78f062b12764558bc159,PodSandboxId:dae6f2944cfe146fe51c15d22ea9ac3564c7566a59e12e80fb7f005ec86f6908,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711588054187453180,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82aa56ffa6fd4273e5fcfbb8ee4837e3,},Annotations:map[string]string{io.kubernetes.container.hash: ebc61d60,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=807bd7bd-774a-42d4-b7a9-1985ed8d5a40 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:16:59 embed-certs-808809 crio[700]: time="2024-03-28 01:16:59.769964862Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2f1985e0-6179-40b8-9288-079a9556d8e5 name=/runtime.v1.RuntimeService/Version
	Mar 28 01:16:59 embed-certs-808809 crio[700]: time="2024-03-28 01:16:59.770113598Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2f1985e0-6179-40b8-9288-079a9556d8e5 name=/runtime.v1.RuntimeService/Version
	Mar 28 01:16:59 embed-certs-808809 crio[700]: time="2024-03-28 01:16:59.771264099Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c444d103-68f4-4d55-8478-c9dc36c2783b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:16:59 embed-certs-808809 crio[700]: time="2024-03-28 01:16:59.771986875Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711588619771959180,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c444d103-68f4-4d55-8478-c9dc36c2783b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:16:59 embed-certs-808809 crio[700]: time="2024-03-28 01:16:59.772528483Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=716b186a-b067-4689-8d5b-5f261d1805fc name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:16:59 embed-certs-808809 crio[700]: time="2024-03-28 01:16:59.772718997Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=716b186a-b067-4689-8d5b-5f261d1805fc name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:16:59 embed-certs-808809 crio[700]: time="2024-03-28 01:16:59.772945964Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f76a2cc1195c1d8095c2eaf14403049b3091569f9fce2b5c62a421df809f99d6,PodSandboxId:d13be47cdfe2d55cfc468405cc79fffada20ae6e4ac957e9096e5f1a7cb8ed43,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588075222203315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-pgcdh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52452b24-490e-4999-b700-198c6f9b2fa1,},Annotations:map[string]string{io.kubernetes.container.hash: b98b61cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0fd0f91f9b4fa1ab25e032a39edcd5bc44b739fe492a9a2637fe447630c6048,PodSandboxId:fad0a5eeca45eab29bdac781c0dc7c18488da0d3f13f62e2f5cd1835585a98b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711588074985489557,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tjbhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: cdb30ca1-5165-4e24-888a-df79af7987d0,},Annotations:map[string]string{io.kubernetes.container.hash: b2fb58c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77d15563a8b85fedf40c66723f296b06f62de317f6266c0b3d4af970cfb7e7fd,PodSandboxId:d8ec51bb0fc8c97f17b2d0ad68ec94a6b2408e9dd2fda55f2c1987a82b0ce31d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588074840305163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2rn6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a77c778-dd83-4e2e-b45
a-ca16e3922b45,},Annotations:map[string]string{io.kubernetes.container.hash: 9855ca01,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e8068bfab6214b30b06e6ea1061ef0a1fdb69e653672c29bb9d353b14e1fc56,PodSandboxId:44b0cdf3876f13c6e331f982b09d269acd6f6da9d1d02e4a173f8848430acff9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17115880744
96417762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20c1951e-7da8-4025-bbcf-2da60f87f3ab,},Annotations:map[string]string{io.kubernetes.container.hash: f91dd921,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05add19d22fda2570b52ec7b279575972ce3d4e3ed1b15e14a842c35667338fe,PodSandboxId:8cb6101361b286ade5c77b27888d852fef11b2154f3493ef7c329a8a2497f761,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711588054214339857,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415bbf6af6af03844395934967f1d53e,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f806518515daaea31de47549c3df138890f62d699f2eb1ad54abc661c5c095,PodSandboxId:fa06dc12c22a31cdfde9b37c5f79193892430055c56ff9b94ada4e5d0a5060bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711588054250077399,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27a7f528d676bb567a98dd9c93ba802,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afb39c84cbe45f2c1cd49e7ca7da5be67d0236bc61f3b4a4a8f3209867d57b0a,PodSandboxId:ee08dd080a76fdba4eeaa1378d59c389b012222d8d48ce7cf0a50226bbbc375e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711588054211596130,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23726df11311a725c5c2cea5aa7bbf82,},Annotations:map[string]string{io.kubernetes.container.hash: f47fb476,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1473b87f5ec0a698f975e02b643017deca11b4bf6f0d78f062b12764558bc159,PodSandboxId:dae6f2944cfe146fe51c15d22ea9ac3564c7566a59e12e80fb7f005ec86f6908,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711588054187453180,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82aa56ffa6fd4273e5fcfbb8ee4837e3,},Annotations:map[string]string{io.kubernetes.container.hash: ebc61d60,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=716b186a-b067-4689-8d5b-5f261d1805fc name=/runtime.v1.RuntimeService/ListContainers
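
The block above is CRI-O answering the three CRI gRPC calls that the kubelet and minikube's log collector poll in a loop: RuntimeService/Version, ImageService/ImageFsInfo and RuntimeService/ListContainers. A minimal way to issue the same queries by hand, assuming crictl is present in the node image and reusing the CRI-O socket path reported in the node annotations, is:

  out/minikube-linux-amd64 -p embed-certs-808809 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version"
  out/minikube-linux-amd64 -p embed-certs-808809 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo"
  out/minikube-linux-amd64 -p embed-certs-808809 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a"

The ListContainers response corresponds to the container status table below; the runtime itself is answering normally.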
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f76a2cc1195c1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   d13be47cdfe2d       coredns-76f75df574-pgcdh
	d0fd0f91f9b4f       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   9 minutes ago       Running             kube-proxy                0                   fad0a5eeca45e       kube-proxy-tjbhs
	77d15563a8b85       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   d8ec51bb0fc8c       coredns-76f75df574-2rn6k
	2e8068bfab621       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   44b0cdf3876f1       storage-provisioner
	e7f806518515d       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   9 minutes ago       Running             kube-controller-manager   2                   fa06dc12c22a3       kube-controller-manager-embed-certs-808809
	05add19d22fda       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   9 minutes ago       Running             kube-scheduler            2                   8cb6101361b28       kube-scheduler-embed-certs-808809
	afb39c84cbe45       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   ee08dd080a76f       etcd-embed-certs-808809
	1473b87f5ec0a       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   9 minutes ago       Running             kube-apiserver            2                   dae6f2944cfe1       kube-apiserver-embed-certs-808809
	
	
	==> coredns [77d15563a8b85fedf40c66723f296b06f62de317f6266c0b3d4af970cfb7e7fd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f76a2cc1195c1d8095c2eaf14403049b3091569f9fce2b5c62a421df809f99d6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
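
Both coredns replicas report the same configuration SHA512, i.e. they loaded an identical Corefile, and neither logs an error. If the rendered configuration ever needs to be checked, one way to do it (assuming the kubectl context is named after the profile, which is minikube's default) is:

  kubectl --context embed-certs-808809 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'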
	
	
	==> describe nodes <==
	Name:               embed-certs-808809
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-808809
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=embed-certs-808809
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_28T01_07_40_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 01:07:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-808809
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 01:16:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 01:13:08 +0000   Thu, 28 Mar 2024 01:07:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 01:13:08 +0000   Thu, 28 Mar 2024 01:07:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 01:13:08 +0000   Thu, 28 Mar 2024 01:07:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 01:13:08 +0000   Thu, 28 Mar 2024 01:07:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.210
	  Hostname:    embed-certs-808809
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e130d39af5334fbc87366b845f05a2e1
	  System UUID:                e130d39a-f533-4fbc-8736-6b845f05a2e1
	  Boot ID:                    f85ced42-5373-45cf-9a97-c85fe4592bc2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-2rn6k                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 coredns-76f75df574-pgcdh                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 etcd-embed-certs-808809                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-embed-certs-808809             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-embed-certs-808809    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-tjbhs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-embed-certs-808809             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-57f55c9bc5-bqbfl               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m7s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m4s   kube-proxy       
	  Normal  Starting                 9m20s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m20s  kubelet          Node embed-certs-808809 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s  kubelet          Node embed-certs-808809 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s  kubelet          Node embed-certs-808809 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m20s  kubelet          Node embed-certs-808809 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m10s  kubelet          Node embed-certs-808809 status is now: NodeReady
	  Normal  RegisteredNode           9m8s   node-controller  Node embed-certs-808809 event: Registered Node embed-certs-808809 in Controller
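
The node description itself is healthy: Ready since 01:07:50, no taints, and no memory, disk or PID pressure. The pod worth noting is metrics-server-57f55c9bc5-bqbfl, which appears under Non-terminated Pods but has no corresponding container in the container status table above, consistent with the metrics.k8s.io errors in the apiserver and controller-manager logs below. A first look, using only names taken from this report, could be:

  kubectl --context embed-certs-808809 describe node embed-certs-808809
  kubectl --context embed-certs-808809 -n kube-system describe pod metrics-server-57f55c9bc5-bqbfl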
	
	
	==> dmesg <==
	[  +0.041287] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.556473] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.837413] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.657032] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.617437] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.067060] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059267] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.174312] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.167803] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.331014] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[  +4.752980] systemd-fstab-generator[781]: Ignoring "noauto" option for root device
	[  +0.064843] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.699936] systemd-fstab-generator[903]: Ignoring "noauto" option for root device
	[  +5.673497] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.777167] kauditd_printk_skb: 74 callbacks suppressed
	[Mar28 01:07] kauditd_printk_skb: 2 callbacks suppressed
	[ +26.779774] kauditd_printk_skb: 7 callbacks suppressed
	[  +2.375091] systemd-fstab-generator[3419]: Ignoring "noauto" option for root device
	[  +4.660659] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.642342] systemd-fstab-generator[3740]: Ignoring "noauto" option for root device
	[ +12.937190] systemd-fstab-generator[3942]: Ignoring "noauto" option for root device
	[  +0.082277] kauditd_printk_skb: 14 callbacks suppressed
	[Mar28 01:08] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [afb39c84cbe45f2c1cd49e7ca7da5be67d0236bc61f3b4a4a8f3209867d57b0a] <==
	{"level":"info","ts":"2024-03-28T01:07:34.825241Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"60056697e173d7ad switched to configuration voters=(6919049205033195437)"}
	{"level":"info","ts":"2024-03-28T01:07:34.826168Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3e0f7cc7df3e38c1","local-member-id":"60056697e173d7ad","added-peer-id":"60056697e173d7ad","added-peer-peer-urls":["https://192.168.72.210:2380"]}
	{"level":"info","ts":"2024-03-28T01:07:34.84578Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-28T01:07:34.846303Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.210:2380"}
	{"level":"info","ts":"2024-03-28T01:07:34.848928Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.210:2380"}
	{"level":"info","ts":"2024-03-28T01:07:34.846365Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"60056697e173d7ad","initial-advertise-peer-urls":["https://192.168.72.210:2380"],"listen-peer-urls":["https://192.168.72.210:2380"],"advertise-client-urls":["https://192.168.72.210:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.210:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-28T01:07:34.846394Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-28T01:07:34.87292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"60056697e173d7ad is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-28T01:07:34.872984Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"60056697e173d7ad became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-28T01:07:34.873019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"60056697e173d7ad received MsgPreVoteResp from 60056697e173d7ad at term 1"}
	{"level":"info","ts":"2024-03-28T01:07:34.873031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"60056697e173d7ad became candidate at term 2"}
	{"level":"info","ts":"2024-03-28T01:07:34.873037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"60056697e173d7ad received MsgVoteResp from 60056697e173d7ad at term 2"}
	{"level":"info","ts":"2024-03-28T01:07:34.873044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"60056697e173d7ad became leader at term 2"}
	{"level":"info","ts":"2024-03-28T01:07:34.873052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 60056697e173d7ad elected leader 60056697e173d7ad at term 2"}
	{"level":"info","ts":"2024-03-28T01:07:34.877949Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T01:07:34.882145Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"60056697e173d7ad","local-member-attributes":"{Name:embed-certs-808809 ClientURLs:[https://192.168.72.210:2379]}","request-path":"/0/members/60056697e173d7ad/attributes","cluster-id":"3e0f7cc7df3e38c1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-28T01:07:34.883208Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T01:07:34.883224Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T01:07:34.884013Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-28T01:07:34.890016Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-28T01:07:34.886441Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-28T01:07:34.886496Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3e0f7cc7df3e38c1","local-member-id":"60056697e173d7ad","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T01:07:34.890314Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T01:07:34.890363Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T01:07:34.898242Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.210:2379"}
	
	
	==> kernel <==
	 01:17:00 up 14 min,  0 users,  load average: 0.15, 0.15, 0.10
	Linux embed-certs-808809 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1473b87f5ec0a698f975e02b643017deca11b4bf6f0d78f062b12764558bc159] <==
	I0328 01:10:54.929912       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:12:36.808391       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:12:36.808771       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0328 01:12:37.809826       1 handler_proxy.go:93] no RequestInfo found in the context
	W0328 01:12:37.809963       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:12:37.810040       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0328 01:12:37.810079       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0328 01:12:37.810081       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0328 01:12:37.811362       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:13:37.811153       1 handler_proxy.go:93] no RequestInfo found in the context
	W0328 01:13:37.811463       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:13:37.811497       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0328 01:13:37.811546       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0328 01:13:37.811579       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0328 01:13:37.813468       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:15:37.812521       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:15:37.812829       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0328 01:15:37.812948       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:15:37.814226       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:15:37.814328       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0328 01:15:37.814366       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
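
Every entry in this block is the same symptom repeating: the aggregated APIService v1beta1.metrics.k8s.io answers 503, so the apiserver cannot fetch its OpenAPI spec and keeps requeueing the item. The apiserver itself is fine; the backend (metrics-server) is not serving. A hedged first check, using only names that appear in this report, would be:

  kubectl --context embed-certs-808809 get apiservice v1beta1.metrics.k8s.io
  kubectl --context embed-certs-808809 -n kube-system describe pod metrics-server-57f55c9bc5-bqbfl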
	
	
	==> kube-controller-manager [e7f806518515daaea31de47549c3df138890f62d699f2eb1ad54abc661c5c095] <==
	I0328 01:11:23.255240       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:11:52.678047       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:11:53.264470       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:12:22.687232       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:12:23.276506       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:12:52.693470       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:12:53.285071       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:13:22.700196       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:13:23.293790       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0328 01:13:38.526391       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="303.031µs"
	I0328 01:13:52.523265       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="127.685µs"
	E0328 01:13:52.706771       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:13:53.303230       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:14:22.717303       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:14:23.311393       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:14:52.722992       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:14:53.320630       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:15:22.729420       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:15:23.330931       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:15:52.741014       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:15:53.339101       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:16:22.749322       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:16:23.346513       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:16:52.756533       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:16:53.354491       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [d0fd0f91f9b4fa1ab25e032a39edcd5bc44b739fe492a9a2637fe447630c6048] <==
	I0328 01:07:55.207212       1 server_others.go:72] "Using iptables proxy"
	I0328 01:07:55.234374       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.72.210"]
	I0328 01:07:55.332271       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 01:07:55.332299       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 01:07:55.332321       1 server_others.go:168] "Using iptables Proxier"
	I0328 01:07:55.337407       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 01:07:55.337779       1 server.go:865] "Version info" version="v1.29.3"
	I0328 01:07:55.338171       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:07:55.345652       1 config.go:188] "Starting service config controller"
	I0328 01:07:55.345785       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 01:07:55.346017       1 config.go:97] "Starting endpoint slice config controller"
	I0328 01:07:55.346070       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 01:07:55.348918       1 config.go:315] "Starting node config controller"
	I0328 01:07:55.348973       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 01:07:55.446612       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0328 01:07:55.446707       1 shared_informer.go:318] Caches are synced for service config
	I0328 01:07:55.449472       1 shared_informer.go:318] Caches are synced for node config
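
kube-proxy came up cleanly in iptables mode, single-stack IPv4 (the VM kernel reports no IPv6 iptables support), and all three of its config caches synced. If the service NAT rules it programs need to be inspected, the standard kube-proxy chains can be listed from the node; the chain names below are the upstream defaults, not something taken from this log:

  out/minikube-linux-amd64 -p embed-certs-808809 ssh "sudo iptables -t nat -L KUBE-SERVICES -n"
  out/minikube-linux-amd64 -p embed-certs-808809 ssh "sudo iptables -t nat -L KUBE-NODEPORTS -n"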
	
	
	==> kube-scheduler [05add19d22fda2570b52ec7b279575972ce3d4e3ed1b15e14a842c35667338fe] <==
	W0328 01:07:36.833624       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0328 01:07:36.833661       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0328 01:07:36.836022       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0328 01:07:36.837284       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0328 01:07:36.839470       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0328 01:07:36.839517       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0328 01:07:37.718545       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0328 01:07:37.718735       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0328 01:07:37.807926       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0328 01:07:37.808017       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0328 01:07:37.846440       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0328 01:07:37.846548       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0328 01:07:37.891574       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0328 01:07:37.891773       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0328 01:07:37.915284       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0328 01:07:37.915394       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0328 01:07:37.949092       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0328 01:07:37.949785       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0328 01:07:38.048365       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0328 01:07:38.048702       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0328 01:07:38.094645       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0328 01:07:38.095089       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0328 01:07:38.197789       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0328 01:07:38.198075       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:07:40.908626       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 28 01:14:40 embed-certs-808809 kubelet[3747]: E0328 01:14:40.562264    3747 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 01:14:40 embed-certs-808809 kubelet[3747]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 01:14:40 embed-certs-808809 kubelet[3747]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 01:14:40 embed-certs-808809 kubelet[3747]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 01:14:40 embed-certs-808809 kubelet[3747]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 01:14:45 embed-certs-808809 kubelet[3747]: E0328 01:14:45.506178    3747 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bqbfl" podUID="8434fd7d-838b-4cf2-96a3-e4d613633871"
	Mar 28 01:14:57 embed-certs-808809 kubelet[3747]: E0328 01:14:57.505808    3747 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bqbfl" podUID="8434fd7d-838b-4cf2-96a3-e4d613633871"
	Mar 28 01:15:12 embed-certs-808809 kubelet[3747]: E0328 01:15:12.506125    3747 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bqbfl" podUID="8434fd7d-838b-4cf2-96a3-e4d613633871"
	Mar 28 01:15:27 embed-certs-808809 kubelet[3747]: E0328 01:15:27.506988    3747 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bqbfl" podUID="8434fd7d-838b-4cf2-96a3-e4d613633871"
	Mar 28 01:15:40 embed-certs-808809 kubelet[3747]: E0328 01:15:40.561400    3747 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 01:15:40 embed-certs-808809 kubelet[3747]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 01:15:40 embed-certs-808809 kubelet[3747]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 01:15:40 embed-certs-808809 kubelet[3747]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 01:15:40 embed-certs-808809 kubelet[3747]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 01:15:41 embed-certs-808809 kubelet[3747]: E0328 01:15:41.504659    3747 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bqbfl" podUID="8434fd7d-838b-4cf2-96a3-e4d613633871"
	Mar 28 01:15:55 embed-certs-808809 kubelet[3747]: E0328 01:15:55.505712    3747 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bqbfl" podUID="8434fd7d-838b-4cf2-96a3-e4d613633871"
	Mar 28 01:16:08 embed-certs-808809 kubelet[3747]: E0328 01:16:08.505142    3747 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bqbfl" podUID="8434fd7d-838b-4cf2-96a3-e4d613633871"
	Mar 28 01:16:19 embed-certs-808809 kubelet[3747]: E0328 01:16:19.505293    3747 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bqbfl" podUID="8434fd7d-838b-4cf2-96a3-e4d613633871"
	Mar 28 01:16:30 embed-certs-808809 kubelet[3747]: E0328 01:16:30.506382    3747 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bqbfl" podUID="8434fd7d-838b-4cf2-96a3-e4d613633871"
	Mar 28 01:16:40 embed-certs-808809 kubelet[3747]: E0328 01:16:40.560986    3747 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 01:16:40 embed-certs-808809 kubelet[3747]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 01:16:40 embed-certs-808809 kubelet[3747]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 01:16:40 embed-certs-808809 kubelet[3747]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 01:16:40 embed-certs-808809 kubelet[3747]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 01:16:45 embed-certs-808809 kubelet[3747]: E0328 01:16:45.505208    3747 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bqbfl" podUID="8434fd7d-838b-4cf2-96a3-e4d613633871"
	
	
	==> storage-provisioner [2e8068bfab6214b30b06e6ea1061ef0a1fdb69e653672c29bb9d353b14e1fc56] <==
	I0328 01:07:54.678794       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0328 01:07:54.697804       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0328 01:07:54.698080       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0328 01:07:54.712597       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0328 01:07:54.717570       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"17a20dd2-996d-46f6-a17d-3df61e572ba7", APIVersion:"v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-808809_12b76f71-59a0-4979-a1f1-79806ef62186 became leader
	I0328 01:07:54.728148       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-808809_12b76f71-59a0-4979-a1f1-79806ef62186!
	I0328 01:07:54.833978       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-808809_12b76f71-59a0-4979-a1f1-79806ef62186!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-808809 -n embed-certs-808809
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-808809 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-bqbfl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-808809 describe pod metrics-server-57f55c9bc5-bqbfl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-808809 describe pod metrics-server-57f55c9bc5-bqbfl: exit status 1 (64.139813ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-bqbfl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-808809 describe pod metrics-server-57f55c9bc5-bqbfl: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.34s)
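The "NotFound" in the stderr above indicates that the metrics-server pod named in the earlier non-running-pod listing no longer exists by the time the post-mortem describe runs. For reference, a manual re-check that tolerates that kind of pod churn can select by label rather than by the captured pod name; this is only a sketch using the same kubectl context shown in the logs, and the k8s-app=metrics-server label is assumed from the upstream metrics-server manifests rather than confirmed by this report:

	kubectl --context embed-certs-808809 get po -A --field-selector=status.phase!=Running
	kubectl --context embed-certs-808809 -n kube-system describe pod -l k8s-app=metrics-server   # label selector assumed, not taken from this report

The first command is the same query the helper uses to list non-running pods; the second describes whichever metrics-server replica currently exists instead of a fixed pod name.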

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-283961 -n default-k8s-diff-port-283961
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-28 01:17:42.027755502 +0000 UTC m=+6293.279234434
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-283961 -n default-k8s-diff-port-283961
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-283961 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-283961 logs -n 25: (2.078463245s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| stop    | -p no-preload-248059                                   | no-preload-248059            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-808809            | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-808809                                  | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p newest-cni-013642             | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p newest-cni-013642                  | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p newest-cni-013642 --memory=2200 --alsologtostderr   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:56 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |                |                     |                     |
	| image   | newest-cni-013642 image list                           | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	|         | --format=json                                          |                              |         |                |                     |                     |
	| pause   | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| unpause | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| delete  | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	| delete  | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	| start   | -p                                                     | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:57 UTC |
	|         | default-k8s-diff-port-283961                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-986088        | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-248059                  | no-preload-248059            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-283961  | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC | 28 Mar 24 00:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | default-k8s-diff-port-283961                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| start   | -p no-preload-248059 --memory=2200                     | no-preload-248059            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC | 28 Mar 24 01:09 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-808809                 | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-808809                                  | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC | 28 Mar 24 01:07 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-986088                              | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC | 28 Mar 24 00:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-986088             | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC | 28 Mar 24 00:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-986088                              | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-283961       | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 01:00 UTC | 28 Mar 24 01:08 UTC |
	|         | default-k8s-diff-port-283961                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 01:00:05
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 01:00:05.675380 1131600 out.go:291] Setting OutFile to fd 1 ...
	I0328 01:00:05.675675 1131600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 01:00:05.675710 1131600 out.go:304] Setting ErrFile to fd 2...
	I0328 01:00:05.675718 1131600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 01:00:05.676017 1131600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 01:00:05.676919 1131600 out.go:298] Setting JSON to false
	I0328 01:00:05.678046 1131600 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":31303,"bootTime":1711556303,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0328 01:00:05.678129 1131600 start.go:139] virtualization: kvm guest
	I0328 01:00:05.681128 1131600 out.go:177] * [default-k8s-diff-port-283961] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0328 01:00:05.683139 1131600 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 01:00:05.683129 1131600 notify.go:220] Checking for updates...
	I0328 01:00:05.685082 1131600 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 01:00:05.686765 1131600 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:00:05.688389 1131600 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	I0328 01:00:05.690187 1131600 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0328 01:00:05.691887 1131600 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 01:00:05.693775 1131600 config.go:182] Loaded profile config "default-k8s-diff-port-283961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:00:05.694270 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:00:05.694323 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:00:05.709757 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35333
	I0328 01:00:05.710275 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:00:05.710875 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:00:05.710900 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:00:05.711323 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:00:05.711531 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:00:05.711893 1131600 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 01:00:05.712342 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:00:05.712392 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:00:05.727583 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44249
	I0328 01:00:05.728107 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:00:05.728595 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:00:05.728625 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:00:05.728945 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:00:05.729170 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:00:05.763895 1131600 out.go:177] * Using the kvm2 driver based on existing profile
	I0328 01:00:05.765397 1131600 start.go:297] selected driver: kvm2
	I0328 01:00:05.765431 1131600 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:00:05.765564 1131600 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 01:00:05.766282 1131600 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 01:00:05.766391 1131600 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18485-1069254/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0328 01:00:05.783130 1131600 install.go:137] /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0328 01:00:05.783602 1131600 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:00:05.783724 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:00:05.783745 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:00:05.783795 1131600 start.go:340] cluster config:
	{Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:00:05.783949 1131600 iso.go:125] acquiring lock: {Name:mk3da1fa7d63a581c817f327d86a827697458cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 01:00:05.785871 1131600 out.go:177] * Starting "default-k8s-diff-port-283961" primary control-plane node in "default-k8s-diff-port-283961" cluster
	I0328 01:00:02.570474 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:05.787210 1131600 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 01:00:05.787259 1131600 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0328 01:00:05.787272 1131600 cache.go:56] Caching tarball of preloaded images
	I0328 01:00:05.787364 1131600 preload.go:173] Found /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0328 01:00:05.787376 1131600 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0328 01:00:05.787509 1131600 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/config.json ...
	I0328 01:00:05.787742 1131600 start.go:360] acquireMachinesLock for default-k8s-diff-port-283961: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 01:00:08.650481 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:11.722571 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:17.802536 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:20.874568 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:26.954473 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:30.026674 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:36.106489 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:39.178555 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:45.258539 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:48.330581 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:54.410577 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:57.482545 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:03.562558 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:06.634602 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:12.714559 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:15.786597 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:21.866544 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:24.938619 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:31.018631 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:34.090562 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:40.170864 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:43.242565 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:49.322492 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:52.394572 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:58.474562 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:01.546621 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:07.626510 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:10.698534 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:13.703348 1130949 start.go:364] duration metric: took 4m25.677777198s to acquireMachinesLock for "embed-certs-808809"
	I0328 01:02:13.703416 1130949 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:02:13.703429 1130949 fix.go:54] fixHost starting: 
	I0328 01:02:13.703888 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:02:13.703923 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:02:13.719480 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44835
	I0328 01:02:13.719968 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:02:13.720450 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:02:13.720475 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:02:13.720774 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:02:13.721011 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:13.721182 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:02:13.722796 1130949 fix.go:112] recreateIfNeeded on embed-certs-808809: state=Stopped err=<nil>
	I0328 01:02:13.722828 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	W0328 01:02:13.722972 1130949 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:02:13.724895 1130949 out.go:177] * Restarting existing kvm2 VM for "embed-certs-808809" ...
	I0328 01:02:13.700647 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:02:13.700689 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:02:13.701054 1130827 buildroot.go:166] provisioning hostname "no-preload-248059"
	I0328 01:02:13.701085 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:02:13.701344 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:02:13.703200 1130827 machine.go:97] duration metric: took 4m37.399616994s to provisionDockerMachine
	I0328 01:02:13.703243 1130827 fix.go:56] duration metric: took 4m37.42352766s for fixHost
	I0328 01:02:13.703249 1130827 start.go:83] releasing machines lock for "no-preload-248059", held for 4m37.423563163s
	W0328 01:02:13.703274 1130827 start.go:713] error starting host: provision: host is not running
	W0328 01:02:13.703400 1130827 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0328 01:02:13.703411 1130827 start.go:728] Will try again in 5 seconds ...
	I0328 01:02:13.726437 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Start
	I0328 01:02:13.726574 1130949 main.go:141] libmachine: (embed-certs-808809) Ensuring networks are active...
	I0328 01:02:13.727407 1130949 main.go:141] libmachine: (embed-certs-808809) Ensuring network default is active
	I0328 01:02:13.727667 1130949 main.go:141] libmachine: (embed-certs-808809) Ensuring network mk-embed-certs-808809 is active
	I0328 01:02:13.728050 1130949 main.go:141] libmachine: (embed-certs-808809) Getting domain xml...
	I0328 01:02:13.728836 1130949 main.go:141] libmachine: (embed-certs-808809) Creating domain...
	I0328 01:02:14.931757 1130949 main.go:141] libmachine: (embed-certs-808809) Waiting to get IP...
	I0328 01:02:14.932921 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:14.933298 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:14.933396 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:14.933294 1131950 retry.go:31] will retry after 279.257708ms: waiting for machine to come up
	I0328 01:02:15.213830 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:15.214439 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:15.214472 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:15.214415 1131950 retry.go:31] will retry after 387.406107ms: waiting for machine to come up
	I0328 01:02:15.603078 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:15.603464 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:15.603497 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:15.603431 1131950 retry.go:31] will retry after 466.553599ms: waiting for machine to come up
	I0328 01:02:16.072165 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:16.072702 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:16.072732 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:16.072643 1131950 retry.go:31] will retry after 375.428381ms: waiting for machine to come up
	I0328 01:02:16.449155 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:16.449614 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:16.449652 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:16.449553 1131950 retry.go:31] will retry after 466.238903ms: waiting for machine to come up
	I0328 01:02:16.917246 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:16.917697 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:16.917723 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:16.917633 1131950 retry.go:31] will retry after 772.819544ms: waiting for machine to come up
	I0328 01:02:17.691645 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:17.692121 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:17.692151 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:17.692071 1131950 retry.go:31] will retry after 1.19065976s: waiting for machine to come up
	I0328 01:02:18.704949 1130827 start.go:360] acquireMachinesLock for no-preload-248059: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 01:02:18.884525 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:18.885019 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:18.885044 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:18.884980 1131950 retry.go:31] will retry after 1.434726863s: waiting for machine to come up
	I0328 01:02:20.321473 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:20.322009 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:20.322035 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:20.321951 1131950 retry.go:31] will retry after 1.275277555s: waiting for machine to come up
	I0328 01:02:21.599454 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:21.600049 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:21.600074 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:21.599982 1131950 retry.go:31] will retry after 1.852516502s: waiting for machine to come up
	I0328 01:02:23.455282 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:23.455760 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:23.455830 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:23.455746 1131950 retry.go:31] will retry after 2.056736141s: waiting for machine to come up
	I0328 01:02:25.514112 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:25.514538 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:25.514569 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:25.514492 1131950 retry.go:31] will retry after 2.711520437s: waiting for machine to come up
	I0328 01:02:32.751719 1131323 start.go:364] duration metric: took 3m27.302408957s to acquireMachinesLock for "old-k8s-version-986088"
	I0328 01:02:32.751823 1131323 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:02:32.751833 1131323 fix.go:54] fixHost starting: 
	I0328 01:02:32.752289 1131323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:02:32.752326 1131323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:02:32.770119 1131323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45807
	I0328 01:02:32.770723 1131323 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:02:32.771352 1131323 main.go:141] libmachine: Using API Version  1
	I0328 01:02:32.771380 1131323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:02:32.771790 1131323 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:02:32.772020 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:32.772206 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetState
	I0328 01:02:32.773947 1131323 fix.go:112] recreateIfNeeded on old-k8s-version-986088: state=Stopped err=<nil>
	I0328 01:02:32.773980 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	W0328 01:02:32.774166 1131323 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:02:32.776416 1131323 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-986088" ...
	I0328 01:02:28.229576 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:28.229970 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:28.230000 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:28.229920 1131950 retry.go:31] will retry after 3.231405371s: waiting for machine to come up
	I0328 01:02:31.463477 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.463884 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has current primary IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.463902 1130949 main.go:141] libmachine: (embed-certs-808809) Found IP for machine: 192.168.72.210
	I0328 01:02:31.463915 1130949 main.go:141] libmachine: (embed-certs-808809) Reserving static IP address...
	I0328 01:02:31.464394 1130949 main.go:141] libmachine: (embed-certs-808809) Reserved static IP address: 192.168.72.210
	I0328 01:02:31.464413 1130949 main.go:141] libmachine: (embed-certs-808809) Waiting for SSH to be available...
	I0328 01:02:31.464439 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "embed-certs-808809", mac: "52:54:00:60:d4:d2", ip: "192.168.72.210"} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.464464 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | skip adding static IP to network mk-embed-certs-808809 - found existing host DHCP lease matching {name: "embed-certs-808809", mac: "52:54:00:60:d4:d2", ip: "192.168.72.210"}
	I0328 01:02:31.464480 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Getting to WaitForSSH function...
	I0328 01:02:31.466488 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.466876 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.466916 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.467054 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Using SSH client type: external
	I0328 01:02:31.467085 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa (-rw-------)
	I0328 01:02:31.467124 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:02:31.467138 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | About to run SSH command:
	I0328 01:02:31.467155 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | exit 0
	I0328 01:02:31.590708 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | SSH cmd err, output: <nil>: 
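Once the lease shows up, WaitForSSH repeatedly drives an external ssh client with the non-interactive options listed above until a plain "exit 0" succeeds. A rough sketch of that readiness probe, assuming placeholder host, user and key values rather than the real paths from this run:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // sshReady reports whether "exit 0" can be run over SSH with the same kind of
    // non-interactive options seen in the log above.
    func sshReady(host, user, keyPath string) bool {
    	cmd := exec.Command("ssh",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "PasswordAuthentication=no",
    		"-i", keyPath,
    		fmt.Sprintf("%s@%s", user, host),
    		"exit 0")
    	return cmd.Run() == nil
    }

    func main() {
    	for !sshReady("192.168.72.210", "docker", "/path/to/id_rsa") {
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("SSH is available")
    }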
	I0328 01:02:31.591111 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetConfigRaw
	I0328 01:02:31.591959 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:31.594592 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.595075 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.595114 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.595364 1130949 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/config.json ...
	I0328 01:02:31.595634 1130949 machine.go:94] provisionDockerMachine start ...
	I0328 01:02:31.595656 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:31.595901 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.598184 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.598529 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.598556 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.598681 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:31.598851 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.599012 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.599163 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:31.599333 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:31.599604 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:31.599619 1130949 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:02:31.703241 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:02:31.703272 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetMachineName
	I0328 01:02:31.703575 1130949 buildroot.go:166] provisioning hostname "embed-certs-808809"
	I0328 01:02:31.703602 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetMachineName
	I0328 01:02:31.703779 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.706495 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.706777 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.706799 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.706978 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:31.707146 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.707334 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.707580 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:31.707765 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:31.707985 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:31.708004 1130949 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-808809 && echo "embed-certs-808809" | sudo tee /etc/hostname
	I0328 01:02:31.821578 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-808809
	
	I0328 01:02:31.821608 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.824412 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.824791 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.824825 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.825030 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:31.825253 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.825432 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.825589 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:31.825758 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:31.825950 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:31.825976 1130949 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-808809' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-808809/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-808809' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:02:31.937655 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:02:31.937701 1130949 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:02:31.937728 1130949 buildroot.go:174] setting up certificates
	I0328 01:02:31.937742 1130949 provision.go:84] configureAuth start
	I0328 01:02:31.937754 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetMachineName
	I0328 01:02:31.938093 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:31.940874 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.941328 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.941360 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.941580 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.944250 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.944580 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.944610 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.944828 1130949 provision.go:143] copyHostCerts
	I0328 01:02:31.944910 1130949 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:02:31.944926 1130949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:02:31.945006 1130949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:02:31.945151 1130949 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:02:31.945162 1130949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:02:31.945205 1130949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:02:31.945285 1130949 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:02:31.945294 1130949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:02:31.945330 1130949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:02:31.945400 1130949 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.embed-certs-808809 san=[127.0.0.1 192.168.72.210 embed-certs-808809 localhost minikube]
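provision.go:117 then issues a server certificate signed by the local minikube CA with SANs for 127.0.0.1, 192.168.72.210, the hostname, localhost and minikube. A self-contained sketch of that kind of CA-signed server certificate with Go's crypto/x509; it creates a throwaway CA instead of loading ca.pem/ca-key.pem, so it is an illustration of the step, not minikube's code path:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for .minikube/certs/ca.pem + ca-key.pem.
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate with the same kind of SANs seen in the log.
    	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-808809"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.210")},
    		DNSNames:     []string{"embed-certs-808809", "localhost", "minikube"},
    	}
    	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    	fmt.Fprintln(os.Stderr, "server certificate written to stdout")
    }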
	I0328 01:02:32.070925 1130949 provision.go:177] copyRemoteCerts
	I0328 01:02:32.071007 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:02:32.071067 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.073876 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.074295 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.074339 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.074541 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.074754 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.074931 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.075091 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.158945 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:02:32.184903 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0328 01:02:32.210411 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:02:32.235788 1130949 provision.go:87] duration metric: took 298.03126ms to configureAuth
	I0328 01:02:32.235827 1130949 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:02:32.236116 1130949 config.go:182] Loaded profile config "embed-certs-808809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:02:32.236336 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.239186 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.239520 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.239555 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.239782 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.240036 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.240257 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.240431 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.240633 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:32.240836 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:32.240862 1130949 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:02:32.513263 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:02:32.513298 1130949 machine.go:97] duration metric: took 917.647337ms to provisionDockerMachine
	I0328 01:02:32.513314 1130949 start.go:293] postStartSetup for "embed-certs-808809" (driver="kvm2")
	I0328 01:02:32.513326 1130949 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:02:32.513365 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.513727 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:02:32.513770 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.516906 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.517382 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.517425 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.517603 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.517831 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.517989 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.518115 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.600013 1130949 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:02:32.604953 1130949 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:02:32.604983 1130949 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:02:32.605057 1130949 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:02:32.605148 1130949 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:02:32.605265 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:02:32.617685 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:02:32.646415 1130949 start.go:296] duration metric: took 133.084551ms for postStartSetup
	I0328 01:02:32.646462 1130949 fix.go:56] duration metric: took 18.943034019s for fixHost
	I0328 01:02:32.646490 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.649346 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.649686 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.649717 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.649864 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.650191 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.650444 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.650637 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.650844 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:32.651036 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:32.651069 1130949 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 01:02:32.751522 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587752.718800758
	
	I0328 01:02:32.751547 1130949 fix.go:216] guest clock: 1711587752.718800758
	I0328 01:02:32.751556 1130949 fix.go:229] Guest: 2024-03-28 01:02:32.718800758 +0000 UTC Remote: 2024-03-28 01:02:32.646466137 +0000 UTC m=+284.780134501 (delta=72.334621ms)
	I0328 01:02:32.751598 1130949 fix.go:200] guest clock delta is within tolerance: 72.334621ms
	I0328 01:02:32.751610 1130949 start.go:83] releasing machines lock for "embed-certs-808809", held for 19.048217918s
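The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the machine because the roughly 72ms delta is within tolerance. A small sketch of that comparison; the one-second tolerance is an assumption for illustration, not minikube's configured value:

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    // clockDelta parses the guest's `date +%s.%N` output (assuming the fractional
    // part has nine digits, as in the log) and returns its offset from local time.
    func clockDelta(guestOutput string, local time.Time) (time.Duration, error) {
    	parts := strings.SplitN(strings.TrimSpace(guestOutput), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return 0, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
    	}
    	return local.Sub(time.Unix(sec, nsec)), nil
    }

    func main() {
    	// Values taken from the log entry above.
    	delta, err := clockDelta("1711587752.718800758", time.Unix(1711587752, 646466137))
    	if err != nil {
    		panic(err)
    	}
    	within := math.Abs(float64(delta)) <= float64(time.Second)
    	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, within)
    }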
	I0328 01:02:32.751638 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.751953 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:32.754795 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.755205 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.755240 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.755454 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.756111 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.756320 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.756412 1130949 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:02:32.756475 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.756612 1130949 ssh_runner.go:195] Run: cat /version.json
	I0328 01:02:32.756646 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.759337 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.759468 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.759788 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.759808 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.759845 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.759866 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.760009 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.760018 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.760214 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.760222 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.760364 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.760532 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.760639 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.760698 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.840137 1130949 ssh_runner.go:195] Run: systemctl --version
	I0328 01:02:32.874039 1130949 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:02:33.020534 1130949 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:02:33.027141 1130949 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:02:33.027213 1130949 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:02:33.043738 1130949 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
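The find/mv step above renames any bridge or podman CNI configs to *.mk_disabled so only the CNI that minikube manages stays active. Roughly the same effect, sketched in Go:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	// Move bridge/podman CNI configs out of the way, like the
    	// `find /etc/cni/net.d ... -exec mv {} {}.mk_disabled` step above.
    	matches, err := filepath.Glob("/etc/cni/net.d/*")
    	if err != nil {
    		panic(err)
    	}
    	var disabled []string
    	for _, m := range matches {
    		base := filepath.Base(m)
    		if !strings.Contains(base, "bridge") && !strings.Contains(base, "podman") {
    			continue
    		}
    		if strings.HasSuffix(base, ".mk_disabled") {
    			continue
    		}
    		if err := os.Rename(m, m+".mk_disabled"); err != nil {
    			fmt.Fprintln(os.Stderr, err)
    			continue
    		}
    		disabled = append(disabled, m)
    	}
    	fmt.Println("disabled:", disabled)
    }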
	I0328 01:02:33.043767 1130949 start.go:494] detecting cgroup driver to use...
	I0328 01:02:33.043840 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:02:33.064332 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:02:33.081926 1130949 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:02:33.082016 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:02:33.097179 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:02:33.113157 1130949 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:02:33.233183 1130949 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:02:33.374061 1130949 docker.go:233] disabling docker service ...
	I0328 01:02:33.374145 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:02:33.389813 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:02:33.403439 1130949 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:02:33.546146 1130949 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:02:33.706968 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:02:33.722279 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:02:33.742578 1130949 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 01:02:33.742652 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.754966 1130949 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:02:33.755027 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.767170 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.779960 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.792448 1130949 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:02:33.804912 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.818038 1130949 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.838794 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.852157 1130949 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:02:33.862921 1130949 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:02:33.862981 1130949 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:02:33.880973 1130949 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:02:33.892698 1130949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:02:34.029903 1130949 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 01:02:34.170977 1130949 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:02:34.171074 1130949 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:02:34.176652 1130949 start.go:562] Will wait 60s for crictl version
	I0328 01:02:34.176736 1130949 ssh_runner.go:195] Run: which crictl
	I0328 01:02:34.180993 1130949 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:02:34.224564 1130949 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:02:34.224675 1130949 ssh_runner.go:195] Run: crio --version
	I0328 01:02:34.254457 1130949 ssh_runner.go:195] Run: crio --version
	I0328 01:02:34.287281 1130949 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
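The sequence above points CRI-O at the registry.k8s.io/pause:3.9 pause image and the cgroupfs cgroup manager by editing /etc/crio/crio.conf.d/02-crio.conf with sed, then reloads systemd, restarts the service and confirms it with crictl. A condensed sketch of those steps, run locally as root rather than through the SSH runner the log uses:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run executes one command and returns a wrapped error including its output.
    func run(name string, args ...string) error {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
    	}
    	return nil
    }

    func main() {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	steps := [][]string{
    		{"sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|`, conf},
    		{"sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, conf},
    		{"systemctl", "daemon-reload"},
    		{"systemctl", "restart", "crio"},
    		{"crictl", "version"},
    	}
    	for _, s := range steps {
    		if err := run(s[0], s[1:]...); err != nil {
    			fmt.Println(err)
    			return
    		}
    	}
    	fmt.Println("CRI-O reconfigured")
    }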
	I0328 01:02:32.778280 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .Start
	I0328 01:02:32.778470 1131323 main.go:141] libmachine: (old-k8s-version-986088) Ensuring networks are active...
	I0328 01:02:32.779179 1131323 main.go:141] libmachine: (old-k8s-version-986088) Ensuring network default is active
	I0328 01:02:32.779577 1131323 main.go:141] libmachine: (old-k8s-version-986088) Ensuring network mk-old-k8s-version-986088 is active
	I0328 01:02:32.779982 1131323 main.go:141] libmachine: (old-k8s-version-986088) Getting domain xml...
	I0328 01:02:32.780732 1131323 main.go:141] libmachine: (old-k8s-version-986088) Creating domain...
	I0328 01:02:34.066287 1131323 main.go:141] libmachine: (old-k8s-version-986088) Waiting to get IP...
	I0328 01:02:34.067193 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.067618 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.067684 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.067586 1132067 retry.go:31] will retry after 291.270379ms: waiting for machine to come up
	I0328 01:02:34.360203 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.360690 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.360721 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.360638 1132067 retry.go:31] will retry after 234.968456ms: waiting for machine to come up
	I0328 01:02:34.597291 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.597818 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.597849 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.597750 1132067 retry.go:31] will retry after 382.522593ms: waiting for machine to come up
	I0328 01:02:34.982502 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.983176 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.983205 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.983133 1132067 retry.go:31] will retry after 436.332635ms: waiting for machine to come up
	I0328 01:02:34.288748 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:34.292122 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:34.292516 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:34.292556 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:34.292869 1130949 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0328 01:02:34.298738 1130949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:02:34.313529 1130949 kubeadm.go:877] updating cluster {Name:embed-certs-808809 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-808809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:02:34.313698 1130949 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 01:02:34.313762 1130949 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:02:34.356518 1130949 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0328 01:02:34.356614 1130949 ssh_runner.go:195] Run: which lz4
	I0328 01:02:34.361492 1130949 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0328 01:02:34.366053 1130949 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 01:02:34.366090 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0328 01:02:36.024197 1130949 crio.go:462] duration metric: took 1.662731937s to copy over tarball
	I0328 01:02:36.024287 1130949 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 01:02:35.421623 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:35.422164 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:35.422198 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:35.422135 1132067 retry.go:31] will retry after 700.861268ms: waiting for machine to come up
	I0328 01:02:36.124589 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:36.125001 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:36.125031 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:36.124948 1132067 retry.go:31] will retry after 932.342478ms: waiting for machine to come up
	I0328 01:02:37.058954 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:37.059390 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:37.059424 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:37.059332 1132067 retry.go:31] will retry after 1.163248691s: waiting for machine to come up
	I0328 01:02:38.224574 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:38.225019 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:38.225053 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:38.224959 1132067 retry.go:31] will retry after 1.13372539s: waiting for machine to come up
	I0328 01:02:39.360393 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:39.360953 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:39.360984 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:39.360906 1132067 retry.go:31] will retry after 1.793272671s: waiting for machine to come up
	I0328 01:02:38.420741 1130949 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.396415089s)
	I0328 01:02:38.420788 1130949 crio.go:469] duration metric: took 2.39655808s to extract the tarball
	I0328 01:02:38.420797 1130949 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0328 01:02:38.459869 1130949 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:02:38.505999 1130949 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 01:02:38.506030 1130949 cache_images.go:84] Images are preloaded, skipping loading
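The preload logic above asks crictl for the image list and only copies and unpacks the lz4 preload tarball when the expected kube-apiserver image is missing; after extraction the same check reports that all images are preloaded. A sketch of that check-then-extract flow (the JSON field names follow crictl's usual output and, like the image tag being checked, are assumptions here):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // hasImage checks `crictl images --output json` for a repo tag, mirroring the
    // preload check in the log.
    func hasImage(tag string) (bool, error) {
    	out, err := exec.Command("crictl", "images", "--output", "json").Output()
    	if err != nil {
    		return false, err
    	}
    	var resp struct {
    		Images []struct {
    			RepoTags []string `json:"repoTags"`
    		} `json:"images"`
    	}
    	if err := json.Unmarshal(out, &resp); err != nil {
    		return false, err
    	}
    	for _, img := range resp.Images {
    		for _, t := range img.RepoTags {
    			if strings.Contains(t, tag) {
    				return true, nil
    			}
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := hasImage("kube-apiserver:v1.29.3")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	if ok {
    		fmt.Println("all images are preloaded")
    		return
    	}
    	// Same extraction the log records: lz4-compressed tar unpacked into /var.
    	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }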
	I0328 01:02:38.506039 1130949 kubeadm.go:928] updating node { 192.168.72.210 8443 v1.29.3 crio true true} ...
	I0328 01:02:38.506185 1130949 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-808809 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-808809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:02:38.506301 1130949 ssh_runner.go:195] Run: crio config
	I0328 01:02:38.551608 1130949 cni.go:84] Creating CNI manager for ""
	I0328 01:02:38.551633 1130949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:02:38.551646 1130949 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:02:38.551673 1130949 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.210 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-808809 NodeName:embed-certs-808809 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 01:02:38.551813 1130949 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-808809"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 01:02:38.551881 1130949 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 01:02:38.562640 1130949 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:02:38.562732 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:02:38.572870 1130949 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0328 01:02:38.590866 1130949 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:02:38.608302 1130949 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0328 01:02:38.626925 1130949 ssh_runner.go:195] Run: grep 192.168.72.210	control-plane.minikube.internal$ /etc/hosts
	I0328 01:02:38.631111 1130949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.210	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:02:38.644528 1130949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:02:38.785485 1130949 ssh_runner.go:195] Run: sudo systemctl start kubelet
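The scp and systemctl steps above amount to writing the kubelet drop-in quoted at kubeadm.go:940 into /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, reloading systemd and starting kubelet. Sketched directly below; the drop-in content is copied from the log, while the file mode and error handling are assumptions:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    )

    // Drop-in content quoted from the log above.
    const dropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-808809 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.210

    [Install]
    `

    func main() {
    	dir := "/etc/systemd/system/kubelet.service.d"
    	if err := os.MkdirAll(dir, 0o755); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile(filepath.Join(dir, "10-kubeadm.conf"), []byte(dropIn), 0o644); err != nil {
    		panic(err)
    	}
    	for _, args := range [][]string{{"daemon-reload"}, {"start", "kubelet"}} {
    		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
    			fmt.Printf("systemctl %v failed: %v\n%s", args, err, out)
    			return
    		}
    	}
    	fmt.Println("kubelet drop-in installed and kubelet started")
    }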
	I0328 01:02:38.804087 1130949 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809 for IP: 192.168.72.210
	I0328 01:02:38.804113 1130949 certs.go:194] generating shared ca certs ...
	I0328 01:02:38.804132 1130949 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:02:38.804285 1130949 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:02:38.804326 1130949 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:02:38.804363 1130949 certs.go:256] generating profile certs ...
	I0328 01:02:38.804505 1130949 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/client.key
	I0328 01:02:38.804588 1130949 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/apiserver.key.bdc16448
	I0328 01:02:38.804638 1130949 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/proxy-client.key
	I0328 01:02:38.804798 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:02:38.804829 1130949 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:02:38.804836 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:02:38.804860 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:02:38.804882 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:02:38.804902 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:02:38.804943 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:02:38.805829 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:02:38.864847 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:02:38.899197 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:02:38.926734 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:02:38.958277 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0328 01:02:38.997201 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:02:39.023136 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:02:39.048459 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:02:39.074052 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:02:39.099326 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:02:39.124775 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:02:39.149638 1130949 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:02:39.169169 1130949 ssh_runner.go:195] Run: openssl version
	I0328 01:02:39.175948 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:02:39.188255 1130949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:02:39.194296 1130949 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:02:39.194374 1130949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:02:39.201138 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:02:39.213554 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:02:39.226474 1130949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:02:39.232074 1130949 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:02:39.232149 1130949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:02:39.238733 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:02:39.250983 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:02:39.263746 1130949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:02:39.268967 1130949 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:02:39.269038 1130949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:02:39.275589 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
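(Editorial sketch.) The ln -fs commands above create the <subject-hash>.0 names that OpenSSL-based clients use to find CA certificates in /etc/ssl/certs (51391683.0 for 1076522.pem, 3ec20f2e.0 for 10765222.pem, b5213941.0 for minikubeCA.pem). A stand-alone sketch of that step; the path in main is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCACert computes the OpenSSL subject hash of a PEM certificate and
// links /etc/ssl/certs/<hash>.0 at it so OpenSSL trust lookups can find it.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("install failed:", err)
	}
}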
	I0328 01:02:39.287731 1130949 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:02:39.292985 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:02:39.300366 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:02:39.307241 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:02:39.314522 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:02:39.321070 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:02:39.327777 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
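(Editorial sketch.) The -checkend 86400 probes above pass only if each control-plane certificate is still valid for at least 24 hours; a non-zero exit would trigger regeneration. A minimal stand-alone version of that check, over an illustrative subset of the paths seen in the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		// `openssl x509 -checkend N` exits 0 iff the cert stays valid for N more seconds.
		if err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run(); err != nil {
			fmt.Printf("%s expires within 24h (or is unreadable): %v\n", c, err)
		}
	}
}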
	I0328 01:02:39.334174 1130949 kubeadm.go:391] StartCluster: {Name:embed-certs-808809 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-808809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:02:39.334310 1130949 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:02:39.334367 1130949 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:02:39.376035 1130949 cri.go:89] found id: ""
	I0328 01:02:39.376145 1130949 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:02:39.387349 1130949 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:02:39.387377 1130949 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:02:39.387385 1130949 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:02:39.387469 1130949 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:02:39.397918 1130949 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:02:39.399122 1130949 kubeconfig.go:125] found "embed-certs-808809" server: "https://192.168.72.210:8443"
	I0328 01:02:39.401219 1130949 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:02:39.411475 1130949 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.210
	I0328 01:02:39.411562 1130949 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:02:39.411583 1130949 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:02:39.411650 1130949 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:02:39.449529 1130949 cri.go:89] found id: ""
	I0328 01:02:39.449638 1130949 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:02:39.468553 1130949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:02:39.479489 1130949 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:02:39.479522 1130949 kubeadm.go:156] found existing configuration files:
	
	I0328 01:02:39.479589 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:02:39.489619 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:02:39.489689 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:02:39.499726 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:02:39.509362 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:02:39.509447 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:02:39.519262 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:02:39.528858 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:02:39.528920 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:02:39.538784 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:02:39.548517 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:02:39.548593 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
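(Editorial sketch.) The grep/rm pairs above amount to: keep each kubeconfig in /etc/kubernetes only if it already points at https://control-plane.minikube.internal:8443, otherwise delete it so the next kubeadm phase regenerates it. Roughly equivalent logic, run locally here for illustration while the log performs it over SSH:

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			// Missing file or wrong server URL: remove it so `kubeadm init phase
			// kubeconfig` rewrites it against the expected endpoint.
			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Println("remove:", rmErr)
			}
		}
	}
}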
	I0328 01:02:39.559931 1130949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:02:39.574178 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:39.706243 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.342144 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.559108 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.636713 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
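(Editorial sketch.) Rather than a full kubeadm init, the restart path replays the individual init phases shown above (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged /var/tmp/minikube/kubeadm.yaml. A minimal driver for that same sequence:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		// Each phase is rerun against the same staged config, mirroring the log.
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Println("phase failed:", p, err)
			return
		}
	}
}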
	I0328 01:02:40.743171 1130949 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:02:40.743269 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:02:41.243401 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:02:41.743363 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:02:41.776504 1130949 api_server.go:72] duration metric: took 1.033329844s to wait for apiserver process to appear ...
	I0328 01:02:41.776547 1130949 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:02:41.776574 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:41.777140 1130949 api_server.go:269] stopped: https://192.168.72.210:8443/healthz: Get "https://192.168.72.210:8443/healthz": dial tcp 192.168.72.210:8443: connect: connection refused
	I0328 01:02:42.276690 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:41.156898 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:41.157309 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:41.157336 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:41.157263 1132067 retry.go:31] will retry after 1.863775673s: waiting for machine to come up
	I0328 01:02:43.023074 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:43.023470 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:43.023507 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:43.023419 1132067 retry.go:31] will retry after 2.73600503s: waiting for machine to come up
	I0328 01:02:44.743286 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:02:44.743383 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:02:44.743412 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:44.822370 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:02:44.822416 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:02:44.822436 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:44.847406 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:02:44.847462 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:02:45.276899 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:45.281884 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:02:45.281919 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:02:45.777495 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:45.783673 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:02:45.783704 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:02:46.277372 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:46.282281 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 200:
	ok
	I0328 01:02:46.291242 1130949 api_server.go:141] control plane version: v1.29.3
	I0328 01:02:46.291287 1130949 api_server.go:131] duration metric: took 4.514730698s to wait for apiserver health ...
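(Editorial sketch.) The healthz wait above polls https://192.168.72.210:8443/healthz on roughly a half-second cadence, treating connection-refused, 403 (anonymous access not yet authorized by the bootstrap roles) and 500 (post-start hooks still pending) as "not ready", and stops on the 200 "ok" response. A self-contained version of that loop; TLS verification is skipped here purely for the probe and the endpoint is the one from the log:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is not in this host's trust store; skip verification
		// for the health probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.72.210:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			fmt.Println("not ready yet, status:", resp.StatusCode)
		} else {
			fmt.Println("not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}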
	I0328 01:02:46.291301 1130949 cni.go:84] Creating CNI manager for ""
	I0328 01:02:46.291310 1130949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:02:46.293461 1130949 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:02:46.294971 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:02:46.312955 1130949 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:02:46.345653 1130949 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:02:46.355470 1130949 system_pods.go:59] 8 kube-system pods found
	I0328 01:02:46.355506 1130949 system_pods.go:61] "coredns-76f75df574-pr5d8" [90a6f3d5-6f33-4c41-804b-4b20c518aa23] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:02:46.355512 1130949 system_pods.go:61] "etcd-embed-certs-808809" [93b6b8ee-f83f-4848-b2c5-912ec07acd52] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 01:02:46.355519 1130949 system_pods.go:61] "kube-apiserver-embed-certs-808809" [22eb788f-4647-4a07-b5bf-ecdd54c28fcd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 01:02:46.355530 1130949 system_pods.go:61] "kube-controller-manager-embed-certs-808809" [83fecd9f-c0de-4afe-b5b5-7c04bd3adc20] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 01:02:46.355545 1130949 system_pods.go:61] "kube-proxy-qwzpg" [57a814c6-54c8-4fa7-b7d7-bcdd4bbc91d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:02:46.355553 1130949 system_pods.go:61] "kube-scheduler-embed-certs-808809" [0b229d84-43fb-45ee-8d49-39204812d490] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 01:02:46.355568 1130949 system_pods.go:61] "metrics-server-57f55c9bc5-swsxp" [4b20e133-3054-4806-9b7f-44d8c8c35a4d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:02:46.355580 1130949 system_pods.go:61] "storage-provisioner" [59303061-19e3-4aed-8753-804988a2a44e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:02:46.355590 1130949 system_pods.go:74] duration metric: took 9.908316ms to wait for pod list to return data ...
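(Editorial sketch.) The kube-system pod summary above is a plain pod list plus a readiness check on each pod's conditions. A minimal client-go version of the same query; the kubeconfig path is the one staged earlier in this log and is illustrative here:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s phase=%s ready=%v\n", p.Name, p.Status.Phase, ready)
	}
}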
	I0328 01:02:46.355603 1130949 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:02:46.358936 1130949 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:02:46.358987 1130949 node_conditions.go:123] node cpu capacity is 2
	I0328 01:02:46.359006 1130949 node_conditions.go:105] duration metric: took 3.394695ms to run NodePressure ...
	I0328 01:02:46.359054 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:46.686479 1130949 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 01:02:46.692502 1130949 kubeadm.go:733] kubelet initialised
	I0328 01:02:46.692526 1130949 kubeadm.go:734] duration metric: took 6.022393ms waiting for restarted kubelet to initialise ...
	I0328 01:02:46.692534 1130949 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:02:46.699146 1130949 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-pr5d8" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:45.762440 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:45.762891 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:45.762915 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:45.762845 1132067 retry.go:31] will retry after 2.201941476s: waiting for machine to come up
	I0328 01:02:47.966601 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:47.967196 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:47.967237 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:47.967144 1132067 retry.go:31] will retry after 4.122216816s: waiting for machine to come up
	I0328 01:02:48.709890 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:51.207697 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:53.391471 1131600 start.go:364] duration metric: took 2m47.603687739s to acquireMachinesLock for "default-k8s-diff-port-283961"
	I0328 01:02:53.391553 1131600 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:02:53.391565 1131600 fix.go:54] fixHost starting: 
	I0328 01:02:53.391980 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:02:53.392031 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:02:53.409035 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34073
	I0328 01:02:53.409556 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:02:53.410105 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:02:53.410136 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:02:53.410492 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:02:53.410734 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:02:53.410903 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:02:53.412710 1131600 fix.go:112] recreateIfNeeded on default-k8s-diff-port-283961: state=Stopped err=<nil>
	I0328 01:02:53.412739 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	W0328 01:02:53.412927 1131600 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:02:53.414773 1131600 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-283961" ...
	I0328 01:02:52.091210 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.091759 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has current primary IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.091794 1131323 main.go:141] libmachine: (old-k8s-version-986088) Found IP for machine: 192.168.50.174
	I0328 01:02:52.091841 1131323 main.go:141] libmachine: (old-k8s-version-986088) Reserving static IP address...
	I0328 01:02:52.092295 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "old-k8s-version-986088", mac: "52:54:00:f6:94:40", ip: "192.168.50.174"} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.092321 1131323 main.go:141] libmachine: (old-k8s-version-986088) Reserved static IP address: 192.168.50.174
	I0328 01:02:52.092343 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | skip adding static IP to network mk-old-k8s-version-986088 - found existing host DHCP lease matching {name: "old-k8s-version-986088", mac: "52:54:00:f6:94:40", ip: "192.168.50.174"}
	I0328 01:02:52.092356 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | Getting to WaitForSSH function...
	I0328 01:02:52.092373 1131323 main.go:141] libmachine: (old-k8s-version-986088) Waiting for SSH to be available...
	I0328 01:02:52.094682 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.095012 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.095033 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.095158 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | Using SSH client type: external
	I0328 01:02:52.095180 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa (-rw-------)
	I0328 01:02:52.095208 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:02:52.095218 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | About to run SSH command:
	I0328 01:02:52.095232 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | exit 0
	I0328 01:02:52.218494 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | SSH cmd err, output: <nil>: 
	I0328 01:02:52.218983 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetConfigRaw
	I0328 01:02:52.219663 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:52.222349 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.222791 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.222823 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.223191 1131323 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/config.json ...
	I0328 01:02:52.223388 1131323 machine.go:94] provisionDockerMachine start ...
	I0328 01:02:52.223409 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:52.223605 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.225686 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.225999 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.226038 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.226131 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.226341 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.226507 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.226633 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.226802 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.227078 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.227095 1131323 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:02:52.327218 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:02:52.327249 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 01:02:52.327515 1131323 buildroot.go:166] provisioning hostname "old-k8s-version-986088"
	I0328 01:02:52.327542 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 01:02:52.327754 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.330253 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.330661 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.330691 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.330827 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.331048 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.331258 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.331406 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.331593 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.331772 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.331783 1131323 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-986088 && echo "old-k8s-version-986088" | sudo tee /etc/hostname
	I0328 01:02:52.445910 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-986088
	
	I0328 01:02:52.445943 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.449023 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.449320 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.449358 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.449595 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.449810 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.449970 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.450116 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.450310 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.450572 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.450640 1131323 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-986088' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-986088/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-986088' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:02:52.567493 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:02:52.567529 1131323 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:02:52.567559 1131323 buildroot.go:174] setting up certificates
	I0328 01:02:52.567573 1131323 provision.go:84] configureAuth start
	I0328 01:02:52.567587 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 01:02:52.567944 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:52.570860 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.571363 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.571400 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.571547 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.574052 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.574483 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.574517 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.574619 1131323 provision.go:143] copyHostCerts
	I0328 01:02:52.574698 1131323 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:02:52.574710 1131323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:02:52.574778 1131323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:02:52.574894 1131323 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:02:52.574908 1131323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:02:52.574985 1131323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:02:52.575086 1131323 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:02:52.575095 1131323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:02:52.575117 1131323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:02:52.575194 1131323 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-986088 san=[127.0.0.1 192.168.50.174 localhost minikube old-k8s-version-986088]
	I0328 01:02:52.688709 1131323 provision.go:177] copyRemoteCerts
	I0328 01:02:52.688776 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:02:52.688809 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.691529 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.691977 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.692024 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.692188 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.692425 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.692620 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.692774 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:52.777200 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0328 01:02:52.808740 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:02:52.836646 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:02:52.862627 1131323 provision.go:87] duration metric: took 295.032419ms to configureAuth
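(Editorial sketch.) configureAuth above regenerates the machine's TLS server certificate with the SANs listed in the log (127.0.0.1, 192.168.50.174, localhost, minikube, old-k8s-version-986088), signed by the shared CA under .minikube/certs. A compact illustration of issuing such a SAN-bearing certificate from an existing CA key pair; paths are placeholders, the key is assumed RSA/PKCS#1, and error handling is largely elided:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load CA cert and key (PEM; nil checks omitted for brevity in this sketch).
	caPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

	// Fresh key for the server certificate.
	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-986088"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log line above.
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-986088"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.174")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}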
	I0328 01:02:52.862668 1131323 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:02:52.862908 1131323 config.go:182] Loaded profile config "old-k8s-version-986088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0328 01:02:52.863019 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.865838 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.866585 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.866630 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.867271 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.867521 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.867687 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.867826 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.867961 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.868176 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.868194 1131323 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:02:53.154903 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:02:53.154936 1131323 machine.go:97] duration metric: took 931.534047ms to provisionDockerMachine
	I0328 01:02:53.154949 1131323 start.go:293] postStartSetup for "old-k8s-version-986088" (driver="kvm2")
	I0328 01:02:53.154961 1131323 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:02:53.154997 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.155353 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:02:53.155386 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.158072 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.158448 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.158482 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.158612 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.158825 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.158974 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.159102 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:53.243411 1131323 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:02:53.247745 1131323 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:02:53.247769 1131323 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:02:53.247830 1131323 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:02:53.247903 1131323 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:02:53.247990 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:02:53.258574 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:02:53.284249 1131323 start.go:296] duration metric: took 129.2844ms for postStartSetup
	I0328 01:02:53.284300 1131323 fix.go:56] duration metric: took 20.532468979s for fixHost
	I0328 01:02:53.284324 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.287097 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.287505 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.287534 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.287642 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.287874 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.288039 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.288225 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.288439 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:53.288601 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:53.288612 1131323 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0328 01:02:53.391262 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587773.373998758
	
	I0328 01:02:53.391292 1131323 fix.go:216] guest clock: 1711587773.373998758
	I0328 01:02:53.391299 1131323 fix.go:229] Guest: 2024-03-28 01:02:53.373998758 +0000 UTC Remote: 2024-03-28 01:02:53.284304642 +0000 UTC m=+227.998260980 (delta=89.694116ms)
	I0328 01:02:53.391341 1131323 fix.go:200] guest clock delta is within tolerance: 89.694116ms
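The guest-clock check above parses the guest's "date +%s.%N" output and compares it with the host's wall clock, letting the start continue only while the delta stays inside a small tolerance. A minimal Go sketch of that comparison follows; the 2-second tolerance is an assumed illustrative value, not necessarily the threshold minikube itself uses.

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Guest timestamp parsed from the "date +%s.%N" output recorded in the log above.
		guest := time.Unix(1711587773, 373998758)
		host := time.Now()
		delta := host.Sub(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed threshold, for illustration only
		if delta <= tolerance {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, guest clock would be resynced\n", delta)
		}
	}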
	I0328 01:02:53.391346 1131323 start.go:83] releasing machines lock for "old-k8s-version-986088", held for 20.639550927s
	I0328 01:02:53.391377 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.391728 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:53.394421 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.394780 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.394811 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.394932 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.395449 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.395729 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.395828 1131323 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:02:53.395883 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.395985 1131323 ssh_runner.go:195] Run: cat /version.json
	I0328 01:02:53.396014 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.398819 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399010 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399281 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.399320 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399451 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.399550 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.399620 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399640 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.399880 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.399902 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.400065 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.400081 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:53.400245 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.400445 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:53.514453 1131323 ssh_runner.go:195] Run: systemctl --version
	I0328 01:02:53.521123 1131323 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:02:53.678366 1131323 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:02:53.685402 1131323 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:02:53.685473 1131323 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:02:53.702781 1131323 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:02:53.702816 1131323 start.go:494] detecting cgroup driver to use...
	I0328 01:02:53.702900 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:02:53.720343 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:02:53.736749 1131323 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:02:53.736824 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:02:53.761087 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:02:53.779008 1131323 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:02:53.895064 1131323 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:02:54.060741 1131323 docker.go:233] disabling docker service ...
	I0328 01:02:54.060825 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:02:54.079139 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:02:54.093523 1131323 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:02:54.247544 1131323 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:02:54.396392 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:02:54.422612 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:02:54.443759 1131323 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0328 01:02:54.443817 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.459794 1131323 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:02:54.459875 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.472784 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.484963 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.496654 1131323 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:02:54.508382 1131323 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:02:54.518607 1131323 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:02:54.518687 1131323 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:02:54.532356 1131323 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:02:54.544424 1131323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:02:54.685782 1131323 ssh_runner.go:195] Run: sudo systemctl restart crio
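The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the registry.k8s.io/pause:3.2 pause image and the cgroupfs cgroup manager before the service is restarted. A minimal sketch of the same rewrite in Go, assuming direct file access on the guest rather than the SSH-and-sed path the log takes:

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		// Same substitutions the log performs with sed: pin the pause image and the cgroup manager.
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(conf, data, 0o644); err != nil {
			panic(err)
		}
	}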
	I0328 01:02:54.847233 1131323 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:02:54.847314 1131323 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:02:54.853148 1131323 start.go:562] Will wait 60s for crictl version
	I0328 01:02:54.853248 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:02:54.857536 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:02:54.901937 1131323 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:02:54.902082 1131323 ssh_runner.go:195] Run: crio --version
	I0328 01:02:54.935571 1131323 ssh_runner.go:195] Run: crio --version
	I0328 01:02:54.971452 1131323 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0328 01:02:54.972964 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:54.976523 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:54.976985 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:54.977017 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:54.977369 1131323 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0328 01:02:54.982326 1131323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:02:54.996239 1131323 kubeadm.go:877] updating cluster {Name:old-k8s-version-986088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:02:54.996371 1131323 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0328 01:02:54.996433 1131323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:02:55.045404 1131323 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0328 01:02:55.045483 1131323 ssh_runner.go:195] Run: which lz4
	I0328 01:02:55.050226 1131323 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0328 01:02:55.055182 1131323 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 01:02:55.055221 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0328 01:02:53.416101 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Start
	I0328 01:02:53.416332 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Ensuring networks are active...
	I0328 01:02:53.417021 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Ensuring network default is active
	I0328 01:02:53.417446 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Ensuring network mk-default-k8s-diff-port-283961 is active
	I0328 01:02:53.417857 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Getting domain xml...
	I0328 01:02:53.418555 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Creating domain...
	I0328 01:02:54.777201 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting to get IP...
	I0328 01:02:54.778055 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:54.778563 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:54.778705 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:54.778537 1132240 retry.go:31] will retry after 259.031702ms: waiting for machine to come up
	I0328 01:02:55.039365 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.039926 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.039963 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:55.039860 1132240 retry.go:31] will retry after 254.124553ms: waiting for machine to come up
	I0328 01:02:55.295658 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.296218 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.296265 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:55.296174 1132240 retry.go:31] will retry after 349.637234ms: waiting for machine to come up
	I0328 01:02:55.647590 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.648356 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.648392 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:55.648298 1132240 retry.go:31] will retry after 446.471208ms: waiting for machine to come up
	I0328 01:02:53.707811 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:55.708380 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:57.213059 1130949 pod_ready.go:92] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:57.213097 1130949 pod_ready.go:81] duration metric: took 10.513921238s for pod "coredns-76f75df574-pr5d8" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.213113 1130949 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.222308 1130949 pod_ready.go:92] pod "etcd-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:57.222344 1130949 pod_ready.go:81] duration metric: took 9.214056ms for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.222357 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.231530 1130949 pod_ready.go:92] pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:57.231558 1130949 pod_ready.go:81] duration metric: took 9.192864ms for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.231568 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:56.994163 1131323 crio.go:462] duration metric: took 1.943992561s to copy over tarball
	I0328 01:02:56.994252 1131323 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 01:03:00.215115 1131323 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.220825311s)
	I0328 01:03:00.215159 1131323 crio.go:469] duration metric: took 3.22095583s to extract the tarball
	I0328 01:03:00.215171 1131323 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0328 01:03:00.259151 1131323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:00.298446 1131323 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0328 01:03:00.298492 1131323 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0328 01:03:00.298585 1131323 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:00.298601 1131323 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.298613 1131323 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.298644 1131323 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.298662 1131323 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.298698 1131323 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0328 01:03:00.298593 1131323 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.298585 1131323 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.300347 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.300424 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.300470 1131323 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.300474 1131323 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.300637 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.300652 1131323 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0328 01:03:00.300723 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.300793 1131323 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:02:56.095939 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.096463 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.096501 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:56.096412 1132240 retry.go:31] will retry after 490.029649ms: waiting for machine to come up
	I0328 01:02:56.588298 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.588835 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.588868 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:56.588796 1132240 retry.go:31] will retry after 831.356628ms: waiting for machine to come up
	I0328 01:02:57.421917 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:57.422410 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:57.422443 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:57.422353 1132240 retry.go:31] will retry after 1.164764985s: waiting for machine to come up
	I0328 01:02:58.588827 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:58.589183 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:58.589225 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:58.589119 1132240 retry.go:31] will retry after 1.307248783s: waiting for machine to come up
	I0328 01:02:59.897607 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:59.897976 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:59.898008 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:59.897926 1132240 retry.go:31] will retry after 1.560958271s: waiting for machine to come up
	I0328 01:02:58.241179 1130949 pod_ready.go:92] pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:58.241216 1130949 pod_ready.go:81] duration metric: took 1.00963904s for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.241245 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qwzpg" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.249787 1130949 pod_ready.go:92] pod "kube-proxy-qwzpg" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:58.249826 1130949 pod_ready.go:81] duration metric: took 8.571225ms for pod "kube-proxy-qwzpg" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.249840 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.405101 1130949 pod_ready.go:92] pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:58.405130 1130949 pod_ready.go:81] duration metric: took 155.281142ms for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.405141 1130949 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:00.412202 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:02.412688 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:00.499788 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0328 01:03:00.539135 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.541462 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.544184 1131323 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0328 01:03:00.544227 1131323 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0328 01:03:00.544261 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.555720 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.560189 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.562639 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.574105 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.681717 1131323 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0328 01:03:00.681742 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0328 01:03:00.681765 1131323 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.681803 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.682033 1131323 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0328 01:03:00.682076 1131323 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.682115 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.732868 1131323 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0328 01:03:00.732922 1131323 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.732988 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.742680 1131323 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0328 01:03:00.742730 1131323 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0328 01:03:00.742762 1131323 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.742777 1131323 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0328 01:03:00.742805 1131323 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.742770 1131323 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.742817 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.742851 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.742865 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.770435 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.770472 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0328 01:03:00.770567 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.770588 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.770727 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.770760 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.770728 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.882338 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0328 01:03:00.896602 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0328 01:03:00.918814 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0328 01:03:00.918869 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0328 01:03:00.918919 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0328 01:03:00.918968 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0328 01:03:01.186124 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:01.334547 1131323 cache_images.go:92] duration metric: took 1.036031169s to LoadCachedImages
	W0328 01:03:01.334676 1131323 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0328 01:03:01.334694 1131323 kubeadm.go:928] updating node { 192.168.50.174 8443 v1.20.0 crio true true} ...
	I0328 01:03:01.334827 1131323 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-986088 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:03:01.334926 1131323 ssh_runner.go:195] Run: crio config
	I0328 01:03:01.391004 1131323 cni.go:84] Creating CNI manager for ""
	I0328 01:03:01.391034 1131323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:01.391054 1131323 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:03:01.391081 1131323 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.174 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-986088 NodeName:old-k8s-version-986088 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0328 01:03:01.391265 1131323 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-986088"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 01:03:01.391347 1131323 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0328 01:03:01.403684 1131323 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:03:01.403779 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:03:01.415168 1131323 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0328 01:03:01.434329 1131323 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:03:01.456280 1131323 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0328 01:03:01.476625 1131323 ssh_runner.go:195] Run: grep 192.168.50.174	control-plane.minikube.internal$ /etc/hosts
	I0328 01:03:01.480867 1131323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:01.493833 1131323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:01.642273 1131323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:03:01.661857 1131323 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088 for IP: 192.168.50.174
	I0328 01:03:01.661887 1131323 certs.go:194] generating shared ca certs ...
	I0328 01:03:01.661909 1131323 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:01.662115 1131323 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:03:01.662174 1131323 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:03:01.662188 1131323 certs.go:256] generating profile certs ...
	I0328 01:03:01.662324 1131323 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/client.key
	I0328 01:03:01.662399 1131323 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.key.b88fbc7e
	I0328 01:03:01.662447 1131323 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.key
	I0328 01:03:01.662600 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:03:01.662656 1131323 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:03:01.662672 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:03:01.662703 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:03:01.662738 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:03:01.662774 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:03:01.662826 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:01.663831 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:03:01.697171 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:03:01.742118 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:03:01.783263 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:03:01.831682 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0328 01:03:01.878051 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:03:01.915626 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:03:01.942247 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:03:01.969054 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:03:01.998651 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:03:02.024881 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:03:02.051284 1131323 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:03:02.070414 1131323 ssh_runner.go:195] Run: openssl version
	I0328 01:03:02.076635 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:03:02.089288 1131323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:03:02.094260 1131323 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:03:02.094322 1131323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:03:02.100846 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:03:02.114474 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:03:02.126467 1131323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:02.131240 1131323 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:02.131293 1131323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:02.137496 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 01:03:02.150863 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:03:02.163536 1131323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:03:02.168767 1131323 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:03:02.168850 1131323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:03:02.175218 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
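The openssl x509 -hash invocations above print each CA certificate's subject-name hash, and the ln -fs commands that follow link the PEM file as /etc/ssl/certs/<hash>.0 so OpenSSL's hash-directory lookup can find it (b5213941.0 for minikubeCA.pem, for example). A small sketch of deriving that symlink name, assuming the openssl binary is on PATH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		const pem = "/usr/share/ca-certificates/minikubeCA.pem"
		// "openssl x509 -hash -noout" prints the subject-name hash used to name the symlink.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
		fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", pem, hash)
	}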
	I0328 01:03:02.188272 1131323 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:03:02.193348 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:03:02.199969 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:03:02.206424 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:03:02.213530 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:03:02.220136 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:03:02.226502 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
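The -checkend 86400 probes above make openssl exit non-zero when a certificate would expire within the next 86400 seconds (24 hours), which is how the restart decides whether control-plane certificates still need regeneration. A sketch of interpreting that exit status:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cert := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
		// Exit status 0 means the certificate is still valid 86400 seconds (24h) from now.
		if err := exec.Command("openssl", "x509", "-noout", "-in", cert, "-checkend", "86400").Run(); err != nil {
			fmt.Println("certificate expires within 24h, would be regenerated:", err)
			return
		}
		fmt.Println("certificate still valid for at least another 24h")
	}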
	I0328 01:03:02.232708 1131323 kubeadm.go:391] StartCluster: {Name:old-k8s-version-986088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:03:02.232831 1131323 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:03:02.232890 1131323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:02.280062 1131323 cri.go:89] found id: ""
	I0328 01:03:02.280160 1131323 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:03:02.291968 1131323 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:03:02.292003 1131323 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:03:02.292011 1131323 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:03:02.292072 1131323 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:03:02.304006 1131323 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:03:02.305105 1131323 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-986088" does not appear in /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:03:02.305785 1131323 kubeconfig.go:62] /home/jenkins/minikube-integration/18485-1069254/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-986088" cluster setting kubeconfig missing "old-k8s-version-986088" context setting]
	I0328 01:03:02.306728 1131323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:02.308610 1131323 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:03:02.320212 1131323 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.174
	I0328 01:03:02.320265 1131323 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:03:02.320283 1131323 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:03:02.320356 1131323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:02.366411 1131323 cri.go:89] found id: ""
	I0328 01:03:02.366500 1131323 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:03:02.388351 1131323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:03:02.402621 1131323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:03:02.402652 1131323 kubeadm.go:156] found existing configuration files:
	
	I0328 01:03:02.402718 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:03:02.415559 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:03:02.415633 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:03:02.426666 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:03:02.439497 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:03:02.439558 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:03:02.451040 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:03:02.461780 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:03:02.461876 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:03:02.473295 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:03:02.484762 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:03:02.484841 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
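(Editor's note, not part of the log.) The four grep/rm pairs above are a stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise removed so kubeadm can regenerate it. Below is a minimal Go sketch of the same idea; the sweepStaleKubeconfigs helper and its local exec calls are illustrative assumptions, not minikube's actual implementation (which runs these commands over SSH).

package main

import (
	"fmt"
	"os/exec"
)

// sweepStaleKubeconfigs removes any kubeconfig that does not reference the
// expected control-plane endpoint, mirroring the grep/rm pairs in the log.
// Hypothetical helper for illustration only.
func sweepStaleKubeconfigs(endpoint string) error {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is missing or the file does not exist.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
			if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
				return fmt.Errorf("rm %s: %w", f, err)
			}
		}
	}
	return nil
}

func main() {
	if err := sweepStaleKubeconfigs("https://control-plane.minikube.internal:8443"); err != nil {
		fmt.Println("cleanup failed:", err)
	}
}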
	I0328 01:03:02.496304 1131323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:03:02.507634 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:02.641980 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:03.598106 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:03.840026 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:03.970336 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:04.067774 1131323 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:03:04.067911 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:04.568260 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:05.068794 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
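(Editor's note, not part of the log.) After the kubeadm init phases complete, the repeated pgrep runs above and below are a poll loop: minikube waits for a kube-apiserver process to appear before probing the API itself. A rough Go equivalent of that polling pattern is sketched here; the 500ms interval matches the log cadence, while the 2-minute budget and waitForAPIServerProcess name are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a matching kube-apiserver process
// appears or the timeout elapses. Interval and timeout are illustrative values.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
		if err := cmd.Run(); err == nil {
			return nil // pgrep exited 0: a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}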
	I0328 01:03:01.460535 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:01.461008 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:01.461039 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:01.460962 1132240 retry.go:31] will retry after 1.839531745s: waiting for machine to come up
	I0328 01:03:03.302965 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:03.303445 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:03.303479 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:03.303387 1132240 retry.go:31] will retry after 2.461748315s: waiting for machine to come up
	I0328 01:03:04.413898 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:06.913608 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:05.568716 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:06.068362 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:06.568235 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:07.068696 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:07.567976 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:08.068032 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:08.568586 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:09.068046 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:09.568699 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:10.067967 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:05.767795 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:05.768329 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:05.768360 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:05.768279 1132240 retry.go:31] will retry after 2.321291255s: waiting for machine to come up
	I0328 01:03:08.092644 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:08.093094 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:08.093131 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:08.093046 1132240 retry.go:31] will retry after 4.151205276s: waiting for machine to come up
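(Editor's note, not part of the log.) While the old-k8s-version node restarts, the default-k8s-diff-port machine is still waiting for a DHCP lease; each failed IP lookup above schedules another attempt after a progressively longer delay ("will retry after ..."). The Go sketch below shows that retry-with-backoff shape under assumed bounds; lookupIP, the backoff limits, and the jitter are placeholders, not minikube's retry.go internals.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying libvirt's DHCP leases; it is a placeholder
// that fails until some external condition is met.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries lookupIP with a growing, jittered delay, loosely matching
// the "will retry after ..." lines in the log. Bounds are assumptions.
func waitForIP(maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := time.Second
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("waiting for machine to come up, retrying after %s\n", wait)
		time.Sleep(wait)
		if delay < 8*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("machine did not get an IP within %s", maxWait)
}

func main() {
	if _, err := waitForIP(3 * time.Minute); err != nil {
		fmt.Println(err)
	}
}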
	I0328 01:03:09.413199 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:11.912234 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:13.671756 1130827 start.go:364] duration metric: took 54.966750689s to acquireMachinesLock for "no-preload-248059"
	I0328 01:03:13.671815 1130827 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:03:13.671823 1130827 fix.go:54] fixHost starting: 
	I0328 01:03:13.672255 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:03:13.672292 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:03:13.689811 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42017
	I0328 01:03:13.690364 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:03:13.690817 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:03:13.690843 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:03:13.691213 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:03:13.691395 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:13.691523 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:03:13.693093 1130827 fix.go:112] recreateIfNeeded on no-preload-248059: state=Stopped err=<nil>
	I0328 01:03:13.693123 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	W0328 01:03:13.693280 1130827 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:03:13.695158 1130827 out.go:177] * Restarting existing kvm2 VM for "no-preload-248059" ...
	I0328 01:03:10.568240 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:11.068028 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:11.568146 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:12.068467 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:12.568820 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:13.068031 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:13.568977 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:14.068050 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:14.567938 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:15.068711 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:12.248769 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.249440 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Found IP for machine: 192.168.39.224
	I0328 01:03:12.249467 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Reserving static IP address...
	I0328 01:03:12.249498 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has current primary IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.249832 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-283961", mac: "52:54:00:c4:df:6f", ip: "192.168.39.224"} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.249872 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | skip adding static IP to network mk-default-k8s-diff-port-283961 - found existing host DHCP lease matching {name: "default-k8s-diff-port-283961", mac: "52:54:00:c4:df:6f", ip: "192.168.39.224"}
	I0328 01:03:12.249888 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Reserved static IP address: 192.168.39.224
	I0328 01:03:12.249908 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for SSH to be available...
	I0328 01:03:12.249921 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Getting to WaitForSSH function...
	I0328 01:03:12.252053 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.252487 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.252521 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.252646 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Using SSH client type: external
	I0328 01:03:12.252677 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa (-rw-------)
	I0328 01:03:12.252709 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.224 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:03:12.252731 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | About to run SSH command:
	I0328 01:03:12.252750 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | exit 0
	I0328 01:03:12.378419 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | SSH cmd err, output: <nil>: 
	I0328 01:03:12.378866 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetConfigRaw
	I0328 01:03:12.379659 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:12.382631 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.382997 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.383023 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.383276 1131600 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/config.json ...
	I0328 01:03:12.383534 1131600 machine.go:94] provisionDockerMachine start ...
	I0328 01:03:12.383567 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:12.383805 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.386472 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.386839 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.386870 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.387035 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.387240 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.387399 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.387577 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.387729 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:12.387931 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:12.387943 1131600 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:03:12.499608 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:03:12.499644 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetMachineName
	I0328 01:03:12.499930 1131600 buildroot.go:166] provisioning hostname "default-k8s-diff-port-283961"
	I0328 01:03:12.499962 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetMachineName
	I0328 01:03:12.500154 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.502737 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.503089 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.503120 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.503295 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.503516 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.503725 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.503892 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.504093 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:12.504271 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:12.504285 1131600 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-283961 && echo "default-k8s-diff-port-283961" | sudo tee /etc/hostname
	I0328 01:03:12.625590 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-283961
	
	I0328 01:03:12.625624 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.628570 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.628883 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.628968 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.629143 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.629397 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.629627 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.629825 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.630008 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:12.630191 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:12.630210 1131600 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-283961' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-283961/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-283961' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:03:12.744240 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:03:12.744280 1131600 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:03:12.744327 1131600 buildroot.go:174] setting up certificates
	I0328 01:03:12.744342 1131600 provision.go:84] configureAuth start
	I0328 01:03:12.744361 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetMachineName
	I0328 01:03:12.744722 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:12.747139 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.747448 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.747478 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.747582 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.749705 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.749964 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.749995 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.750125 1131600 provision.go:143] copyHostCerts
	I0328 01:03:12.750203 1131600 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:03:12.750217 1131600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:03:12.750323 1131600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:03:12.750435 1131600 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:03:12.750446 1131600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:03:12.750479 1131600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:03:12.750557 1131600 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:03:12.750567 1131600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:03:12.750599 1131600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:03:12.750670 1131600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-283961 san=[127.0.0.1 192.168.39.224 default-k8s-diff-port-283961 localhost minikube]
	I0328 01:03:12.963182 1131600 provision.go:177] copyRemoteCerts
	I0328 01:03:12.963265 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:03:12.963313 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.965946 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.966177 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.966207 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.966347 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.966573 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.966773 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.966934 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.057477 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:03:13.083706 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0328 01:03:13.109167 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:03:13.136835 1131600 provision.go:87] duration metric: took 392.475069ms to configureAuth
	I0328 01:03:13.136867 1131600 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:03:13.137048 1131600 config.go:182] Loaded profile config "default-k8s-diff-port-283961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:03:13.137131 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.139508 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.139761 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.139792 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.139959 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.140170 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.140343 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.140502 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.140685 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:13.140873 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:13.140897 1131600 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:03:13.422372 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:03:13.422405 1131600 machine.go:97] duration metric: took 1.038857021s to provisionDockerMachine
	I0328 01:03:13.422418 1131600 start.go:293] postStartSetup for "default-k8s-diff-port-283961" (driver="kvm2")
	I0328 01:03:13.422428 1131600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:03:13.422456 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.422788 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:03:13.422819 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.425539 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.425865 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.425894 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.426023 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.426225 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.426407 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.426577 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.511874 1131600 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:03:13.516643 1131600 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:03:13.516673 1131600 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:03:13.516749 1131600 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:03:13.516846 1131600 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:03:13.516969 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:03:13.529004 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:13.557244 1131600 start.go:296] duration metric: took 134.810243ms for postStartSetup
	I0328 01:03:13.557289 1131600 fix.go:56] duration metric: took 20.165726422s for fixHost
	I0328 01:03:13.557313 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.560216 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.560585 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.560623 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.560803 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.561050 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.561188 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.561303 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.561552 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:13.561742 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:13.561757 1131600 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 01:03:13.671545 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587793.617322674
	
	I0328 01:03:13.671570 1131600 fix.go:216] guest clock: 1711587793.617322674
	I0328 01:03:13.671578 1131600 fix.go:229] Guest: 2024-03-28 01:03:13.617322674 +0000 UTC Remote: 2024-03-28 01:03:13.55729386 +0000 UTC m=+187.934897846 (delta=60.028814ms)
	I0328 01:03:13.671632 1131600 fix.go:200] guest clock delta is within tolerance: 60.028814ms
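(Editor's note, not part of the log.) The guest-clock check above compares the output of `date +%s.%N` on the VM against the host's current time and accepts the drift when the absolute delta stays under a tolerance (60ms here). A compact Go illustration of that comparison follows; the one-second tolerance and the simulated 60ms offset are assumed values for the example, not minikube's configured threshold.

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance reports whether host and guest clocks agree to
// within the given tolerance, mirroring the guest-clock check in the log.
func clockDeltaWithinTolerance(host, guest time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Guest time as reported by `date +%s.%N`, e.g. 1711587793.617322674.
	guest := time.Unix(1711587793, 617322674)
	host := guest.Add(60 * time.Millisecond) // pretend the host clock is 60ms ahead
	delta, ok := clockDeltaWithinTolerance(host, guest, time.Second)
	fmt.Printf("delta=%s within tolerance=%v\n", delta, ok)
}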
	I0328 01:03:13.671642 1131600 start.go:83] releasing machines lock for "default-k8s-diff-port-283961", held for 20.280118311s
	I0328 01:03:13.671673 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.671976 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:13.674978 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.675384 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.675436 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.675562 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.676167 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.676337 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.676436 1131600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:03:13.676501 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.676557 1131600 ssh_runner.go:195] Run: cat /version.json
	I0328 01:03:13.676578 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.679418 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679452 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679758 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.679785 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679813 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.679832 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679986 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.680089 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.680190 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.680255 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.680345 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.680410 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.680517 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.680608 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.759826 1131600 ssh_runner.go:195] Run: systemctl --version
	I0328 01:03:13.796647 1131600 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:03:13.947036 1131600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:03:13.954165 1131600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:03:13.954265 1131600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:03:13.973503 1131600 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:03:13.973538 1131600 start.go:494] detecting cgroup driver to use...
	I0328 01:03:13.973629 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:03:13.997675 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:03:14.015349 1131600 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:03:14.015421 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:03:14.031099 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:03:14.046446 1131600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:03:14.186993 1131600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:03:14.351164 1131600 docker.go:233] disabling docker service ...
	I0328 01:03:14.351232 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:03:14.370629 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:03:14.387837 1131600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:03:14.544060 1131600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:03:14.707699 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:03:14.725658 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:03:14.746063 1131600 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 01:03:14.746141 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.759244 1131600 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:03:14.759317 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.773015 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.786810 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.807101 1131600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:03:14.821013 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.834181 1131600 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.861163 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
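(Editor's note, not part of the log.) The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, force conmon into the pod cgroup, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. The Go sketch below strings the same edits together; it is illustrative only, with simplified error handling, and assumes it runs directly on the node rather than over SSH as minikube does.

package main

import (
	"fmt"
	"os/exec"
)

// configureCrio applies the same in-place edits to the CRI-O drop-in config
// that the log shows minikube running via sed. Paths and values are taken
// from the log; the helper itself is hypothetical.
func configureCrio(pauseImage, cgroupManager string) error {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	edits := []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		fmt.Sprintf(`sudo grep -q "^ *default_sysctls" %s || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' %s`, conf, conf),
		fmt.Sprintf(`sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' %s`, conf),
	}
	for _, e := range edits {
		if out, err := exec.Command("/bin/bash", "-c", e).CombinedOutput(); err != nil {
			return fmt.Errorf("%q failed: %v: %s", e, err, out)
		}
	}
	return nil
}

func main() {
	if err := configureCrio("registry.k8s.io/pause:3.9", "cgroupfs"); err != nil {
		fmt.Println(err)
	}
}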
	I0328 01:03:14.874274 1131600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:03:14.885890 1131600 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:03:14.885968 1131600 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:03:14.903142 1131600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:03:14.916364 1131600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:15.073343 1131600 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 01:03:15.218406 1131600 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:03:15.218500 1131600 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:03:15.226299 1131600 start.go:562] Will wait 60s for crictl version
	I0328 01:03:15.226373 1131600 ssh_runner.go:195] Run: which crictl
	I0328 01:03:15.232051 1131600 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:03:15.278793 1131600 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:03:15.278903 1131600 ssh_runner.go:195] Run: crio --version
	I0328 01:03:15.313408 1131600 ssh_runner.go:195] Run: crio --version
	I0328 01:03:15.351613 1131600 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0328 01:03:15.353013 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:15.355924 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:15.356306 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:15.356341 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:15.356555 1131600 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0328 01:03:15.361194 1131600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:15.380926 1131600 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:03:15.381043 1131600 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 01:03:15.381099 1131600 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:15.423322 1131600 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0328 01:03:15.423409 1131600 ssh_runner.go:195] Run: which lz4
	I0328 01:03:15.428123 1131600 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0328 01:03:15.433023 1131600 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 01:03:15.433065 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0328 01:03:13.696314 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Start
	I0328 01:03:13.696506 1130827 main.go:141] libmachine: (no-preload-248059) Ensuring networks are active...
	I0328 01:03:13.697344 1130827 main.go:141] libmachine: (no-preload-248059) Ensuring network default is active
	I0328 01:03:13.697668 1130827 main.go:141] libmachine: (no-preload-248059) Ensuring network mk-no-preload-248059 is active
	I0328 01:03:13.698009 1130827 main.go:141] libmachine: (no-preload-248059) Getting domain xml...
	I0328 01:03:13.698805 1130827 main.go:141] libmachine: (no-preload-248059) Creating domain...
	I0328 01:03:14.955922 1130827 main.go:141] libmachine: (no-preload-248059) Waiting to get IP...
	I0328 01:03:14.957088 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:14.957534 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:14.957660 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:14.957533 1132389 retry.go:31] will retry after 222.894093ms: waiting for machine to come up
	I0328 01:03:15.182078 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:15.182541 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:15.182580 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:15.182528 1132389 retry.go:31] will retry after 263.74163ms: waiting for machine to come up
	I0328 01:03:15.448081 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:15.448653 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:15.448684 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:15.448586 1132389 retry.go:31] will retry after 444.066222ms: waiting for machine to come up
	I0328 01:03:15.894141 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:15.894695 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:15.894732 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:15.894650 1132389 retry.go:31] will retry after 469.421771ms: waiting for machine to come up
	I0328 01:03:14.413443 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:16.418789 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:15.568507 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:16.068210 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:16.568761 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:17.067929 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:17.568403 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:18.068454 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:18.568086 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:19.068049 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:19.569020 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:20.068068 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:17.139682 1131600 crio.go:462] duration metric: took 1.71160157s to copy over tarball
	I0328 01:03:17.139764 1131600 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 01:03:19.581198 1131600 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.441406061s)
	I0328 01:03:19.581229 1131600 crio.go:469] duration metric: took 2.441510253s to extract the tarball
	I0328 01:03:19.581241 1131600 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0328 01:03:19.620964 1131600 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:19.666765 1131600 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 01:03:19.666791 1131600 cache_images.go:84] Images are preloaded, skipping loading
	I0328 01:03:19.666802 1131600 kubeadm.go:928] updating node { 192.168.39.224 8444 v1.29.3 crio true true} ...
	I0328 01:03:19.666921 1131600 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-283961 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.224
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:03:19.666987 1131600 ssh_runner.go:195] Run: crio config
	I0328 01:03:19.716082 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:03:19.716106 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:19.716115 1131600 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:03:19.716139 1131600 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.224 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-283961 NodeName:default-k8s-diff-port-283961 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.224"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.224 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 01:03:19.716323 1131600 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.224
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-283961"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.224
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.224"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
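The YAML above is the kubeadm/kubelet/kube-proxy configuration that minikube renders for default-k8s-diff-port-283961 and, a few lines further down, copies to /var/tmp/minikube/kubeadm.yaml.new on the guest. As a rough manual sanity check of such a file (a sketch only; it assumes the staged kubeadm binary under /var/lib/minikube/binaries/v1.29.3 is on the node and supports the config validate subcommand):

	# run on the guest: have kubeadm parse and validate the rendered config
	sudo /var/lib/minikube/binaries/v1.29.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
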
	I0328 01:03:19.716399 1131600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 01:03:19.727826 1131600 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:03:19.727913 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:03:19.738525 1131600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0328 01:03:19.756732 1131600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:03:19.776665 1131600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0328 01:03:19.795756 1131600 ssh_runner.go:195] Run: grep 192.168.39.224	control-plane.minikube.internal$ /etc/hosts
	I0328 01:03:19.800097 1131600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.224	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:19.813019 1131600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:19.946740 1131600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:03:19.964216 1131600 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961 for IP: 192.168.39.224
	I0328 01:03:19.964244 1131600 certs.go:194] generating shared ca certs ...
	I0328 01:03:19.964262 1131600 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:19.964448 1131600 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:03:19.964524 1131600 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:03:19.964538 1131600 certs.go:256] generating profile certs ...
	I0328 01:03:19.964648 1131600 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/client.key
	I0328 01:03:19.964735 1131600 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/apiserver.key.22bfb146
	I0328 01:03:19.964810 1131600 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/proxy-client.key
	I0328 01:03:19.964956 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:03:19.965008 1131600 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:03:19.965021 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:03:19.965058 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:03:19.965091 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:03:19.965113 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:03:19.965154 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:19.966026 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:03:19.998578 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:03:20.042666 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:03:20.075405 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:03:20.117888 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0328 01:03:20.145160 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:03:20.178207 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:03:20.208610 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:03:20.235356 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:03:20.262434 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:03:20.291315 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:03:20.318034 1131600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:03:20.337627 1131600 ssh_runner.go:195] Run: openssl version
	I0328 01:03:20.344242 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:03:20.360732 1131600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:20.365858 1131600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:20.365926 1131600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:20.372120 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 01:03:20.384554 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:03:20.401731 1131600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:03:20.406945 1131600 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:03:20.407024 1131600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:03:20.414661 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:03:20.427573 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:03:20.439807 1131600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:03:20.445064 1131600 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:03:20.445138 1131600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:03:20.451754 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:03:20.464988 1131600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:03:20.470461 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:03:20.477200 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:03:20.484238 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:03:20.491125 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:03:20.497888 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:03:20.504680 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
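The run of openssl commands above is minikube's certificate-expiry check: x509 -checkend 86400 makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how minikube decides whether a cert needs to be regenerated before reuse. A standalone sketch of the same check, using the apiserver cert path copied earlier in this log:

	# exit status 0 = still valid for at least 24h, non-zero = expires within 24h
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "cert valid for at least 24h" || echo "cert expires within 24h"
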
	I0328 01:03:20.511372 1131600 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:03:20.511477 1131600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:03:20.511542 1131600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:20.552247 1131600 cri.go:89] found id: ""
	I0328 01:03:20.552345 1131600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:03:20.564906 1131600 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:03:20.564937 1131600 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:03:20.564944 1131600 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:03:20.565002 1131600 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:03:20.576394 1131600 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:03:20.593699 1131600 kubeconfig.go:125] found "default-k8s-diff-port-283961" server: "https://192.168.39.224:8444"
	I0328 01:03:20.595978 1131600 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:03:20.609519 1131600 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.224
	I0328 01:03:20.609565 1131600 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:03:20.609583 1131600 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:03:20.609651 1131600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:20.651892 1131600 cri.go:89] found id: ""
	I0328 01:03:20.651967 1131600 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:03:20.671895 1131600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:03:16.365505 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:16.366404 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:16.366435 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:16.366360 1132389 retry.go:31] will retry after 488.383898ms: waiting for machine to come up
	I0328 01:03:16.856125 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:16.856727 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:16.856761 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:16.856626 1132389 retry.go:31] will retry after 617.77144ms: waiting for machine to come up
	I0328 01:03:17.476749 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:17.477351 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:17.477386 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:17.477282 1132389 retry.go:31] will retry after 835.951988ms: waiting for machine to come up
	I0328 01:03:18.315387 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:18.315894 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:18.315925 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:18.315848 1132389 retry.go:31] will retry after 1.405695765s: waiting for machine to come up
	I0328 01:03:19.723053 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:19.723559 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:19.723591 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:19.723473 1132389 retry.go:31] will retry after 1.555358462s: waiting for machine to come up
	I0328 01:03:18.913403 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:21.599662 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:20.568464 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:21.068983 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:21.568470 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:22.068772 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:22.568940 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:23.068907 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:23.568272 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.068055 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.568056 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:25.068006 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:20.685320 1131600 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:03:21.187521 1131600 kubeadm.go:156] found existing configuration files:
	
	I0328 01:03:21.187587 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0328 01:03:21.200463 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:03:21.200533 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:03:21.212763 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0328 01:03:21.224344 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:03:21.224419 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:03:21.235869 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0328 01:03:21.245970 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:03:21.246045 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:03:21.258589 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0328 01:03:21.270651 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:03:21.270724 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:03:21.283074 1131600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:03:21.295811 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:21.668224 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.046357 1131600 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.378083996s)
	I0328 01:03:23.046401 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.271959 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.353976 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.501611 1131600 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:03:23.501734 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.002619 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.502614 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.547383 1131600 api_server.go:72] duration metric: took 1.045771287s to wait for apiserver process to appear ...
	I0328 01:03:24.547419 1131600 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:03:24.547447 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:24.548081 1131600 api_server.go:269] stopped: https://192.168.39.224:8444/healthz: Get "https://192.168.39.224:8444/healthz": dial tcp 192.168.39.224:8444: connect: connection refused
	I0328 01:03:25.047885 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:21.279945 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:21.590947 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:21.590967 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:21.280358 1132389 retry.go:31] will retry after 1.905587467s: waiting for machine to come up
	I0328 01:03:23.187571 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:23.188214 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:23.188248 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:23.188159 1132389 retry.go:31] will retry after 2.68043246s: waiting for machine to come up
	I0328 01:03:25.871414 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:25.871997 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:25.872030 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:25.871956 1132389 retry.go:31] will retry after 2.689404788s: waiting for machine to come up
	I0328 01:03:23.913816 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:26.413616 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:27.352533 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:03:27.352570 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:03:27.352589 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:27.453408 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:27.453448 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:27.547781 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:27.552703 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:27.552738 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:28.048135 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:28.053291 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:28.053322 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:28.548374 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:28.553141 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:28.553178 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:29.047609 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:29.053027 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 200:
	ok
	I0328 01:03:29.060710 1131600 api_server.go:141] control plane version: v1.29.3
	I0328 01:03:29.060747 1131600 api_server.go:131] duration metric: took 4.513320481s to wait for apiserver health ...
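The 403 and 500 responses above come from the apiserver's /healthz endpoint: the 403 appears while anonymous access is still blocked (the RBAC bootstrap roles that allow unauthenticated reads of /healthz are not yet installed), and the 500s list each check and poststarthook, with [-] entries for the ones that have not finished; once everything reports ok the endpoint returns 200 and the wait above ends. A manual probe of the same endpoint from the guest might look like this (a sketch; assumes curl is available, uses -k to skip TLS verification, and ?verbose requests the per-check listing even on success):

	curl -k "https://192.168.39.224:8444/healthz?verbose"
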
	I0328 01:03:29.060757 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:03:29.060764 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:29.062763 1131600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:03:25.568927 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:26.068371 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:26.568107 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:27.068037 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:27.567985 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:28.068036 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:28.568843 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:29.068483 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:29.568942 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:30.068849 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:29.064492 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:03:29.089164 1131600 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:03:29.115071 1131600 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:03:29.126819 1131600 system_pods.go:59] 8 kube-system pods found
	I0328 01:03:29.126871 1131600 system_pods.go:61] "coredns-76f75df574-79cdj" [48ffe344-a386-4904-a73e-56e3ce0a8bef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:03:29.126885 1131600 system_pods.go:61] "etcd-default-k8s-diff-port-283961" [1d8fc768-e39c-4c96-bd65-2ae76fc9c6ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 01:03:29.126898 1131600 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-283961" [7c5c9f85-f16f-4248-8d2d-73c1ed2b0128] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 01:03:29.126912 1131600 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-283961" [2e943e7b-5506-4797-9e77-4a33e06056fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 01:03:29.126931 1131600 system_pods.go:61] "kube-proxy-d776v" [c1c86f61-b074-4a51-89e6-17c7b1076748] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:03:29.126944 1131600 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-283961" [8a840579-4145-4b68-ab3f-b1ebd3d63e81] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 01:03:29.126956 1131600 system_pods.go:61] "metrics-server-57f55c9bc5-w4ww4" [6d60f9e6-8ac7-4fad-91dc-61520586666c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:03:29.126968 1131600 system_pods.go:61] "storage-provisioner" [2b5e2e68-7e7c-46ec-bcec-ff9b01cbb8d9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:03:29.126979 1131600 system_pods.go:74] duration metric: took 11.875076ms to wait for pod list to return data ...
	I0328 01:03:29.126992 1131600 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:03:29.130927 1131600 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:03:29.130971 1131600 node_conditions.go:123] node cpu capacity is 2
	I0328 01:03:29.130986 1131600 node_conditions.go:105] duration metric: took 3.984383ms to run NodePressure ...
	I0328 01:03:29.131011 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:29.421513 1131600 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 01:03:29.426043 1131600 kubeadm.go:733] kubelet initialised
	I0328 01:03:29.426104 1131600 kubeadm.go:734] duration metric: took 4.524275ms waiting for restarted kubelet to initialise ...
	I0328 01:03:29.426114 1131600 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:03:29.432378 1131600 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-79cdj" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:28.563249 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:28.563778 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:28.563808 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:28.563718 1132389 retry.go:31] will retry after 2.919225956s: waiting for machine to come up
	I0328 01:03:28.913653 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:30.914379 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:31.484584 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.485027 1130827 main.go:141] libmachine: (no-preload-248059) Found IP for machine: 192.168.61.107
	I0328 01:03:31.485048 1130827 main.go:141] libmachine: (no-preload-248059) Reserving static IP address...
	I0328 01:03:31.485065 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has current primary IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.485584 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "no-preload-248059", mac: "52:54:00:58:33:e2", ip: "192.168.61.107"} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.485617 1130827 main.go:141] libmachine: (no-preload-248059) Reserved static IP address: 192.168.61.107
	I0328 01:03:31.485638 1130827 main.go:141] libmachine: (no-preload-248059) DBG | skip adding static IP to network mk-no-preload-248059 - found existing host DHCP lease matching {name: "no-preload-248059", mac: "52:54:00:58:33:e2", ip: "192.168.61.107"}
	I0328 01:03:31.485651 1130827 main.go:141] libmachine: (no-preload-248059) Waiting for SSH to be available...
	I0328 01:03:31.485671 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Getting to WaitForSSH function...
	I0328 01:03:31.487909 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.488293 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.488322 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.488469 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Using SSH client type: external
	I0328 01:03:31.488506 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa (-rw-------)
	I0328 01:03:31.488531 1130827 main.go:141] libmachine: (no-preload-248059) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:03:31.488541 1130827 main.go:141] libmachine: (no-preload-248059) DBG | About to run SSH command:
	I0328 01:03:31.488555 1130827 main.go:141] libmachine: (no-preload-248059) DBG | exit 0
	I0328 01:03:31.618358 1130827 main.go:141] libmachine: (no-preload-248059) DBG | SSH cmd err, output: <nil>: 
	I0328 01:03:31.618786 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetConfigRaw
	I0328 01:03:31.619494 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:31.622183 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.622555 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.622584 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.622889 1130827 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/config.json ...
	I0328 01:03:31.623120 1130827 machine.go:94] provisionDockerMachine start ...
	I0328 01:03:31.623147 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:31.623400 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:31.626078 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.626432 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.626458 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.626663 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:31.626864 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.627031 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.627179 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:31.627380 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:31.627595 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:31.627611 1130827 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:03:31.739662 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:03:31.739699 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:03:31.740049 1130827 buildroot.go:166] provisioning hostname "no-preload-248059"
	I0328 01:03:31.740086 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:03:31.740421 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:31.743410 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.743776 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.743811 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.744001 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:31.744212 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.744394 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.744515 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:31.744669 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:31.744846 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:31.744860 1130827 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-248059 && echo "no-preload-248059" | sudo tee /etc/hostname
	I0328 01:03:31.869330 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-248059
	
	I0328 01:03:31.869368 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:31.872451 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.872817 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.872868 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.873159 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:31.873405 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.873632 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.873803 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:31.873982 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:31.874220 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:31.874268 1130827 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-248059' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-248059/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-248059' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:03:31.997509 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:03:31.997543 1130827 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:03:31.997565 1130827 buildroot.go:174] setting up certificates
	I0328 01:03:31.997573 1130827 provision.go:84] configureAuth start
	I0328 01:03:31.997583 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:03:31.997870 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:32.000739 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.001127 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.001162 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.001306 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.003571 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.003958 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.003988 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.004162 1130827 provision.go:143] copyHostCerts
	I0328 01:03:32.004246 1130827 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:03:32.004261 1130827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:03:32.004329 1130827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:03:32.004442 1130827 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:03:32.004454 1130827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:03:32.004486 1130827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:03:32.004562 1130827 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:03:32.004572 1130827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:03:32.004602 1130827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:03:32.004667 1130827 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.no-preload-248059 san=[127.0.0.1 192.168.61.107 localhost minikube no-preload-248059]
	I0328 01:03:32.206585 1130827 provision.go:177] copyRemoteCerts
	I0328 01:03:32.206657 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:03:32.206691 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.210170 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.210636 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.210676 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.210979 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.211187 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.211364 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.211564 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:32.305858 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:03:32.337654 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0328 01:03:32.368942 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0328 01:03:32.401639 1130827 provision.go:87] duration metric: took 404.051415ms to configureAuth
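The configureAuth step above copies the host CA material and then issues a fresh server certificate whose SANs match the logged list (127.0.0.1, 192.168.61.107, localhost, minikube, no-preload-248059). A minimal standard-library sketch of that issuance follows; it is illustrative only, assumes a PKCS#1 RSA CA key, and the file names, validity period, and subject are assumptions rather than minikube's actual implementation.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Load the CA certificate and key (error handling omitted for brevity in this sketch).
		caCertPEM, _ := os.ReadFile("ca.pem")
		caKeyPEM, _ := os.ReadFile("ca-key.pem")
		caBlock, _ := pem.Decode(caCertPEM)
		caCert, _ := x509.ParseCertificate(caBlock.Bytes)
		keyBlock, _ := pem.Decode(caKeyPEM)
		caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes a PKCS#1 RSA key

		// Key pair for the new server certificate.
		serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)

		// SANs mirror the logged san=[...] list.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-248059"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0), // validity period is an assumption
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.107")},
			DNSNames:     []string{"localhost", "minikube", "no-preload-248059"},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)

		// server.pem / server-key.pem are then copied to /etc/docker on the guest, as logged above.
		_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
		keyDER := x509.MarshalPKCS1PrivateKey(serverKey)
		_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: keyDER}), 0o600)
	}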
	I0328 01:03:32.401669 1130827 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:03:32.401936 1130827 config.go:182] Loaded profile config "no-preload-248059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0328 01:03:32.402025 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.404890 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.405352 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.405387 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.405588 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.405858 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.406091 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.406303 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.406510 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:32.406731 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:32.406759 1130827 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:03:32.697738 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:03:32.697768 1130827 machine.go:97] duration metric: took 1.074632092s to provisionDockerMachine
	I0328 01:03:32.697781 1130827 start.go:293] postStartSetup for "no-preload-248059" (driver="kvm2")
	I0328 01:03:32.697795 1130827 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:03:32.697812 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.698263 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:03:32.698298 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.701020 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.701421 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.701450 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.701609 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.701837 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.702010 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.702188 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:32.790670 1130827 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:03:32.795098 1130827 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:03:32.795131 1130827 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:03:32.795222 1130827 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:03:32.795297 1130827 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:03:32.795402 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:03:32.806276 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:32.832753 1130827 start.go:296] duration metric: took 134.954685ms for postStartSetup
	I0328 01:03:32.832801 1130827 fix.go:56] duration metric: took 19.16097847s for fixHost
	I0328 01:03:32.832825 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.835830 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.836199 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.836237 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.836472 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.836707 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.836949 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.837104 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.837339 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:32.837551 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:32.837563 1130827 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 01:03:32.947440 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587812.922631180
	
	I0328 01:03:32.947477 1130827 fix.go:216] guest clock: 1711587812.922631180
	I0328 01:03:32.947486 1130827 fix.go:229] Guest: 2024-03-28 01:03:32.92263118 +0000 UTC Remote: 2024-03-28 01:03:32.832804811 +0000 UTC m=+356.715929719 (delta=89.826369ms)
	I0328 01:03:32.947507 1130827 fix.go:200] guest clock delta is within tolerance: 89.826369ms
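For reference, the guest-clock check logged just above boils down to parsing the guest's date +%s.%N output and comparing it with the host clock; the sketch below uses the two logged timestamps and reproduces the ~89.8ms delta. The one-second tolerance in main is an assumption for illustration, not the tool's configured value.

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// clockDelta converts the guest's "seconds.nanoseconds" string into a time and
	// returns how far it is from the supplied host timestamp.
	func clockDelta(guestOutput string, hostNow time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestOutput, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return hostNow.Sub(guest), nil
	}

	func main() {
		// Values taken from the log lines above (guest clock vs. Remote timestamp).
		host := time.Unix(0, 1711587812832804811)
		d, err := clockDelta("1711587812.922631180", host)
		if err != nil {
			panic(err)
		}
		if d < 0 {
			d = -d
		}
		fmt.Printf("delta=%v, within 1s tolerance: %v\n", d, d < time.Second)
	}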
	I0328 01:03:32.947512 1130827 start.go:83] releasing machines lock for "no-preload-248059", held for 19.275724068s
	I0328 01:03:32.947531 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.947805 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:32.950439 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.950814 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.950844 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.950992 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.951517 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.951707 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.951809 1130827 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:03:32.951852 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.951938 1130827 ssh_runner.go:195] Run: cat /version.json
	I0328 01:03:32.951964 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.954721 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955058 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955135 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.955165 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955314 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.955473 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.955512 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.955538 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955622 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.955698 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.955809 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:32.955859 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.956001 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.956134 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:33.079381 1130827 ssh_runner.go:195] Run: systemctl --version
	I0328 01:03:33.086184 1130827 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:03:33.241799 1130827 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:03:33.248779 1130827 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:03:33.248893 1130827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:03:33.267944 1130827 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:03:33.267977 1130827 start.go:494] detecting cgroup driver to use...
	I0328 01:03:33.268082 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:03:33.286132 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:03:33.301676 1130827 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:03:33.301762 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:03:33.317202 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:03:33.333162 1130827 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:03:33.458738 1130827 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:03:33.608509 1130827 docker.go:233] disabling docker service ...
	I0328 01:03:33.608623 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:03:33.626616 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:03:33.641798 1130827 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:03:33.808865 1130827 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:03:33.962636 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:03:33.978138 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:03:34.002323 1130827 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 01:03:34.002404 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.014483 1130827 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:03:34.014589 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.028647 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.041601 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.054993 1130827 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:03:34.066671 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.079389 1130827 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.099660 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.112379 1130827 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:03:34.123050 1130827 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:03:34.123109 1130827 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:03:34.137132 1130827 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:03:34.147092 1130827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:34.282367 1130827 ssh_runner.go:195] Run: sudo systemctl restart crio
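Reconstructed from the sed commands above (not read back from the guest), the /etc/crio/crio.conf.d/02-crio.conf drop-in ends up with roughly these settings once crio is restarted; the section headers shown are the stock CRI-O ones, since the logged edits only rewrite individual keys:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]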
	I0328 01:03:34.436510 1130827 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:03:34.436599 1130827 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:03:34.443019 1130827 start.go:562] Will wait 60s for crictl version
	I0328 01:03:34.443092 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:34.447740 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:03:34.488366 1130827 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:03:34.488469 1130827 ssh_runner.go:195] Run: crio --version
	I0328 01:03:34.520940 1130827 ssh_runner.go:195] Run: crio --version
	I0328 01:03:34.557953 1130827 out.go:177] * Preparing Kubernetes v1.30.0-beta.0 on CRI-O 1.29.1 ...
	I0328 01:03:30.568918 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:31.068097 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:31.568306 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:32.068345 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:32.568773 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:33.068072 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:33.568377 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:34.068141 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:34.568574 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:35.067986 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:31.439199 1131600 pod_ready.go:102] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:33.439575 1131600 pod_ready.go:102] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:34.559624 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:34.563089 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:34.563549 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:34.563583 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:34.563943 1130827 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0328 01:03:34.570153 1130827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:34.584566 1130827 kubeadm.go:877] updating cluster {Name:no-preload-248059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-248059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:03:34.584723 1130827 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0328 01:03:34.584786 1130827 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:34.620182 1130827 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-beta.0". assuming images are not preloaded.
	I0328 01:03:34.620215 1130827 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-beta.0 registry.k8s.io/kube-controller-manager:v1.30.0-beta.0 registry.k8s.io/kube-scheduler:v1.30.0-beta.0 registry.k8s.io/kube-proxy:v1.30.0-beta.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0328 01:03:34.620297 1130827 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:34.620312 1130827 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:34.620333 1130827 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:34.620301 1130827 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.620374 1130827 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:34.620401 1130827 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0328 01:03:34.620481 1130827 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:34.620319 1130827 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:34.621997 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.622009 1130827 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:34.622009 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:34.622052 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:34.621997 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:34.622115 1130827 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:34.621996 1130827 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:34.622438 1130827 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0328 01:03:34.832761 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.849045 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0328 01:03:34.868049 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:34.883941 1130827 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-beta.0" does not exist at hash "3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8" in container runtime
	I0328 01:03:34.883988 1130827 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.884047 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:34.884972 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:34.887551 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:34.899677 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:34.904772 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:35.045850 1130827 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0328 01:03:35.045906 1130827 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:35.045944 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.045959 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:35.064862 1130827 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" does not exist at hash "c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa" in container runtime
	I0328 01:03:35.064908 1130827 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:35.064959 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.066700 1130827 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" does not exist at hash "f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841" in container runtime
	I0328 01:03:35.066753 1130827 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:35.066820 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.097425 1130827 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" does not exist at hash "746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac" in container runtime
	I0328 01:03:35.097479 1130827 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:35.097546 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.097619 1130827 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0328 01:03:35.097667 1130827 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:35.097715 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.126977 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:35.126980 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.127020 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:35.127084 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:35.127090 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.127082 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:35.127161 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:35.264395 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:35.264499 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0 (exists)
	I0328 01:03:35.264534 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:35.264543 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.264506 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0328 01:03:35.264590 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.264631 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:35.264652 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0328 01:03:35.264516 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:35.264584 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0328 01:03:35.264717 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:35.264728 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:35.264768 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0328 01:03:35.269734 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0328 01:03:35.277344 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0 (exists)
	I0328 01:03:35.277580 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0328 01:03:35.279792 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0 (exists)
	I0328 01:03:35.280423 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0 (exists)
	I0328 01:03:35.535980 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:33.413451 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:35.414017 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:37.913609 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:35.568345 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:36.068227 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:36.568528 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:37.068834 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:37.568407 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:38.068142 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:38.568732 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:39.068094 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:39.568799 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:40.068973 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:35.940767 1131600 pod_ready.go:102] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:37.440919 1131600 pod_ready.go:92] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:37.440949 1131600 pod_ready.go:81] duration metric: took 8.008542386s for pod "coredns-76f75df574-79cdj" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:37.440963 1131600 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:39.452822 1131600 pod_ready.go:102] pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:40.467937 1131600 pod_ready.go:92] pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.467973 1131600 pod_ready.go:81] duration metric: took 3.027001179s for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.467987 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.491342 1131600 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.491373 1131600 pod_ready.go:81] duration metric: took 23.375914ms for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.491387 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.511379 1131600 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.511414 1131600 pod_ready.go:81] duration metric: took 20.018124ms for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.511430 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d776v" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.526689 1131600 pod_ready.go:92] pod "kube-proxy-d776v" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.526724 1131600 pod_ready.go:81] duration metric: took 15.28424ms for pod "kube-proxy-d776v" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.526738 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:37.431690 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0: (2.167073369s)
	I0328 01:03:37.431729 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 from cache
	I0328 01:03:37.431755 1130827 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0328 01:03:37.431764 1130827 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.895749302s)
	I0328 01:03:37.431805 1130827 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0328 01:03:37.431811 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0328 01:03:37.431837 1130827 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:37.431870 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:39.913936 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:42.412656 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:40.568441 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:41.068790 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:41.568919 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:42.068166 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:42.568012 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:43.068027 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:43.568916 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:44.067940 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:44.568074 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:45.068786 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:42.535179 1131600 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:44.034128 1131600 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:44.034164 1131600 pod_ready.go:81] duration metric: took 3.507415677s for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:44.034175 1131600 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace to be "Ready" ...
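The pod_ready.go lines interleaved above all follow the same pattern: re-check the pod's Ready condition on an interval until it flips to True or the 4m0s budget is exhausted. A generic sketch of that loop is below; the 500ms interval and the stand-in predicate are assumptions, and a real check would query the pod status via the Kubernetes API.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitReady re-runs check until it returns true, an error, or the timeout elapses.
	func waitReady(check func() (bool, error), timeout, interval time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			ready, err := check()
			if err != nil {
				return err
			}
			if ready {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for condition")
			}
			time.Sleep(interval)
		}
	}

	func main() {
		start := time.Now()
		// Stand-in predicate: "becomes Ready" three seconds after polling starts.
		err := waitReady(func() (bool, error) {
			return time.Since(start) > 3*time.Second, nil
		}, 4*time.Minute, 500*time.Millisecond)
		fmt.Println("pod ready:", err == nil)
	}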
	I0328 01:03:41.523268 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.091420228s)
	I0328 01:03:41.523305 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0328 01:03:41.523330 1130827 ssh_runner.go:235] Completed: which crictl: (4.091431875s)
	I0328 01:03:41.523345 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:41.523412 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:41.523445 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:41.567312 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0328 01:03:41.567455 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0328 01:03:44.336954 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0: (2.813479223s)
	I0328 01:03:44.336991 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 from cache
	I0328 01:03:44.336994 1130827 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.769509386s)
	I0328 01:03:44.337020 1130827 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0328 01:03:44.337035 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0328 01:03:44.337080 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0328 01:03:44.414767 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:46.415110 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:45.568662 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:46.068299 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:46.568793 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:47.068929 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:47.568250 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:48.068910 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:48.568138 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:49.068128 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:49.568153 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:50.068075 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:46.042489 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:48.541049 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:50.547355 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:46.297705 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.960592772s)
	I0328 01:03:46.297744 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0328 01:03:46.297776 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:46.297828 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:47.769522 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0: (1.471661236s)
	I0328 01:03:47.769569 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 from cache
	I0328 01:03:47.769602 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:47.769656 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:50.231843 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0: (2.462162757s)
	I0328 01:03:50.231876 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 from cache
	I0328 01:03:50.231902 1130827 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0328 01:03:50.231956 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0328 01:03:48.913184 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:51.412474 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:50.568929 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:51.068812 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:51.568899 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:52.068890 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:52.568751 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:53.068406 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:53.568466 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.068039 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.568745 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:55.068690 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:53.041197 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:55.041977 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:51.188382 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0328 01:03:51.188441 1130827 cache_images.go:123] Successfully loaded all cached images
	I0328 01:03:51.188448 1130827 cache_images.go:92] duration metric: took 16.568214969s to LoadCachedImages
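	(Editor's note: the image-loading step above streams each cached tarball into the node's CRI-O image store with "sudo podman load -i <tarball>". A minimal sketch of that loop, an assumption-laden illustration rather than minikube's cache_images implementation; the tarball paths are taken from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// loadCachedImages pushes each image tarball into the podman/CRI-O store,
	// mirroring the repeated "sudo podman load -i ..." calls in the log.
	func loadCachedImages(tarballs []string) error {
		for _, t := range tarballs {
			cmd := exec.Command("sudo", "podman", "load", "-i", t)
			if out, err := cmd.CombinedOutput(); err != nil {
				return fmt.Errorf("podman load %s: %v: %s", t, err, out)
			}
		}
		return nil
	}

	func main() {
		imgs := []string{
			"/var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0",
			"/var/lib/minikube/images/coredns_v1.11.1",
		}
		if err := loadCachedImages(imgs); err != nil {
			fmt.Println(err)
		}
	}
	)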
	I0328 01:03:51.188464 1130827 kubeadm.go:928] updating node { 192.168.61.107 8443 v1.30.0-beta.0 crio true true} ...
	I0328 01:03:51.188628 1130827 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-248059 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-248059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:03:51.188710 1130827 ssh_runner.go:195] Run: crio config
	I0328 01:03:51.237071 1130827 cni.go:84] Creating CNI manager for ""
	I0328 01:03:51.237099 1130827 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:51.237109 1130827 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:03:51.237131 1130827 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.107 APIServerPort:8443 KubernetesVersion:v1.30.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-248059 NodeName:no-preload-248059 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 01:03:51.237263 1130827 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-248059"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
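	(Editor's note: the generated file above combines InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration documents. As a rough sketch, assuming the gopkg.in/yaml.v3 package and modelling only the few KubeletConfiguration fields relevant here, the kubelet section can be decoded like this to confirm the cgroup driver and CRI endpoint match what CRI-O expects:

	package main

	import (
		"fmt"

		"gopkg.in/yaml.v3"
	)

	// kubeletCfg models only the handful of fields inspected here; the real
	// type lives in kubelet.config.k8s.io/v1beta1.
	type kubeletCfg struct {
		CgroupDriver             string `yaml:"cgroupDriver"`
		ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
		StaticPodPath            string `yaml:"staticPodPath"`
	}

	func main() {
		doc := `
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	staticPodPath: /etc/kubernetes/manifests
	`
		var c kubeletCfg
		if err := yaml.Unmarshal([]byte(doc), &c); err != nil {
			panic(err)
		}
		fmt.Printf("driver=%s endpoint=%s manifests=%s\n",
			c.CgroupDriver, c.ContainerRuntimeEndpoint, c.StaticPodPath)
	}
	)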
	
	I0328 01:03:51.237330 1130827 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-beta.0
	I0328 01:03:51.248044 1130827 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:03:51.248113 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:03:51.257854 1130827 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0328 01:03:51.276307 1130827 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0328 01:03:51.294698 1130827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0328 01:03:51.313297 1130827 ssh_runner.go:195] Run: grep 192.168.61.107	control-plane.minikube.internal$ /etc/hosts
	I0328 01:03:51.317668 1130827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:51.330478 1130827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:51.457500 1130827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:03:51.484463 1130827 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059 for IP: 192.168.61.107
	I0328 01:03:51.484493 1130827 certs.go:194] generating shared ca certs ...
	I0328 01:03:51.484518 1130827 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:51.484718 1130827 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:03:51.484768 1130827 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:03:51.484781 1130827 certs.go:256] generating profile certs ...
	I0328 01:03:51.484910 1130827 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/client.key
	I0328 01:03:51.484989 1130827 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/apiserver.key.85d037b2
	I0328 01:03:51.485040 1130827 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/proxy-client.key
	I0328 01:03:51.485196 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:03:51.485243 1130827 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:03:51.485257 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:03:51.485292 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:03:51.485327 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:03:51.485357 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:03:51.485416 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:51.486614 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:03:51.537554 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:03:51.587256 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:03:51.620264 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:03:51.652100 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0328 01:03:51.694388 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:03:51.720913 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:03:51.747141 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0328 01:03:51.776370 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:03:51.803168 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:03:51.831138 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:03:51.857272 1130827 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:03:51.876070 1130827 ssh_runner.go:195] Run: openssl version
	I0328 01:03:51.882197 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:03:51.893560 1130827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:03:51.898293 1130827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:03:51.898361 1130827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:03:51.904549 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:03:51.918175 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:03:51.930387 1130827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:03:51.935610 1130827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:03:51.935691 1130827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:03:51.942127 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:03:51.954252 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:03:51.966727 1130827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:51.971742 1130827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:51.971810 1130827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:51.978082 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 01:03:51.992233 1130827 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:03:51.997556 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:03:52.004178 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:03:52.010666 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:03:52.017076 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:03:52.023334 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:03:52.029980 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
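	(Editor's note: the "openssl x509 -noout -in <cert> -checkend 86400" calls above ask whether each control-plane certificate will still be valid 24 hours from now. A standard-library Go equivalent, with the path taken from the log purely for illustration:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM-encoded certificate at path expires
	// within d, mirroring the check `openssl x509 -checkend <seconds>` performs.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(soon, err)
	}
	)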
	I0328 01:03:52.036395 1130827 kubeadm.go:391] StartCluster: {Name:no-preload-248059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0-beta.0 ClusterName:no-preload-248059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0
m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:03:52.036483 1130827 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:03:52.036539 1130827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:52.080486 1130827 cri.go:89] found id: ""
	I0328 01:03:52.080580 1130827 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:03:52.094552 1130827 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:03:52.094583 1130827 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:03:52.094599 1130827 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:03:52.094650 1130827 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:03:52.107008 1130827 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:03:52.108200 1130827 kubeconfig.go:125] found "no-preload-248059" server: "https://192.168.61.107:8443"
	I0328 01:03:52.110536 1130827 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:03:52.122998 1130827 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.107
	I0328 01:03:52.123044 1130827 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:03:52.123090 1130827 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:03:52.123170 1130827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:52.165568 1130827 cri.go:89] found id: ""
	I0328 01:03:52.165666 1130827 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:03:52.183930 1130827 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:03:52.195188 1130827 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:03:52.195215 1130827 kubeadm.go:156] found existing configuration files:
	
	I0328 01:03:52.195271 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:03:52.205872 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:03:52.205932 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:03:52.216481 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:03:52.226719 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:03:52.226787 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:03:52.238852 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:03:52.250272 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:03:52.250341 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:03:52.262474 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:03:52.273981 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:03:52.274059 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:03:52.286028 1130827 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:03:52.297016 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:52.406981 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.521529 1130827 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.114505514s)
	I0328 01:03:53.521569 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.735728 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.808590 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
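	(Editor's note: the restart path does not run a full "kubeadm init"; it replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing config, as the five commands above show. A sketch of driving those phases in order, with paths taken from the log and error handling trimmed:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		kubeadm := "/var/lib/minikube/binaries/v1.30.0-beta.0/kubeadm"
		cfg := "/var/tmp/minikube/kubeadm.yaml"
		// Phases replayed during a control-plane restart, in the order the log shows.
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", cfg)
			out, err := exec.Command("sudo", append([]string{kubeadm}, args...)...).CombinedOutput()
			if err != nil {
				fmt.Printf("%v failed: %v\n%s\n", p, err, out)
				return
			}
		}
		fmt.Println("control-plane phases replayed")
	}
	)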
	I0328 01:03:53.931165 1130827 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:03:53.931281 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.432358 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.931653 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.948811 1130827 api_server.go:72] duration metric: took 1.017647613s to wait for apiserver process to appear ...
	I0328 01:03:54.948843 1130827 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:03:54.948871 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:54.949490 1130827 api_server.go:269] stopped: https://192.168.61.107:8443/healthz: Get "https://192.168.61.107:8443/healthz": dial tcp 192.168.61.107:8443: connect: connection refused
	I0328 01:03:55.449050 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:53.413775 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:55.914095 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:57.515811 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:03:57.515852 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:03:57.515872 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:57.564527 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:03:57.564560 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:03:57.949780 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:57.955515 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0328 01:03:57.955565 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0328 01:03:58.449103 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:58.456345 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0328 01:03:58.456384 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0328 01:03:58.949575 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:58.954466 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0328 01:03:58.961213 1130827 api_server.go:141] control plane version: v1.30.0-beta.0
	I0328 01:03:58.961244 1130827 api_server.go:131] duration metric: took 4.012391589s to wait for apiserver health ...
	I0328 01:03:58.961256 1130827 cni.go:84] Creating CNI manager for ""
	I0328 01:03:58.961265 1130827 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:58.963147 1130827 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
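	(Editor's note: the healthz probe above tolerates the 403 and 500 responses returned while the apiserver's post-start hooks finish, and only treats a 200 "ok" body as healthy. A stripped-down illustrative version of that probe, skipping TLS verification because the apiserver serves a self-signed certificate during bootstrap, with the URL taken from the log:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls the apiserver /healthz endpoint until it returns 200,
	// ignoring the transient 403/500 responses seen while bootstrap hooks run.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		fmt.Println(waitHealthz("https://192.168.61.107:8443/healthz", 4*time.Minute))
	}
	)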
	I0328 01:03:55.568378 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:56.068253 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:56.568989 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:57.068709 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:57.569038 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:58.068236 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:58.568386 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:59.068971 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:59.568858 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:00.067964 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:57.043266 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:59.541626 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:58.964446 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:03:58.979425 1130827 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:03:59.042826 1130827 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:03:59.060388 1130827 system_pods.go:59] 8 kube-system pods found
	I0328 01:03:59.060429 1130827 system_pods.go:61] "coredns-7db6d8ff4d-86n4s" [71402ca8-dfa7-4caf-a422-6de9f24bf9dc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:03:59.060439 1130827 system_pods.go:61] "etcd-no-preload-248059" [954b6886-b84f-4d94-bbce-7e520142eb4b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 01:03:59.060451 1130827 system_pods.go:61] "kube-apiserver-no-preload-248059" [2d3caabe-27c2-44e7-8f52-76e03f262e2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 01:03:59.060462 1130827 system_pods.go:61] "kube-controller-manager-no-preload-248059" [30b9f4aa-c9a7-4d91-8e4d-35ad32f40425] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 01:03:59.060472 1130827 system_pods.go:61] "kube-proxy-b9qpb" [7ab4cca8-0ba2-4177-84cd-c6ac045930fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:03:59.060481 1130827 system_pods.go:61] "kube-scheduler-no-preload-248059" [4d9e45e3-d990-40d4-a4be-8384c39eb9b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 01:03:59.060493 1130827 system_pods.go:61] "metrics-server-569cc877fc-cvnrj" [063a47ac-9ceb-4521-9dde-aca02ec5e0d8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:03:59.060508 1130827 system_pods.go:61] "storage-provisioner" [0a0eb2d3-a426-4b76-8009-1a0a0e0312bb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:03:59.060518 1130827 system_pods.go:74] duration metric: took 17.666067ms to wait for pod list to return data ...
	I0328 01:03:59.060533 1130827 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:03:59.065018 1130827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:03:59.065054 1130827 node_conditions.go:123] node cpu capacity is 2
	I0328 01:03:59.065071 1130827 node_conditions.go:105] duration metric: took 4.531253ms to run NodePressure ...
	I0328 01:03:59.065097 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:59.454609 1130827 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 01:03:59.459707 1130827 kubeadm.go:733] kubelet initialised
	I0328 01:03:59.459730 1130827 kubeadm.go:734] duration metric: took 5.09757ms waiting for restarted kubelet to initialise ...
	I0328 01:03:59.459739 1130827 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:03:59.465352 1130827 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.471020 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.471054 1130827 pod_ready.go:81] duration metric: took 5.676291ms for pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.471067 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.471075 1130827 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.476393 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "etcd-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.476421 1130827 pod_ready.go:81] duration metric: took 5.333391ms for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.476430 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "etcd-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.476436 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.485889 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "kube-apiserver-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.485924 1130827 pod_ready.go:81] duration metric: took 9.481204ms for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.485937 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "kube-apiserver-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.485957 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.491064 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.491095 1130827 pod_ready.go:81] duration metric: took 5.125981ms for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.491107 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.491116 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-b9qpb" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.858724 1130827 pod_ready.go:92] pod "kube-proxy-b9qpb" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:59.858753 1130827 pod_ready.go:81] duration metric: took 367.628034ms for pod "kube-proxy-b9qpb" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.858764 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
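	(Editor's note: the pod_ready checks above read each pod's Ready condition and skip pods whose node is itself not Ready. A hedged client-go sketch of the same per-pod check, assuming k8s.io/client-go and using the kubeconfig path and pod name from the log only as examples:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the named pod has condition Ready=True.
	func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println(podReady(cs, "kube-system", "kube-proxy-b9qpb"))
	}
	)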
	I0328 01:03:58.413911 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:00.913297 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:02.913414 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:00.568622 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:01.067943 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:01.567964 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:02.068537 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:02.568772 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:03.068458 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:03.568943 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:04.068085 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:04.068176 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:04.112601 1131323 cri.go:89] found id: ""
	I0328 01:04:04.112631 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.112642 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:04.112650 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:04.112726 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:04.151837 1131323 cri.go:89] found id: ""
	I0328 01:04:04.151873 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.151885 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:04.151894 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:04.151965 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:04.193411 1131323 cri.go:89] found id: ""
	I0328 01:04:04.193451 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.193463 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:04.193473 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:04.193545 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:04.239623 1131323 cri.go:89] found id: ""
	I0328 01:04:04.239652 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.239662 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:04.239673 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:04.239732 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:04.279561 1131323 cri.go:89] found id: ""
	I0328 01:04:04.279600 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.279615 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:04.279627 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:04.279708 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:04.318680 1131323 cri.go:89] found id: ""
	I0328 01:04:04.318710 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.318722 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:04.318731 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:04.318797 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:04.356486 1131323 cri.go:89] found id: ""
	I0328 01:04:04.356514 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.356523 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:04.356530 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:04.356586 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:04.394281 1131323 cri.go:89] found id: ""
	I0328 01:04:04.394319 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.394334 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:04.394348 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:04.394364 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:04.458688 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:04.458729 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:04.501399 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:04.501440 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:04.556183 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:04.556225 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:04.571392 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:04.571427 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:04.709967 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:02.041555 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:04.541464 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:01.866183 1130827 pod_ready.go:102] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:03.868706 1130827 pod_ready.go:102] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:04.915667 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:07.412548 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:07.210550 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:07.224274 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:07.224345 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:07.262604 1131323 cri.go:89] found id: ""
	I0328 01:04:07.262640 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.262665 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:07.262674 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:07.262763 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:07.296868 1131323 cri.go:89] found id: ""
	I0328 01:04:07.296907 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.296918 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:07.296926 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:07.296992 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:07.333110 1131323 cri.go:89] found id: ""
	I0328 01:04:07.333149 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.333162 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:07.333171 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:07.333240 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:07.371138 1131323 cri.go:89] found id: ""
	I0328 01:04:07.371168 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.371186 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:07.371195 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:07.371259 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:07.412197 1131323 cri.go:89] found id: ""
	I0328 01:04:07.412230 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.412242 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:07.412251 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:07.412331 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:07.457021 1131323 cri.go:89] found id: ""
	I0328 01:04:07.457052 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.457070 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:07.457080 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:07.457153 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:07.517996 1131323 cri.go:89] found id: ""
	I0328 01:04:07.518026 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.518034 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:07.518040 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:07.518111 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:07.556829 1131323 cri.go:89] found id: ""
	I0328 01:04:07.556856 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.556865 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:07.556875 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:07.556890 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:07.572234 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:07.572270 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:07.648615 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:07.648641 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:07.648658 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:07.719617 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:07.719665 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:07.764053 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:07.764097 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:10.319480 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:06.542160 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:08.550725 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:06.366150 1130827 pod_ready.go:102] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:07.365200 1130827 pod_ready.go:92] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:04:07.365233 1130827 pod_ready.go:81] duration metric: took 7.506461201s for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:04:07.365256 1130827 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace to be "Ready" ...
	I0328 01:04:09.373694 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:09.413378 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:11.913400 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:10.334347 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:10.335893 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:10.375231 1131323 cri.go:89] found id: ""
	I0328 01:04:10.375263 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.375274 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:10.375281 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:10.375353 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:10.413652 1131323 cri.go:89] found id: ""
	I0328 01:04:10.413706 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.413726 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:10.413736 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:10.413805 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:10.449546 1131323 cri.go:89] found id: ""
	I0328 01:04:10.449588 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.449597 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:10.449604 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:10.449658 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:10.487518 1131323 cri.go:89] found id: ""
	I0328 01:04:10.487556 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.487570 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:10.487579 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:10.487663 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:10.525088 1131323 cri.go:89] found id: ""
	I0328 01:04:10.525124 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.525137 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:10.525146 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:10.525213 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:10.567177 1131323 cri.go:89] found id: ""
	I0328 01:04:10.567209 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.567221 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:10.567231 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:10.567302 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:10.609440 1131323 cri.go:89] found id: ""
	I0328 01:04:10.609474 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.609485 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:10.609492 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:10.609549 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:10.652466 1131323 cri.go:89] found id: ""
	I0328 01:04:10.652502 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.652516 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:10.652529 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:10.652546 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:10.737406 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:10.737451 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:10.786955 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:10.786991 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:10.843072 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:10.843114 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:10.857209 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:10.857244 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:10.950885 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:13.451542 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:13.465833 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:13.465924 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:13.503353 1131323 cri.go:89] found id: ""
	I0328 01:04:13.503386 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.503398 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:13.503407 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:13.503474 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:13.543175 1131323 cri.go:89] found id: ""
	I0328 01:04:13.543208 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.543220 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:13.543229 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:13.543287 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:13.580796 1131323 cri.go:89] found id: ""
	I0328 01:04:13.580829 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.580840 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:13.580848 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:13.580900 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:13.619483 1131323 cri.go:89] found id: ""
	I0328 01:04:13.619516 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.619529 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:13.619539 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:13.619596 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:13.654651 1131323 cri.go:89] found id: ""
	I0328 01:04:13.654683 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.654697 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:13.654705 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:13.654774 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:13.691763 1131323 cri.go:89] found id: ""
	I0328 01:04:13.691794 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.691805 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:13.691813 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:13.691881 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:13.730580 1131323 cri.go:89] found id: ""
	I0328 01:04:13.730614 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.730627 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:13.730635 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:13.730694 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:13.767802 1131323 cri.go:89] found id: ""
	I0328 01:04:13.767834 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.767848 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:13.767860 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:13.767876 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:13.815612 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:13.815653 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:13.870945 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:13.870991 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:13.891456 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:13.891506 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:14.022124 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:14.022163 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:14.022187 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:11.041196 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:13.044490 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:15.541942 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:11.873574 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:13.875251 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:14.412081 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:16.412837 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:16.604087 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:16.618872 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:16.618971 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:16.665628 1131323 cri.go:89] found id: ""
	I0328 01:04:16.665661 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.665675 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:16.665683 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:16.665780 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:16.703727 1131323 cri.go:89] found id: ""
	I0328 01:04:16.703758 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.703768 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:16.703775 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:16.703835 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:16.741425 1131323 cri.go:89] found id: ""
	I0328 01:04:16.741455 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.741464 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:16.741470 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:16.741524 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:16.782333 1131323 cri.go:89] found id: ""
	I0328 01:04:16.782373 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.782387 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:16.782398 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:16.782469 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:16.820321 1131323 cri.go:89] found id: ""
	I0328 01:04:16.820355 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.820364 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:16.820372 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:16.820429 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:16.861091 1131323 cri.go:89] found id: ""
	I0328 01:04:16.861130 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.861144 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:16.861154 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:16.861226 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:16.901347 1131323 cri.go:89] found id: ""
	I0328 01:04:16.901394 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.901408 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:16.901418 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:16.901491 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:16.944027 1131323 cri.go:89] found id: ""
	I0328 01:04:16.944067 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.944080 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:16.944093 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:16.944110 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:16.959104 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:16.959151 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:17.035432 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:17.035464 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:17.035480 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:17.116236 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:17.116276 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:17.159321 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:17.159370 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:19.711326 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:19.726016 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:19.726094 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:19.776639 1131323 cri.go:89] found id: ""
	I0328 01:04:19.776676 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.776690 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:19.776700 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:19.776782 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:19.817849 1131323 cri.go:89] found id: ""
	I0328 01:04:19.817887 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.817897 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:19.817904 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:19.817981 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:19.855055 1131323 cri.go:89] found id: ""
	I0328 01:04:19.855089 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.855102 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:19.855110 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:19.855177 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:19.895296 1131323 cri.go:89] found id: ""
	I0328 01:04:19.895332 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.895346 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:19.895354 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:19.895414 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:19.930936 1131323 cri.go:89] found id: ""
	I0328 01:04:19.930968 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.930980 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:19.930989 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:19.931067 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:19.968573 1131323 cri.go:89] found id: ""
	I0328 01:04:19.968610 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.968623 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:19.968632 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:19.968693 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:20.006130 1131323 cri.go:89] found id: ""
	I0328 01:04:20.006180 1131323 logs.go:276] 0 containers: []
	W0328 01:04:20.006195 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:20.006203 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:20.006304 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:20.043646 1131323 cri.go:89] found id: ""
	I0328 01:04:20.043678 1131323 logs.go:276] 0 containers: []
	W0328 01:04:20.043689 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:20.043701 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:20.043717 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:20.058728 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:20.058761 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:20.136392 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:20.136417 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:20.136431 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:20.214971 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:20.215015 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:20.255002 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:20.255047 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:18.041868 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:20.542175 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:16.372600 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:18.373203 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:20.374228 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:18.913596 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:20.913978 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:22.914777 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:22.810078 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:22.824083 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:22.824169 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:22.862037 1131323 cri.go:89] found id: ""
	I0328 01:04:22.862066 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.862074 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:22.862081 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:22.862141 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:22.901625 1131323 cri.go:89] found id: ""
	I0328 01:04:22.901658 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.901670 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:22.901679 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:22.901752 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:22.938858 1131323 cri.go:89] found id: ""
	I0328 01:04:22.938891 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.938903 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:22.938912 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:22.938983 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:22.978781 1131323 cri.go:89] found id: ""
	I0328 01:04:22.978818 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.978829 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:22.978837 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:22.978910 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:23.016844 1131323 cri.go:89] found id: ""
	I0328 01:04:23.016882 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.016895 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:23.016904 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:23.016975 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:23.058456 1131323 cri.go:89] found id: ""
	I0328 01:04:23.058508 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.058522 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:23.058531 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:23.058604 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:23.099368 1131323 cri.go:89] found id: ""
	I0328 01:04:23.099399 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.099408 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:23.099420 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:23.099492 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:23.135593 1131323 cri.go:89] found id: ""
	I0328 01:04:23.135634 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.135653 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:23.135665 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:23.135679 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:23.191215 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:23.191260 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:23.206849 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:23.206884 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:23.289566 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:23.289596 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:23.289618 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:23.365429 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:23.365480 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:23.042312 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.541788 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:22.872233 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.373908 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.413591 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:27.912983 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.914883 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:25.929336 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:25.929415 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:25.969452 1131323 cri.go:89] found id: ""
	I0328 01:04:25.969485 1131323 logs.go:276] 0 containers: []
	W0328 01:04:25.969497 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:25.969506 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:25.969573 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:26.008978 1131323 cri.go:89] found id: ""
	I0328 01:04:26.009006 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.009015 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:26.009022 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:26.009075 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:26.051110 1131323 cri.go:89] found id: ""
	I0328 01:04:26.051138 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.051146 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:26.051153 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:26.051213 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:26.088231 1131323 cri.go:89] found id: ""
	I0328 01:04:26.088262 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.088271 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:26.088277 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:26.088342 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:26.125741 1131323 cri.go:89] found id: ""
	I0328 01:04:26.125782 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.125794 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:26.125800 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:26.125867 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:26.163367 1131323 cri.go:89] found id: ""
	I0328 01:04:26.163406 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.163417 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:26.163426 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:26.163503 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:26.202302 1131323 cri.go:89] found id: ""
	I0328 01:04:26.202340 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.202355 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:26.202364 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:26.202422 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:26.240880 1131323 cri.go:89] found id: ""
	I0328 01:04:26.240911 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.240921 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:26.240931 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:26.240943 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:26.283151 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:26.283180 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:26.341313 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:26.341350 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:26.356762 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:26.356791 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:26.428033 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:26.428054 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:26.428066 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:29.006332 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:29.020634 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:29.020745 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:29.060812 1131323 cri.go:89] found id: ""
	I0328 01:04:29.060843 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.060852 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:29.060859 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:29.060916 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:29.100110 1131323 cri.go:89] found id: ""
	I0328 01:04:29.100139 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.100149 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:29.100155 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:29.100212 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:29.140345 1131323 cri.go:89] found id: ""
	I0328 01:04:29.140384 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.140396 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:29.140404 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:29.140479 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:29.182415 1131323 cri.go:89] found id: ""
	I0328 01:04:29.182449 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.182459 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:29.182465 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:29.182533 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:29.225177 1131323 cri.go:89] found id: ""
	I0328 01:04:29.225214 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.225225 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:29.225233 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:29.225310 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:29.265437 1131323 cri.go:89] found id: ""
	I0328 01:04:29.265471 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.265485 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:29.265493 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:29.265556 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:29.301578 1131323 cri.go:89] found id: ""
	I0328 01:04:29.301617 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.301630 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:29.301639 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:29.301719 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:29.340816 1131323 cri.go:89] found id: ""
	I0328 01:04:29.340847 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.340856 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:29.340867 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:29.340880 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:29.384658 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:29.384687 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:29.439243 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:29.439285 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:29.456179 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:29.456211 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:29.534878 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:29.534906 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:29.534927 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:28.041463 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:30.042506 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:27.872489 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:30.371109 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:29.913856 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:32.415699 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:32.115798 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:32.130464 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:32.130560 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:32.168846 1131323 cri.go:89] found id: ""
	I0328 01:04:32.168877 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.168887 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:32.168894 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:32.168952 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:32.208590 1131323 cri.go:89] found id: ""
	I0328 01:04:32.208622 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.208632 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:32.208638 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:32.208694 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:32.247323 1131323 cri.go:89] found id: ""
	I0328 01:04:32.247362 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.247375 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:32.247384 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:32.247507 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:32.285260 1131323 cri.go:89] found id: ""
	I0328 01:04:32.285293 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.285312 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:32.285319 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:32.285395 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:32.326678 1131323 cri.go:89] found id: ""
	I0328 01:04:32.326712 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.326725 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:32.326740 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:32.326823 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:32.363375 1131323 cri.go:89] found id: ""
	I0328 01:04:32.363403 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.363412 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:32.363419 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:32.363473 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:32.401410 1131323 cri.go:89] found id: ""
	I0328 01:04:32.401449 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.401462 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:32.401470 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:32.401558 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:32.438645 1131323 cri.go:89] found id: ""
	I0328 01:04:32.438680 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.438691 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:32.438703 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:32.438718 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:32.488743 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:32.488786 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:32.503908 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:32.503944 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:32.577307 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:32.577333 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:32.577350 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:32.657787 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:32.657832 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:35.201151 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:35.215313 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:35.215383 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:35.253467 1131323 cri.go:89] found id: ""
	I0328 01:04:35.253504 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.253515 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:35.253522 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:35.253593 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:35.290218 1131323 cri.go:89] found id: ""
	I0328 01:04:35.290280 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.290292 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:35.290300 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:35.290378 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:35.330714 1131323 cri.go:89] found id: ""
	I0328 01:04:35.330749 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.330757 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:35.330764 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:35.330831 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:32.542071 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:34.544163 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:32.372100 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:34.872293 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:34.913212 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:37.411734 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:35.371524 1131323 cri.go:89] found id: ""
	I0328 01:04:35.371553 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.371570 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:35.371577 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:35.371630 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:35.411610 1131323 cri.go:89] found id: ""
	I0328 01:04:35.411638 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.411646 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:35.411652 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:35.411711 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:35.456709 1131323 cri.go:89] found id: ""
	I0328 01:04:35.456745 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.456758 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:35.456766 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:35.456836 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:35.492688 1131323 cri.go:89] found id: ""
	I0328 01:04:35.492719 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.492729 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:35.492755 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:35.492811 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:35.531205 1131323 cri.go:89] found id: ""
	I0328 01:04:35.531234 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.531243 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:35.531254 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:35.531266 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:35.611803 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:35.611845 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:35.653513 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:35.653551 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:35.708030 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:35.708075 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:35.724542 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:35.724576 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:35.798624 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:38.299312 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:38.314128 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:38.314213 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:38.357728 1131323 cri.go:89] found id: ""
	I0328 01:04:38.357761 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.357779 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:38.357786 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:38.357848 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:38.394512 1131323 cri.go:89] found id: ""
	I0328 01:04:38.394541 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.394549 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:38.394558 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:38.394618 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:38.434353 1131323 cri.go:89] found id: ""
	I0328 01:04:38.434380 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.434391 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:38.434399 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:38.434466 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:38.477662 1131323 cri.go:89] found id: ""
	I0328 01:04:38.477693 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.477703 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:38.477710 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:38.477763 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:38.515014 1131323 cri.go:89] found id: ""
	I0328 01:04:38.515044 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.515053 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:38.515060 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:38.515117 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:38.558865 1131323 cri.go:89] found id: ""
	I0328 01:04:38.558899 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.558911 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:38.558920 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:38.558982 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:38.600261 1131323 cri.go:89] found id: ""
	I0328 01:04:38.600290 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.600299 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:38.600306 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:38.600366 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:38.637131 1131323 cri.go:89] found id: ""
	I0328 01:04:38.637167 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.637179 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:38.637194 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:38.637218 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:38.716032 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:38.716058 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:38.716079 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:38.804534 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:38.804578 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:38.851781 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:38.851820 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:38.910091 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:38.910125 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
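	(The probe cycle above can be reproduced by hand on the node. A minimal sketch of the same checks, using the exact commands the harness runs; the loop over component names is only a convenience added here:)

	    # Is an apiserver process for this profile running at all?
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	    # Ask the CRI runtime for containers of each control-plane component;
	    # an empty result is what produces the "0 containers" / `No container
	    # was found` lines in the log.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      echo "== ${name} =="
	      sudo crictl ps -a --quiet --name="${name}"
	    done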
	I0328 01:04:37.041273 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:39.541843 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:37.372262 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:39.372555 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:39.912953 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:42.412667 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:41.425801 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:41.441072 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:41.441168 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:41.482934 1131323 cri.go:89] found id: ""
	I0328 01:04:41.482962 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.482974 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:41.482983 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:41.483063 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:41.521762 1131323 cri.go:89] found id: ""
	I0328 01:04:41.521796 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.521810 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:41.521819 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:41.521931 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:41.560814 1131323 cri.go:89] found id: ""
	I0328 01:04:41.560847 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.560857 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:41.560864 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:41.560928 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:41.601158 1131323 cri.go:89] found id: ""
	I0328 01:04:41.601189 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.601199 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:41.601206 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:41.601271 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:41.638760 1131323 cri.go:89] found id: ""
	I0328 01:04:41.638789 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.638799 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:41.638806 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:41.638861 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:41.675235 1131323 cri.go:89] found id: ""
	I0328 01:04:41.675268 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.675278 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:41.675285 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:41.675341 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:41.712918 1131323 cri.go:89] found id: ""
	I0328 01:04:41.712957 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.712972 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:41.712983 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:41.713078 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:41.750552 1131323 cri.go:89] found id: ""
	I0328 01:04:41.750582 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.750591 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:41.750601 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:41.750617 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:41.811163 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:41.811204 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:41.826502 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:41.826547 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:41.900727 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:41.900759 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:41.900777 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:41.981731 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:41.981783 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
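	(When no containers are found, the harness falls back to gathering host logs. The same information can be pulled manually with the commands it runs; the kubectl path and version are taken from the log above and assume the apiserver kubeconfig minikube writes on the node:)

	    # Recent kubelet and CRI-O service logs (last 400 lines each).
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400

	    # Kernel warnings/errors that might explain crashed components.
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

	    # Container status, preferring crictl and falling back to docker.
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a

	    # Node description via the kubectl binary minikube ships for this cluster;
	    # while the apiserver is down this fails with the "connection to the server
	    # localhost:8443 was refused" error seen throughout the log.
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig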
	I0328 01:04:44.525845 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:44.542301 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:44.542389 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:44.584907 1131323 cri.go:89] found id: ""
	I0328 01:04:44.584936 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.584945 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:44.584952 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:44.585007 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:44.630465 1131323 cri.go:89] found id: ""
	I0328 01:04:44.630499 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.630511 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:44.630520 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:44.630588 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:44.669095 1131323 cri.go:89] found id: ""
	I0328 01:04:44.669131 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.669143 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:44.669152 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:44.669235 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:44.708445 1131323 cri.go:89] found id: ""
	I0328 01:04:44.708484 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.708495 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:44.708502 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:44.708570 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:44.747706 1131323 cri.go:89] found id: ""
	I0328 01:04:44.747744 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.747755 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:44.747762 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:44.747822 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:44.787768 1131323 cri.go:89] found id: ""
	I0328 01:04:44.787807 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.787821 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:44.787830 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:44.787899 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:44.829018 1131323 cri.go:89] found id: ""
	I0328 01:04:44.829049 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.829059 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:44.829066 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:44.829123 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:44.874334 1131323 cri.go:89] found id: ""
	I0328 01:04:44.874374 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.874383 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:44.874393 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:44.874405 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:44.921577 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:44.921619 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:44.976660 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:44.976713 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:44.991365 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:44.991400 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:45.067595 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:45.067630 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:45.067651 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:42.042736 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:44.543288 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:41.372902 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:43.872925 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:45.873163 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:44.913827 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:47.412342 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:47.647634 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:47.663581 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:47.663687 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:47.702889 1131323 cri.go:89] found id: ""
	I0328 01:04:47.702940 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.702954 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:47.702966 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:47.703043 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:47.744995 1131323 cri.go:89] found id: ""
	I0328 01:04:47.745027 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.745037 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:47.745044 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:47.745103 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:47.785518 1131323 cri.go:89] found id: ""
	I0328 01:04:47.785550 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.785562 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:47.785572 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:47.785645 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:47.831739 1131323 cri.go:89] found id: ""
	I0328 01:04:47.831771 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.831786 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:47.831794 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:47.831867 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:47.871864 1131323 cri.go:89] found id: ""
	I0328 01:04:47.871906 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.871918 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:47.871929 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:47.872008 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:47.907899 1131323 cri.go:89] found id: ""
	I0328 01:04:47.907934 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.907946 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:47.907955 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:47.908022 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:47.946073 1131323 cri.go:89] found id: ""
	I0328 01:04:47.946107 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.946118 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:47.946127 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:47.946223 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:47.986122 1131323 cri.go:89] found id: ""
	I0328 01:04:47.986154 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.986168 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:47.986182 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:47.986198 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:48.057234 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:48.057271 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:48.109881 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:48.109926 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:48.125154 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:48.125189 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:48.208295 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:48.208327 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:48.208345 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:47.041447 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:49.542203 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:48.371275 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:50.372057 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:49.413451 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:51.414465 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:50.785126 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:50.800000 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:50.800078 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:50.839883 1131323 cri.go:89] found id: ""
	I0328 01:04:50.839911 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.839920 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:50.839927 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:50.839983 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:50.879627 1131323 cri.go:89] found id: ""
	I0328 01:04:50.879654 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.879661 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:50.879668 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:50.879734 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:50.918392 1131323 cri.go:89] found id: ""
	I0328 01:04:50.918434 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.918446 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:50.918454 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:50.918517 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:50.957198 1131323 cri.go:89] found id: ""
	I0328 01:04:50.957234 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.957248 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:50.957257 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:50.957328 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:50.997389 1131323 cri.go:89] found id: ""
	I0328 01:04:50.997424 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.997438 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:50.997446 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:50.997513 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:51.040259 1131323 cri.go:89] found id: ""
	I0328 01:04:51.040296 1131323 logs.go:276] 0 containers: []
	W0328 01:04:51.040309 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:51.040318 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:51.040389 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:51.081824 1131323 cri.go:89] found id: ""
	I0328 01:04:51.081858 1131323 logs.go:276] 0 containers: []
	W0328 01:04:51.081868 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:51.081875 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:51.081942 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:51.119742 1131323 cri.go:89] found id: ""
	I0328 01:04:51.119783 1131323 logs.go:276] 0 containers: []
	W0328 01:04:51.119796 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:51.119810 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:51.119836 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:51.173486 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:51.173529 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:51.188532 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:51.188568 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:51.269181 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:51.269207 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:51.269226 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:51.349882 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:51.349936 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:53.893562 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:53.910104 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:53.910186 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:53.951333 1131323 cri.go:89] found id: ""
	I0328 01:04:53.951375 1131323 logs.go:276] 0 containers: []
	W0328 01:04:53.951388 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:53.951397 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:53.951472 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:53.992438 1131323 cri.go:89] found id: ""
	I0328 01:04:53.992474 1131323 logs.go:276] 0 containers: []
	W0328 01:04:53.992486 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:53.992493 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:53.992561 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:54.032934 1131323 cri.go:89] found id: ""
	I0328 01:04:54.032969 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.032982 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:54.032992 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:54.033061 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:54.074670 1131323 cri.go:89] found id: ""
	I0328 01:04:54.074707 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.074777 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:54.074801 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:54.074875 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:54.111527 1131323 cri.go:89] found id: ""
	I0328 01:04:54.111555 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.111566 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:54.111573 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:54.111658 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:54.151401 1131323 cri.go:89] found id: ""
	I0328 01:04:54.151428 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.151437 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:54.151443 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:54.151494 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:54.197997 1131323 cri.go:89] found id: ""
	I0328 01:04:54.198036 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.198048 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:54.198058 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:54.198135 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:54.234016 1131323 cri.go:89] found id: ""
	I0328 01:04:54.234049 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.234058 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:54.234068 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:54.234081 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:54.286118 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:54.286161 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:54.300489 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:54.300541 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:54.376949 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:54.376972 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:54.376988 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:54.463857 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:54.463901 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:52.041517 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:54.042088 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:52.875923 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:55.371823 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:53.912140 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:55.912329 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:57.026395 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:57.041270 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:57.041358 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:57.082380 1131323 cri.go:89] found id: ""
	I0328 01:04:57.082416 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.082428 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:57.082436 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:57.082503 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:57.121835 1131323 cri.go:89] found id: ""
	I0328 01:04:57.121870 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.121885 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:57.121894 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:57.121969 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:57.163688 1131323 cri.go:89] found id: ""
	I0328 01:04:57.163725 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.163737 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:57.163745 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:57.163819 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:57.212628 1131323 cri.go:89] found id: ""
	I0328 01:04:57.212666 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.212693 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:57.212703 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:57.212788 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:57.249196 1131323 cri.go:89] found id: ""
	I0328 01:04:57.249231 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.249244 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:57.249253 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:57.249318 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:57.286996 1131323 cri.go:89] found id: ""
	I0328 01:04:57.287031 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.287040 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:57.287047 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:57.287101 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:57.324523 1131323 cri.go:89] found id: ""
	I0328 01:04:57.324551 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.324560 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:57.324566 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:57.324627 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:57.363946 1131323 cri.go:89] found id: ""
	I0328 01:04:57.363984 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.363998 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:57.364012 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:57.364034 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:57.418300 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:57.418337 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:57.433214 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:57.433242 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:57.508623 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:57.508651 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:57.508665 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:57.586336 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:57.586377 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:00.129903 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:00.146829 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:00.146920 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:00.197823 1131323 cri.go:89] found id: ""
	I0328 01:05:00.197856 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.197865 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:00.197872 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:00.197930 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:00.257523 1131323 cri.go:89] found id: ""
	I0328 01:05:00.257561 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.257575 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:00.257584 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:00.257657 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:00.314511 1131323 cri.go:89] found id: ""
	I0328 01:05:00.314539 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.314549 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:00.314558 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:00.314610 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:56.042295 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:58.541684 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:00.543232 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:57.372451 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:59.372577 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:58.412203 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:00.412880 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:02.913222 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:00.351043 1131323 cri.go:89] found id: ""
	I0328 01:05:00.351076 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.351090 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:00.351098 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:00.351167 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:00.391477 1131323 cri.go:89] found id: ""
	I0328 01:05:00.391507 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.391519 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:00.391525 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:00.391595 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:00.436196 1131323 cri.go:89] found id: ""
	I0328 01:05:00.436230 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.436242 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:00.436249 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:00.436316 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:00.473389 1131323 cri.go:89] found id: ""
	I0328 01:05:00.473428 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.473441 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:00.473450 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:00.473523 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:00.508829 1131323 cri.go:89] found id: ""
	I0328 01:05:00.508866 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.508879 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:00.508901 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:00.508931 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:00.553709 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:00.553741 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:00.612679 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:00.612732 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:00.630908 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:00.630948 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:00.706984 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:00.707016 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:00.707034 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:03.287887 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:03.304679 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:03.304779 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:03.343579 1131323 cri.go:89] found id: ""
	I0328 01:05:03.343608 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.343618 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:03.343625 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:03.343677 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:03.387158 1131323 cri.go:89] found id: ""
	I0328 01:05:03.387192 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.387206 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:03.387224 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:03.387308 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:03.426622 1131323 cri.go:89] found id: ""
	I0328 01:05:03.426653 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.426663 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:03.426670 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:03.426724 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:03.468743 1131323 cri.go:89] found id: ""
	I0328 01:05:03.468780 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.468793 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:03.468801 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:03.468870 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:03.508518 1131323 cri.go:89] found id: ""
	I0328 01:05:03.508554 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.508566 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:03.508575 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:03.508653 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:03.548295 1131323 cri.go:89] found id: ""
	I0328 01:05:03.548331 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.548343 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:03.548353 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:03.548444 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:03.591561 1131323 cri.go:89] found id: ""
	I0328 01:05:03.591594 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.591607 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:03.591615 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:03.591670 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:03.635055 1131323 cri.go:89] found id: ""
	I0328 01:05:03.635086 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.635097 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:03.635109 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:03.635127 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:03.715639 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:03.715683 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:03.755888 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:03.755931 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:03.810128 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:03.810170 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:03.825197 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:03.825227 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:03.908589 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:03.043330 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:05.541544 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:01.372692 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:03.373747 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:05.871945 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:05.413583 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:07.912379 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:06.409060 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:06.424034 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:06.424119 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:06.461827 1131323 cri.go:89] found id: ""
	I0328 01:05:06.461888 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.461902 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:06.461911 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:06.461985 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:06.505006 1131323 cri.go:89] found id: ""
	I0328 01:05:06.505061 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.505078 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:06.505085 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:06.505145 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:06.542000 1131323 cri.go:89] found id: ""
	I0328 01:05:06.542033 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.542044 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:06.542052 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:06.542121 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:06.583725 1131323 cri.go:89] found id: ""
	I0328 01:05:06.583778 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.583800 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:06.583810 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:06.583887 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:06.620457 1131323 cri.go:89] found id: ""
	I0328 01:05:06.620501 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.620516 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:06.620524 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:06.620595 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:06.664380 1131323 cri.go:89] found id: ""
	I0328 01:05:06.664412 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.664425 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:06.664432 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:06.664502 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:06.701799 1131323 cri.go:89] found id: ""
	I0328 01:05:06.701850 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.701862 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:06.701870 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:06.701935 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:06.739899 1131323 cri.go:89] found id: ""
	I0328 01:05:06.739936 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.739948 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:06.739958 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:06.739973 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:06.814373 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:06.814404 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:06.814421 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:06.894331 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:06.894371 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:06.952912 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:06.952979 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:07.011851 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:07.011900 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:09.528068 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:09.545082 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:09.545167 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:09.586944 1131323 cri.go:89] found id: ""
	I0328 01:05:09.586983 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.586996 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:09.587004 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:09.587077 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:09.624153 1131323 cri.go:89] found id: ""
	I0328 01:05:09.624184 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.624192 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:09.624198 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:09.624256 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:09.661125 1131323 cri.go:89] found id: ""
	I0328 01:05:09.661160 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.661172 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:09.661182 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:09.661256 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:09.699865 1131323 cri.go:89] found id: ""
	I0328 01:05:09.699903 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.699916 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:09.699925 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:09.699992 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:09.737925 1131323 cri.go:89] found id: ""
	I0328 01:05:09.737958 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.737967 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:09.737973 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:09.738029 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:09.776906 1131323 cri.go:89] found id: ""
	I0328 01:05:09.776941 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.776950 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:09.776957 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:09.777021 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:09.815767 1131323 cri.go:89] found id: ""
	I0328 01:05:09.815794 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.815803 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:09.815809 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:09.815876 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:09.855880 1131323 cri.go:89] found id: ""
	I0328 01:05:09.855915 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.855928 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:09.855941 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:09.855958 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:09.918339 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:09.918376 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:09.932775 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:09.932810 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:10.011566 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:10.011594 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:10.011610 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:10.096057 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:10.096100 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:08.041230 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:10.041991 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:07.873367 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:10.372311 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:09.913349 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:12.412259 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:12.641999 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:12.655761 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:12.655843 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:12.697335 1131323 cri.go:89] found id: ""
	I0328 01:05:12.697369 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.697381 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:12.697390 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:12.697453 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:12.736482 1131323 cri.go:89] found id: ""
	I0328 01:05:12.736520 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.736534 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:12.736544 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:12.736617 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:12.771992 1131323 cri.go:89] found id: ""
	I0328 01:05:12.772034 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.772046 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:12.772055 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:12.772125 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:12.810738 1131323 cri.go:89] found id: ""
	I0328 01:05:12.810770 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.810779 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:12.810786 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:12.810837 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:12.848172 1131323 cri.go:89] found id: ""
	I0328 01:05:12.848209 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.848222 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:12.848230 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:12.848310 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:12.884660 1131323 cri.go:89] found id: ""
	I0328 01:05:12.884698 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.884710 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:12.884719 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:12.884794 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:12.926180 1131323 cri.go:89] found id: ""
	I0328 01:05:12.926209 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.926218 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:12.926244 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:12.926303 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:12.966938 1131323 cri.go:89] found id: ""
	I0328 01:05:12.966969 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.966983 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:12.966996 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:12.967014 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:13.018501 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:13.018541 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:13.033140 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:13.033175 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:13.108806 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:13.108832 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:13.108858 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:13.189198 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:13.189241 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:12.541088 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:15.041830 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:12.372413 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:14.372804 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:14.414059 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:16.912343 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:15.737415 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:15.752534 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:15.752614 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:15.789941 1131323 cri.go:89] found id: ""
	I0328 01:05:15.789974 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.789986 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:15.789994 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:15.790107 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:15.827688 1131323 cri.go:89] found id: ""
	I0328 01:05:15.827731 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.827745 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:15.827766 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:15.827831 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:15.867005 1131323 cri.go:89] found id: ""
	I0328 01:05:15.867041 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.867054 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:15.867064 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:15.867149 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:15.909924 1131323 cri.go:89] found id: ""
	I0328 01:05:15.910035 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.910055 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:15.910066 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:15.910139 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:15.950571 1131323 cri.go:89] found id: ""
	I0328 01:05:15.950606 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.950619 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:15.950632 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:15.950707 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:15.992557 1131323 cri.go:89] found id: ""
	I0328 01:05:15.992594 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.992605 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:15.992615 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:15.992687 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:16.032417 1131323 cri.go:89] found id: ""
	I0328 01:05:16.032458 1131323 logs.go:276] 0 containers: []
	W0328 01:05:16.032473 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:16.032482 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:16.032559 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:16.071399 1131323 cri.go:89] found id: ""
	I0328 01:05:16.071433 1131323 logs.go:276] 0 containers: []
	W0328 01:05:16.071445 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:16.071459 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:16.071481 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:16.147078 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:16.147113 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:16.147131 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:16.223828 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:16.223870 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:16.269377 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:16.269409 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:16.318545 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:16.318584 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:18.836044 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:18.851138 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:18.851231 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:18.887223 1131323 cri.go:89] found id: ""
	I0328 01:05:18.887260 1131323 logs.go:276] 0 containers: []
	W0328 01:05:18.887273 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:18.887283 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:18.887354 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:18.928652 1131323 cri.go:89] found id: ""
	I0328 01:05:18.928682 1131323 logs.go:276] 0 containers: []
	W0328 01:05:18.928692 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:18.928698 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:18.928756 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:18.968519 1131323 cri.go:89] found id: ""
	I0328 01:05:18.968555 1131323 logs.go:276] 0 containers: []
	W0328 01:05:18.968567 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:18.968575 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:18.968646 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:19.010939 1131323 cri.go:89] found id: ""
	I0328 01:05:19.010977 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.010991 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:19.010999 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:19.011070 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:19.048723 1131323 cri.go:89] found id: ""
	I0328 01:05:19.048748 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.048758 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:19.048769 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:19.048820 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:19.091761 1131323 cri.go:89] found id: ""
	I0328 01:05:19.091794 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.091803 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:19.091810 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:19.091863 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:19.134017 1131323 cri.go:89] found id: ""
	I0328 01:05:19.134049 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.134059 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:19.134065 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:19.134119 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:19.176070 1131323 cri.go:89] found id: ""
	I0328 01:05:19.176106 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.176118 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:19.176131 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:19.176155 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:19.261546 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:19.261584 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:19.261605 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:19.340271 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:19.340314 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:19.383625 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:19.383676 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:19.441635 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:19.441679 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:17.541876 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:20.040841 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:16.872723 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:19.372916 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:19.414384 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:21.912881 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:21.958362 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:21.974427 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:21.974528 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:22.013099 1131323 cri.go:89] found id: ""
	I0328 01:05:22.013139 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.013152 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:22.013160 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:22.013229 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:22.055558 1131323 cri.go:89] found id: ""
	I0328 01:05:22.055594 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.055604 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:22.055611 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:22.055668 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:22.106836 1131323 cri.go:89] found id: ""
	I0328 01:05:22.106870 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.106879 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:22.106886 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:22.106961 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:22.145135 1131323 cri.go:89] found id: ""
	I0328 01:05:22.145175 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.145189 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:22.145197 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:22.145266 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:22.183879 1131323 cri.go:89] found id: ""
	I0328 01:05:22.183909 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.183919 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:22.183926 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:22.183981 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:22.223087 1131323 cri.go:89] found id: ""
	I0328 01:05:22.223115 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.223124 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:22.223131 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:22.223209 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:22.263232 1131323 cri.go:89] found id: ""
	I0328 01:05:22.263262 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.263272 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:22.263279 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:22.263331 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:22.302919 1131323 cri.go:89] found id: ""
	I0328 01:05:22.302954 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.302967 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:22.302980 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:22.302998 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:22.358550 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:22.358596 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:22.374688 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:22.374722 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:22.453584 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:22.453609 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:22.453624 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:22.540983 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:22.541048 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:25.091773 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:25.107412 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:25.107484 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:25.143917 1131323 cri.go:89] found id: ""
	I0328 01:05:25.143944 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.143953 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:25.143960 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:25.144013 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:25.183615 1131323 cri.go:89] found id: ""
	I0328 01:05:25.183650 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.183659 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:25.183666 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:25.183729 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:25.221125 1131323 cri.go:89] found id: ""
	I0328 01:05:25.221158 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.221167 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:25.221174 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:25.221229 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:25.262023 1131323 cri.go:89] found id: ""
	I0328 01:05:25.262056 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.262065 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:25.262072 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:25.262134 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:25.297919 1131323 cri.go:89] found id: ""
	I0328 01:05:25.297948 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.297957 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:25.297964 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:25.298035 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:22.041977 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:24.542416 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:21.872312 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:23.872885 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:23.914459 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:25.916730 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:25.336582 1131323 cri.go:89] found id: ""
	I0328 01:05:25.336610 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.336620 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:25.336627 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:25.336690 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:25.375554 1131323 cri.go:89] found id: ""
	I0328 01:05:25.375589 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.375600 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:25.375609 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:25.375683 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:25.415941 1131323 cri.go:89] found id: ""
	I0328 01:05:25.415973 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.415984 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:25.415996 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:25.416013 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:25.430168 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:25.430196 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:25.507782 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:25.507805 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:25.507862 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:25.588780 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:25.588824 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:25.634958 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:25.634997 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:28.190651 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:28.205714 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:28.205794 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:28.242015 1131323 cri.go:89] found id: ""
	I0328 01:05:28.242056 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.242067 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:28.242077 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:28.242169 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:28.289132 1131323 cri.go:89] found id: ""
	I0328 01:05:28.289169 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.289182 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:28.289189 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:28.289256 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:28.327001 1131323 cri.go:89] found id: ""
	I0328 01:05:28.327031 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.327040 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:28.327052 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:28.327105 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:28.365474 1131323 cri.go:89] found id: ""
	I0328 01:05:28.365507 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.365516 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:28.365523 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:28.365587 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:28.405494 1131323 cri.go:89] found id: ""
	I0328 01:05:28.405553 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.405567 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:28.405576 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:28.405652 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:28.448464 1131323 cri.go:89] found id: ""
	I0328 01:05:28.448502 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.448512 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:28.448521 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:28.448594 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:28.488143 1131323 cri.go:89] found id: ""
	I0328 01:05:28.488172 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.488182 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:28.488189 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:28.488258 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:28.545977 1131323 cri.go:89] found id: ""
	I0328 01:05:28.546012 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.546024 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:28.546036 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:28.546050 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:28.629955 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:28.630001 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:28.670504 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:28.670536 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:28.722021 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:28.722069 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:28.737274 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:28.737310 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:28.824025 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:27.041205 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:29.041342 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:26.372037 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:28.373545 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:30.872569 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:28.414921 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:30.912980 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:31.324497 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:31.339715 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:31.339811 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:31.379017 1131323 cri.go:89] found id: ""
	I0328 01:05:31.379050 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.379062 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:31.379072 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:31.379138 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:31.420024 1131323 cri.go:89] found id: ""
	I0328 01:05:31.420055 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.420065 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:31.420071 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:31.420136 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:31.458732 1131323 cri.go:89] found id: ""
	I0328 01:05:31.458764 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.458773 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:31.458779 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:31.458835 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:31.504249 1131323 cri.go:89] found id: ""
	I0328 01:05:31.504280 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.504292 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:31.504300 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:31.504366 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:31.545284 1131323 cri.go:89] found id: ""
	I0328 01:05:31.545316 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.545324 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:31.545331 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:31.545385 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:31.583402 1131323 cri.go:89] found id: ""
	I0328 01:05:31.583434 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.583444 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:31.583453 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:31.583587 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:31.624411 1131323 cri.go:89] found id: ""
	I0328 01:05:31.624449 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.624462 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:31.624471 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:31.624528 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:31.666103 1131323 cri.go:89] found id: ""
	I0328 01:05:31.666144 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.666158 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:31.666173 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:31.666192 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:31.717595 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:31.717636 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:31.731606 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:31.731637 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:31.803302 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:31.803325 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:31.803339 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:31.885552 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:31.885590 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:34.432446 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:34.448002 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:34.448085 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:34.493207 1131323 cri.go:89] found id: ""
	I0328 01:05:34.493246 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.493259 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:34.493268 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:34.493337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:34.541838 1131323 cri.go:89] found id: ""
	I0328 01:05:34.541871 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.541883 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:34.541891 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:34.541956 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:34.582319 1131323 cri.go:89] found id: ""
	I0328 01:05:34.582357 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.582371 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:34.582380 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:34.582458 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:34.618753 1131323 cri.go:89] found id: ""
	I0328 01:05:34.618788 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.618801 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:34.618810 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:34.618882 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:34.656994 1131323 cri.go:89] found id: ""
	I0328 01:05:34.657027 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.657037 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:34.657043 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:34.657114 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:34.695214 1131323 cri.go:89] found id: ""
	I0328 01:05:34.695252 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.695264 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:34.695271 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:34.695337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:34.733688 1131323 cri.go:89] found id: ""
	I0328 01:05:34.733718 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.733731 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:34.733739 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:34.733808 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:34.771697 1131323 cri.go:89] found id: ""
	I0328 01:05:34.771729 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.771744 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:34.771758 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:34.771776 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:34.828190 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:34.828236 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:34.842741 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:34.842776 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:34.918494 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:34.918525 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:34.918541 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:35.012689 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:35.012747 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:31.042633 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:33.541295 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:35.541588 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:33.371991 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:35.872753 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:33.412886 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:35.914065 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:37.574759 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:37.590014 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:37.590128 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:37.626883 1131323 cri.go:89] found id: ""
	I0328 01:05:37.626914 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.626926 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:37.626935 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:37.627005 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:37.665171 1131323 cri.go:89] found id: ""
	I0328 01:05:37.665202 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.665215 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:37.665225 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:37.665294 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:37.702923 1131323 cri.go:89] found id: ""
	I0328 01:05:37.702963 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.702976 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:37.702984 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:37.703064 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:37.741148 1131323 cri.go:89] found id: ""
	I0328 01:05:37.741182 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.741191 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:37.741199 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:37.741269 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:37.782298 1131323 cri.go:89] found id: ""
	I0328 01:05:37.782331 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.782341 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:37.782348 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:37.782407 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:37.819056 1131323 cri.go:89] found id: ""
	I0328 01:05:37.819110 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.819124 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:37.819134 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:37.819215 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:37.862372 1131323 cri.go:89] found id: ""
	I0328 01:05:37.862414 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.862427 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:37.862436 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:37.862507 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:37.899639 1131323 cri.go:89] found id: ""
	I0328 01:05:37.899675 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.899689 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:37.899703 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:37.899721 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:37.978962 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:37.978990 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:37.979007 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:38.058972 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:38.059015 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:38.102975 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:38.103016 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:38.157994 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:38.158035 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:38.041091 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.041892 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:38.371787 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.373131 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:38.412214 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.415412 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:42.912341 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.673425 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:40.690969 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:40.691041 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:40.735552 1131323 cri.go:89] found id: ""
	I0328 01:05:40.735585 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.735594 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:40.735602 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:40.735669 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:40.816611 1131323 cri.go:89] found id: ""
	I0328 01:05:40.816648 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.816661 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:40.816669 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:40.816725 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:40.864093 1131323 cri.go:89] found id: ""
	I0328 01:05:40.864125 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.864138 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:40.864147 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:40.864218 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:40.908781 1131323 cri.go:89] found id: ""
	I0328 01:05:40.908817 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.908829 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:40.908846 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:40.908914 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:40.950330 1131323 cri.go:89] found id: ""
	I0328 01:05:40.950369 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.950382 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:40.950390 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:40.950481 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:40.989983 1131323 cri.go:89] found id: ""
	I0328 01:05:40.990041 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.990054 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:40.990063 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:40.990136 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:41.042428 1131323 cri.go:89] found id: ""
	I0328 01:05:41.042470 1131323 logs.go:276] 0 containers: []
	W0328 01:05:41.042481 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:41.042489 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:41.042560 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:41.089309 1131323 cri.go:89] found id: ""
	I0328 01:05:41.089342 1131323 logs.go:276] 0 containers: []
	W0328 01:05:41.089353 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:41.089363 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:41.089377 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:41.148502 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:41.148547 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:41.163889 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:41.163918 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:41.242825 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:41.242848 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:41.242861 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:41.322658 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:41.322702 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:43.865117 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:43.880642 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:43.880729 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:43.919519 1131323 cri.go:89] found id: ""
	I0328 01:05:43.919550 1131323 logs.go:276] 0 containers: []
	W0328 01:05:43.919559 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:43.919565 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:43.919622 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:43.957906 1131323 cri.go:89] found id: ""
	I0328 01:05:43.957936 1131323 logs.go:276] 0 containers: []
	W0328 01:05:43.957945 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:43.957951 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:43.958008 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:44.001448 1131323 cri.go:89] found id: ""
	I0328 01:05:44.001486 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.001497 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:44.001505 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:44.001573 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:44.039767 1131323 cri.go:89] found id: ""
	I0328 01:05:44.039801 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.039812 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:44.039818 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:44.039871 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:44.079441 1131323 cri.go:89] found id: ""
	I0328 01:05:44.079470 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.079480 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:44.079486 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:44.079541 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:44.116534 1131323 cri.go:89] found id: ""
	I0328 01:05:44.116584 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.116596 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:44.116604 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:44.116670 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:44.163335 1131323 cri.go:89] found id: ""
	I0328 01:05:44.163369 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.163381 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:44.163389 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:44.163457 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:44.201367 1131323 cri.go:89] found id: ""
	I0328 01:05:44.201403 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.201413 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:44.201424 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:44.201442 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:44.257485 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:44.257529 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:44.272489 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:44.272534 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:44.354442 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:44.354477 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:44.354498 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:44.436219 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:44.436262 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:42.044443 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:44.541648 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:42.872072 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:44.873552 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:44.913292 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:47.412489 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:46.982131 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:46.998022 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:46.998100 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:47.037167 1131323 cri.go:89] found id: ""
	I0328 01:05:47.037205 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.037217 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:47.037226 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:47.037295 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:47.076175 1131323 cri.go:89] found id: ""
	I0328 01:05:47.076213 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.076226 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:47.076235 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:47.076306 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:47.115193 1131323 cri.go:89] found id: ""
	I0328 01:05:47.115227 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.115237 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:47.115244 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:47.115297 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:47.154942 1131323 cri.go:89] found id: ""
	I0328 01:05:47.154976 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.154989 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:47.154998 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:47.155069 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:47.196571 1131323 cri.go:89] found id: ""
	I0328 01:05:47.196609 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.196622 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:47.196631 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:47.196707 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:47.237572 1131323 cri.go:89] found id: ""
	I0328 01:05:47.237616 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.237625 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:47.237633 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:47.237691 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:47.275208 1131323 cri.go:89] found id: ""
	I0328 01:05:47.275254 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.275265 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:47.275272 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:47.275329 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:47.313515 1131323 cri.go:89] found id: ""
	I0328 01:05:47.313555 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.313568 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:47.313582 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:47.313598 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:47.368993 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:47.369033 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:47.383063 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:47.383097 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:47.460239 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:47.460278 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:47.460298 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:47.538552 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:47.538594 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:50.084960 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:50.101764 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:50.101859 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:50.141457 1131323 cri.go:89] found id: ""
	I0328 01:05:50.141488 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.141497 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:50.141504 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:50.141557 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:50.178184 1131323 cri.go:89] found id: ""
	I0328 01:05:50.178220 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.178254 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:50.178263 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:50.178358 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:50.217908 1131323 cri.go:89] found id: ""
	I0328 01:05:50.217946 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.217959 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:50.217966 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:50.218027 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:50.256029 1131323 cri.go:89] found id: ""
	I0328 01:05:50.256058 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.256067 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:50.256074 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:50.256130 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:50.295054 1131323 cri.go:89] found id: ""
	I0328 01:05:50.295087 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.295100 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:50.295106 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:50.295165 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:47.042338 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:49.542501 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:47.372867 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:49.872948 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:49.913873 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:52.412600 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:50.334695 1131323 cri.go:89] found id: ""
	I0328 01:05:50.336588 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.336605 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:50.336614 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:50.336697 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:50.375968 1131323 cri.go:89] found id: ""
	I0328 01:05:50.376003 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.376013 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:50.376021 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:50.376091 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:50.417146 1131323 cri.go:89] found id: ""
	I0328 01:05:50.417175 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.417184 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:50.417194 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:50.417207 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:50.474090 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:50.474131 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:50.489006 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:50.489040 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:50.566220 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:50.566268 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:50.566286 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:50.645593 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:50.645653 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:53.190872 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:53.205223 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:53.205320 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:53.242396 1131323 cri.go:89] found id: ""
	I0328 01:05:53.242433 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.242445 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:53.242455 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:53.242524 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:53.281237 1131323 cri.go:89] found id: ""
	I0328 01:05:53.281275 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.281288 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:53.281297 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:53.281357 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:53.321239 1131323 cri.go:89] found id: ""
	I0328 01:05:53.321268 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.321287 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:53.321296 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:53.321358 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:53.359240 1131323 cri.go:89] found id: ""
	I0328 01:05:53.359269 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.359278 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:53.359284 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:53.359337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:53.396973 1131323 cri.go:89] found id: ""
	I0328 01:05:53.397008 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.397021 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:53.397030 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:53.397100 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:53.438368 1131323 cri.go:89] found id: ""
	I0328 01:05:53.438400 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.438408 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:53.438415 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:53.438477 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:53.474679 1131323 cri.go:89] found id: ""
	I0328 01:05:53.474708 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.474732 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:53.474742 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:53.474799 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:53.512509 1131323 cri.go:89] found id: ""
	I0328 01:05:53.512547 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.512560 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:53.512579 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:53.512599 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:53.569536 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:53.569580 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:53.584977 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:53.585016 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:53.657865 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:53.657895 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:53.657908 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:53.733158 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:53.733203 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:52.041508 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:54.541663 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:52.373317 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:54.872090 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:54.913464 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:57.413256 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:56.278693 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:56.291870 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:56.291949 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:56.332909 1131323 cri.go:89] found id: ""
	I0328 01:05:56.332943 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.332957 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:56.332965 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:56.333038 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:56.370608 1131323 cri.go:89] found id: ""
	I0328 01:05:56.370638 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.370649 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:56.370657 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:56.370721 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:56.408031 1131323 cri.go:89] found id: ""
	I0328 01:05:56.408068 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.408081 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:56.408100 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:56.408170 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:56.445057 1131323 cri.go:89] found id: ""
	I0328 01:05:56.445092 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.445105 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:56.445113 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:56.445177 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:56.486868 1131323 cri.go:89] found id: ""
	I0328 01:05:56.486898 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.486908 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:56.486914 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:56.486969 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:56.533594 1131323 cri.go:89] found id: ""
	I0328 01:05:56.533622 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.533632 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:56.533638 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:56.533702 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:56.569200 1131323 cri.go:89] found id: ""
	I0328 01:05:56.569237 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.569250 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:56.569258 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:56.569335 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:56.604919 1131323 cri.go:89] found id: ""
	I0328 01:05:56.604955 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.604968 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:56.604982 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:56.605011 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:56.654473 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:56.654513 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:56.671309 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:56.671339 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:56.739516 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:56.739543 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:56.739559 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:56.817445 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:56.817495 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:59.361711 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:59.375672 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:59.375750 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:59.414329 1131323 cri.go:89] found id: ""
	I0328 01:05:59.414360 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.414371 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:59.414379 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:59.414443 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:59.454813 1131323 cri.go:89] found id: ""
	I0328 01:05:59.454846 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.454855 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:59.454862 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:59.454917 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:59.492890 1131323 cri.go:89] found id: ""
	I0328 01:05:59.492924 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.492936 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:59.492946 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:59.493043 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:59.529412 1131323 cri.go:89] found id: ""
	I0328 01:05:59.529443 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.529454 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:59.529464 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:59.529521 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:59.568620 1131323 cri.go:89] found id: ""
	I0328 01:05:59.568655 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.568664 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:59.568671 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:59.568731 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:59.605826 1131323 cri.go:89] found id: ""
	I0328 01:05:59.605861 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.605874 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:59.605883 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:59.605955 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:59.645799 1131323 cri.go:89] found id: ""
	I0328 01:05:59.645833 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.645847 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:59.645856 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:59.645931 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:59.683866 1131323 cri.go:89] found id: ""
	I0328 01:05:59.683903 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.683916 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:59.683929 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:59.683953 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:59.726678 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:59.726711 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:59.779910 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:59.779954 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:59.795743 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:59.795774 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:59.875137 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:59.875162 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:59.875174 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:56.542345 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:58.542599 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:00.543094 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:57.372258 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:59.872483 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:59.912150 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:01.913694 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:02.455212 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:02.468850 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:02.468945 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:02.506347 1131323 cri.go:89] found id: ""
	I0328 01:06:02.506385 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.506397 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:02.506406 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:02.506484 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:02.546056 1131323 cri.go:89] found id: ""
	I0328 01:06:02.546085 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.546096 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:02.546103 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:02.546173 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:02.585343 1131323 cri.go:89] found id: ""
	I0328 01:06:02.585385 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.585398 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:02.585407 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:02.585563 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:02.625380 1131323 cri.go:89] found id: ""
	I0328 01:06:02.625414 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.625423 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:02.625429 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:02.625486 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:02.664653 1131323 cri.go:89] found id: ""
	I0328 01:06:02.664687 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.664701 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:02.664708 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:02.664764 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:02.704468 1131323 cri.go:89] found id: ""
	I0328 01:06:02.704498 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.704511 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:02.704519 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:02.704595 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:02.740969 1131323 cri.go:89] found id: ""
	I0328 01:06:02.740997 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.741007 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:02.741014 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:02.741102 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:02.782113 1131323 cri.go:89] found id: ""
	I0328 01:06:02.782150 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.782163 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:02.782185 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:02.782203 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:02.836804 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:02.836848 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:02.852266 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:02.852299 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:02.929441 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:02.929467 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:02.929484 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:03.008114 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:03.008156 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:03.041919 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:05.542209 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:02.372332 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:04.871689 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:04.413251 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:06.912348 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:05.554291 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:05.570208 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:05.570304 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:05.610887 1131323 cri.go:89] found id: ""
	I0328 01:06:05.610916 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.610926 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:05.610932 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:05.610991 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:05.651561 1131323 cri.go:89] found id: ""
	I0328 01:06:05.651600 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.651610 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:05.651616 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:05.651681 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:05.690801 1131323 cri.go:89] found id: ""
	I0328 01:06:05.690830 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.690843 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:05.690851 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:05.690920 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:05.729098 1131323 cri.go:89] found id: ""
	I0328 01:06:05.729136 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.729146 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:05.729153 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:05.729225 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:05.774461 1131323 cri.go:89] found id: ""
	I0328 01:06:05.774499 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.774520 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:05.774530 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:05.774602 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:05.812135 1131323 cri.go:89] found id: ""
	I0328 01:06:05.812166 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.812180 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:05.812188 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:05.812255 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:05.847744 1131323 cri.go:89] found id: ""
	I0328 01:06:05.847775 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.847786 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:05.847796 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:05.847863 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:05.885600 1131323 cri.go:89] found id: ""
	I0328 01:06:05.885641 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.885656 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:05.885669 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:05.885684 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:05.963837 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:05.963879 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:06.007342 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:06.007381 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:06.062798 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:06.062843 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:06.077547 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:06.077599 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:06.148373 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:08.648791 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:08.664082 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:08.664154 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:08.701746 1131323 cri.go:89] found id: ""
	I0328 01:06:08.701776 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.701789 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:08.701797 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:08.701855 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:08.739035 1131323 cri.go:89] found id: ""
	I0328 01:06:08.739066 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.739076 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:08.739083 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:08.739136 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:08.776128 1131323 cri.go:89] found id: ""
	I0328 01:06:08.776166 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.776180 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:08.776189 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:08.776255 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:08.816136 1131323 cri.go:89] found id: ""
	I0328 01:06:08.816172 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.816187 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:08.816196 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:08.816271 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:08.855675 1131323 cri.go:89] found id: ""
	I0328 01:06:08.855709 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.855722 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:08.855730 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:08.855802 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:08.893161 1131323 cri.go:89] found id: ""
	I0328 01:06:08.893198 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.893212 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:08.893221 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:08.893297 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:08.935498 1131323 cri.go:89] found id: ""
	I0328 01:06:08.935527 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.935540 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:08.935548 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:08.935622 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:08.971622 1131323 cri.go:89] found id: ""
	I0328 01:06:08.971657 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.971668 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:08.971679 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:08.971696 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:09.039975 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:09.040036 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:09.057877 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:09.057920 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:09.130093 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:09.130119 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:09.130135 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:09.217177 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:09.217228 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:08.040921 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:10.042895 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:06.872367 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:08.873187 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:08.914313 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:11.412330 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:11.762393 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:11.776356 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:11.776424 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:11.811982 1131323 cri.go:89] found id: ""
	I0328 01:06:11.812017 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.812030 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:11.812038 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:11.812103 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:11.849789 1131323 cri.go:89] found id: ""
	I0328 01:06:11.849817 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.849826 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:11.849833 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:11.849884 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:11.890455 1131323 cri.go:89] found id: ""
	I0328 01:06:11.890488 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.890497 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:11.890503 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:11.890559 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:11.929047 1131323 cri.go:89] found id: ""
	I0328 01:06:11.929093 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.929102 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:11.929108 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:11.929164 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:11.969536 1131323 cri.go:89] found id: ""
	I0328 01:06:11.969566 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.969576 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:11.969583 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:11.969641 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:12.008779 1131323 cri.go:89] found id: ""
	I0328 01:06:12.008811 1131323 logs.go:276] 0 containers: []
	W0328 01:06:12.008821 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:12.008828 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:12.008890 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:12.044061 1131323 cri.go:89] found id: ""
	I0328 01:06:12.044091 1131323 logs.go:276] 0 containers: []
	W0328 01:06:12.044104 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:12.044112 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:12.044176 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:12.082307 1131323 cri.go:89] found id: ""
	I0328 01:06:12.082336 1131323 logs.go:276] 0 containers: []
	W0328 01:06:12.082346 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:12.082357 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:12.082369 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:12.133044 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:12.133091 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:12.148584 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:12.148624 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:12.218799 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:12.218834 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:12.218852 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:12.295580 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:12.295623 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:14.842815 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:14.856385 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:14.856456 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:14.895351 1131323 cri.go:89] found id: ""
	I0328 01:06:14.895409 1131323 logs.go:276] 0 containers: []
	W0328 01:06:14.895418 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:14.895424 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:14.895476 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:14.930333 1131323 cri.go:89] found id: ""
	I0328 01:06:14.930366 1131323 logs.go:276] 0 containers: []
	W0328 01:06:14.930380 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:14.930389 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:14.930461 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:14.968701 1131323 cri.go:89] found id: ""
	I0328 01:06:14.968742 1131323 logs.go:276] 0 containers: []
	W0328 01:06:14.968754 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:14.968767 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:14.968867 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:15.004580 1131323 cri.go:89] found id: ""
	I0328 01:06:15.004613 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.004626 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:15.004634 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:15.004700 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:15.046702 1131323 cri.go:89] found id: ""
	I0328 01:06:15.046726 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.046736 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:15.046742 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:15.046795 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:15.088693 1131323 cri.go:89] found id: ""
	I0328 01:06:15.088725 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.088734 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:15.088741 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:15.088797 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:15.130293 1131323 cri.go:89] found id: ""
	I0328 01:06:15.130324 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.130333 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:15.130339 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:15.130394 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:15.172381 1131323 cri.go:89] found id: ""
	I0328 01:06:15.172408 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.172417 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:15.172427 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:15.172440 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:15.225631 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:15.225674 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:15.241251 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:15.241294 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:15.319701 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:15.319731 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:15.319747 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:12.540755 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:14.541618 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:11.371580 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:13.371640 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:15.373147 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:13.911792 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:15.912479 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:17.913926 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:15.406813 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:15.406853 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:17.993893 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:18.007755 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:18.007843 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:18.047750 1131323 cri.go:89] found id: ""
	I0328 01:06:18.047777 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.047786 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:18.047797 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:18.047855 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:18.088264 1131323 cri.go:89] found id: ""
	I0328 01:06:18.088291 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.088303 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:18.088311 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:18.088369 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:18.127485 1131323 cri.go:89] found id: ""
	I0328 01:06:18.127514 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.127523 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:18.127530 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:18.127581 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:18.167462 1131323 cri.go:89] found id: ""
	I0328 01:06:18.167496 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.167510 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:18.167516 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:18.167571 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:18.209536 1131323 cri.go:89] found id: ""
	I0328 01:06:18.209571 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.209583 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:18.209591 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:18.209662 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:18.247565 1131323 cri.go:89] found id: ""
	I0328 01:06:18.247601 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.247614 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:18.247623 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:18.247701 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:18.288123 1131323 cri.go:89] found id: ""
	I0328 01:06:18.288162 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.288172 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:18.288179 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:18.288242 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:18.328132 1131323 cri.go:89] found id: ""
	I0328 01:06:18.328161 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.328170 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:18.328181 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:18.328193 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:18.403245 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:18.403287 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:18.403305 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:18.483446 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:18.483500 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:18.527357 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:18.527392 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:18.588402 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:18.588463 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:16.542137 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:18.542554 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:20.546396 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:17.872147 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:20.373000 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:20.412369 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:22.412661 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:21.103566 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:21.117538 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:21.117616 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:21.174215 1131323 cri.go:89] found id: ""
	I0328 01:06:21.174270 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.174284 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:21.174293 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:21.174364 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:21.238666 1131323 cri.go:89] found id: ""
	I0328 01:06:21.238707 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.238722 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:21.238730 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:21.238803 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:21.303510 1131323 cri.go:89] found id: ""
	I0328 01:06:21.303543 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.303553 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:21.303559 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:21.303614 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:21.345823 1131323 cri.go:89] found id: ""
	I0328 01:06:21.345853 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.345862 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:21.345870 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:21.345940 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:21.386205 1131323 cri.go:89] found id: ""
	I0328 01:06:21.386248 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.386261 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:21.386269 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:21.386335 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:21.427424 1131323 cri.go:89] found id: ""
	I0328 01:06:21.427457 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.427470 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:21.427478 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:21.427546 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:21.465054 1131323 cri.go:89] found id: ""
	I0328 01:06:21.465087 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.465099 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:21.465107 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:21.465177 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:21.507197 1131323 cri.go:89] found id: ""
	I0328 01:06:21.507229 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.507238 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:21.507248 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:21.507263 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:21.586657 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:21.586709 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:21.633702 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:21.633739 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:21.688960 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:21.688999 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:21.704675 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:21.704714 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:21.781612 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:24.282521 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:24.297096 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:24.297185 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:24.338745 1131323 cri.go:89] found id: ""
	I0328 01:06:24.338780 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.338793 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:24.338802 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:24.338872 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:24.375499 1131323 cri.go:89] found id: ""
	I0328 01:06:24.375528 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.375540 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:24.375548 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:24.375616 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:24.410939 1131323 cri.go:89] found id: ""
	I0328 01:06:24.410966 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.410978 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:24.410986 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:24.411042 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:24.455316 1131323 cri.go:89] found id: ""
	I0328 01:06:24.455345 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.455354 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:24.455360 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:24.455427 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:24.493177 1131323 cri.go:89] found id: ""
	I0328 01:06:24.493206 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.493219 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:24.493228 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:24.493300 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:24.533612 1131323 cri.go:89] found id: ""
	I0328 01:06:24.533648 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.533659 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:24.533668 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:24.533743 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:24.573960 1131323 cri.go:89] found id: ""
	I0328 01:06:24.573998 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.574014 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:24.574020 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:24.574074 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:24.617282 1131323 cri.go:89] found id: ""
	I0328 01:06:24.617319 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.617333 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:24.617346 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:24.617364 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:24.691660 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:24.691688 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:24.691707 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:24.773138 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:24.773180 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:24.820408 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:24.820440 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:24.875901 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:24.875940 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:23.041030 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:25.041064 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:22.874513 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:25.378939 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:24.413732 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:26.912433 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:27.392663 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:27.407958 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:27.408046 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:27.446750 1131323 cri.go:89] found id: ""
	I0328 01:06:27.446782 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.446792 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:27.446799 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:27.446872 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:27.489199 1131323 cri.go:89] found id: ""
	I0328 01:06:27.489236 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.489249 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:27.489258 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:27.489316 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:27.525754 1131323 cri.go:89] found id: ""
	I0328 01:06:27.525787 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.525796 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:27.525803 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:27.525861 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:27.560817 1131323 cri.go:89] found id: ""
	I0328 01:06:27.560849 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.560858 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:27.560866 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:27.560930 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:27.597706 1131323 cri.go:89] found id: ""
	I0328 01:06:27.597736 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.597744 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:27.597750 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:27.597821 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:27.635170 1131323 cri.go:89] found id: ""
	I0328 01:06:27.635211 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.635223 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:27.635232 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:27.635299 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:27.672043 1131323 cri.go:89] found id: ""
	I0328 01:06:27.672079 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.672091 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:27.672099 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:27.672166 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:27.711401 1131323 cri.go:89] found id: ""
	I0328 01:06:27.711435 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.711448 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:27.711468 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:27.711488 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:27.755172 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:27.755211 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:27.807588 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:27.807632 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:27.823557 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:27.823589 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:27.905292 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:27.905316 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:27.905329 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:27.041105 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:29.041205 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:27.873797 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:30.374214 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:29.412378 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:31.413211 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:30.491565 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:30.505601 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:30.505667 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:30.541894 1131323 cri.go:89] found id: ""
	I0328 01:06:30.541929 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.541940 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:30.541949 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:30.542029 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:30.581484 1131323 cri.go:89] found id: ""
	I0328 01:06:30.581514 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.581532 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:30.581538 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:30.581613 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:30.624788 1131323 cri.go:89] found id: ""
	I0328 01:06:30.624830 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.624842 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:30.624850 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:30.624922 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:30.664373 1131323 cri.go:89] found id: ""
	I0328 01:06:30.664403 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.664413 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:30.664420 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:30.664489 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:30.702885 1131323 cri.go:89] found id: ""
	I0328 01:06:30.702917 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.702928 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:30.702934 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:30.703006 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:30.748170 1131323 cri.go:89] found id: ""
	I0328 01:06:30.748205 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.748217 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:30.748226 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:30.748316 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:30.785218 1131323 cri.go:89] found id: ""
	I0328 01:06:30.785255 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.785268 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:30.785276 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:30.785343 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:30.825529 1131323 cri.go:89] found id: ""
	I0328 01:06:30.825555 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.825565 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:30.825575 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:30.825589 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:30.881353 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:30.881391 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:30.896682 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:30.896718 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:30.973356 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:30.973386 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:30.973402 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:31.049014 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:31.049047 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:33.594365 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:33.609372 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:33.609460 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:33.648699 1131323 cri.go:89] found id: ""
	I0328 01:06:33.648728 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.648749 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:33.648757 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:33.648829 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:33.686707 1131323 cri.go:89] found id: ""
	I0328 01:06:33.686744 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.686758 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:33.686767 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:33.686832 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:33.723091 1131323 cri.go:89] found id: ""
	I0328 01:06:33.723121 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.723130 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:33.723136 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:33.723187 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:33.763439 1131323 cri.go:89] found id: ""
	I0328 01:06:33.763471 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.763481 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:33.763488 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:33.763544 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:33.812236 1131323 cri.go:89] found id: ""
	I0328 01:06:33.812271 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.812285 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:33.812294 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:33.812365 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:33.849421 1131323 cri.go:89] found id: ""
	I0328 01:06:33.849454 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.849465 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:33.849473 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:33.849528 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:33.888020 1131323 cri.go:89] found id: ""
	I0328 01:06:33.888051 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.888065 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:33.888078 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:33.888145 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:33.925952 1131323 cri.go:89] found id: ""
	I0328 01:06:33.925990 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.926003 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:33.926016 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:33.926034 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:33.976695 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:33.976734 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:33.991708 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:33.991752 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:34.068244 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:34.068276 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:34.068293 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:34.155843 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:34.155885 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:31.041375 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:33.041526 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:35.541169 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:32.872009 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:34.873043 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:33.913191 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:36.413213 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:36.697480 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:36.712322 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:36.712420 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:36.749541 1131323 cri.go:89] found id: ""
	I0328 01:06:36.749570 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.749579 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:36.749587 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:36.749655 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:36.788226 1131323 cri.go:89] found id: ""
	I0328 01:06:36.788255 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.788264 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:36.788270 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:36.788323 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:36.823824 1131323 cri.go:89] found id: ""
	I0328 01:06:36.823856 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.823866 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:36.823872 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:36.823927 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:36.869331 1131323 cri.go:89] found id: ""
	I0328 01:06:36.869362 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.869371 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:36.869378 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:36.869473 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:36.907918 1131323 cri.go:89] found id: ""
	I0328 01:06:36.907950 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.907960 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:36.907966 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:36.908028 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:36.947708 1131323 cri.go:89] found id: ""
	I0328 01:06:36.947738 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.947749 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:36.947757 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:36.947824 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:36.986200 1131323 cri.go:89] found id: ""
	I0328 01:06:36.986251 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.986266 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:36.986275 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:36.986350 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:37.026670 1131323 cri.go:89] found id: ""
	I0328 01:06:37.026698 1131323 logs.go:276] 0 containers: []
	W0328 01:06:37.026708 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:37.026718 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:37.026732 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:37.079891 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:37.079933 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:37.094347 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:37.094378 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:37.168653 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:37.168681 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:37.168695 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:37.247909 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:37.247949 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:39.791285 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:39.807921 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:39.808000 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:39.851460 1131323 cri.go:89] found id: ""
	I0328 01:06:39.851499 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.851512 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:39.851520 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:39.851593 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:39.889506 1131323 cri.go:89] found id: ""
	I0328 01:06:39.889541 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.889554 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:39.889564 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:39.889632 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:39.930291 1131323 cri.go:89] found id: ""
	I0328 01:06:39.930321 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.930331 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:39.930337 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:39.930400 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:39.965121 1131323 cri.go:89] found id: ""
	I0328 01:06:39.965160 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.965174 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:39.965183 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:39.965252 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:40.003217 1131323 cri.go:89] found id: ""
	I0328 01:06:40.003248 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.003258 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:40.003264 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:40.003319 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:40.042702 1131323 cri.go:89] found id: ""
	I0328 01:06:40.042737 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.042749 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:40.042759 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:40.042826 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:40.079733 1131323 cri.go:89] found id: ""
	I0328 01:06:40.079769 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.079780 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:40.079788 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:40.079852 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:40.117066 1131323 cri.go:89] found id: ""
	I0328 01:06:40.117098 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.117107 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:40.117117 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:40.117130 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:40.158589 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:40.158623 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:40.210997 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:40.211049 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:40.225419 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:40.225453 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:40.305356 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:40.305385 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:40.305401 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:37.541534 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:39.541905 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:36.874220 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:39.373763 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:38.413719 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:40.912939 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:42.913528 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:42.896394 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:42.912285 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:42.912355 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:42.949381 1131323 cri.go:89] found id: ""
	I0328 01:06:42.949411 1131323 logs.go:276] 0 containers: []
	W0328 01:06:42.949420 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:42.949427 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:42.949496 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:42.985325 1131323 cri.go:89] found id: ""
	I0328 01:06:42.985358 1131323 logs.go:276] 0 containers: []
	W0328 01:06:42.985371 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:42.985388 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:42.985456 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:43.023570 1131323 cri.go:89] found id: ""
	I0328 01:06:43.023616 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.023630 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:43.023638 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:43.023714 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:43.062995 1131323 cri.go:89] found id: ""
	I0328 01:06:43.063025 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.063036 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:43.063042 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:43.063111 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:43.101666 1131323 cri.go:89] found id: ""
	I0328 01:06:43.101704 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.101713 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:43.101720 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:43.101789 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:43.150713 1131323 cri.go:89] found id: ""
	I0328 01:06:43.150745 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.150757 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:43.150765 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:43.150830 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:43.193449 1131323 cri.go:89] found id: ""
	I0328 01:06:43.193479 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.193487 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:43.193495 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:43.193559 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:43.237641 1131323 cri.go:89] found id: ""
	I0328 01:06:43.237673 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.237682 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:43.237698 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:43.237714 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:43.287282 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:43.287320 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:43.303307 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:43.303343 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:43.383597 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:43.383619 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:43.383632 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:43.467874 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:43.467914 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
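
Note: the "container status" step above is deliberately runtime-agnostic: it resolves crictl via which (falling back to the bare name) and, if that invocation fails, lists containers with the docker CLI instead. The same fallback written out as a sketch (assuming either crictl or docker is present on the node):

    # prefer the CRI-facing tool; fall back to docker if crictl is missing or errors out
    if ! sudo crictl ps -a; then
        sudo docker ps -a
    fi
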
	I0328 01:06:42.041406 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:44.540550 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:41.874286 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:44.372393 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:45.410973 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:47.412852 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:46.011081 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:46.025731 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:46.025824 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:46.064336 1131323 cri.go:89] found id: ""
	I0328 01:06:46.064371 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.064385 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:46.064394 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:46.064451 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:46.104493 1131323 cri.go:89] found id: ""
	I0328 01:06:46.104530 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.104550 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:46.104559 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:46.104636 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:46.147546 1131323 cri.go:89] found id: ""
	I0328 01:06:46.147582 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.147594 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:46.147602 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:46.147656 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:46.186162 1131323 cri.go:89] found id: ""
	I0328 01:06:46.186197 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.186207 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:46.186213 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:46.186296 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:46.230412 1131323 cri.go:89] found id: ""
	I0328 01:06:46.230450 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.230464 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:46.230473 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:46.230552 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:46.266000 1131323 cri.go:89] found id: ""
	I0328 01:06:46.266037 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.266050 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:46.266059 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:46.266126 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:46.301031 1131323 cri.go:89] found id: ""
	I0328 01:06:46.301065 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.301077 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:46.301084 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:46.301155 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:46.339222 1131323 cri.go:89] found id: ""
	I0328 01:06:46.339248 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.339258 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:46.339271 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:46.339290 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:46.352558 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:46.352595 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:46.427283 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:46.427308 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:46.427325 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:46.512134 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:46.512178 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:46.558276 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:46.558307 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:49.113455 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:49.127554 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:49.127645 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:49.169380 1131323 cri.go:89] found id: ""
	I0328 01:06:49.169421 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.169435 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:49.169444 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:49.169511 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:49.204540 1131323 cri.go:89] found id: ""
	I0328 01:06:49.204568 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.204579 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:49.204596 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:49.204664 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:49.243074 1131323 cri.go:89] found id: ""
	I0328 01:06:49.243102 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.243112 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:49.243119 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:49.243170 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:49.281264 1131323 cri.go:89] found id: ""
	I0328 01:06:49.281301 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.281314 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:49.281322 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:49.281391 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:49.320473 1131323 cri.go:89] found id: ""
	I0328 01:06:49.320505 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.320514 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:49.320521 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:49.320592 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:49.357715 1131323 cri.go:89] found id: ""
	I0328 01:06:49.357749 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.357759 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:49.357766 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:49.357823 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:49.398427 1131323 cri.go:89] found id: ""
	I0328 01:06:49.398464 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.398477 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:49.398498 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:49.398576 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:49.439921 1131323 cri.go:89] found id: ""
	I0328 01:06:49.439956 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.439969 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:49.439982 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:49.440003 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:49.557260 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:49.557289 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:49.557312 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:49.640105 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:49.640169 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:49.683153 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:49.683185 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:49.737420 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:49.737463 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:46.541377 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:49.041761 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:46.374869 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:48.875897 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:49.912535 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:51.912893 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:52.253208 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:52.268572 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:52.268649 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:52.305136 1131323 cri.go:89] found id: ""
	I0328 01:06:52.305180 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.305193 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:52.305202 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:52.305273 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:52.344774 1131323 cri.go:89] found id: ""
	I0328 01:06:52.344806 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.344816 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:52.344823 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:52.344885 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:52.382127 1131323 cri.go:89] found id: ""
	I0328 01:06:52.382174 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.382185 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:52.382200 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:52.382280 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:52.421340 1131323 cri.go:89] found id: ""
	I0328 01:06:52.421368 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.421377 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:52.421383 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:52.421433 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:52.460046 1131323 cri.go:89] found id: ""
	I0328 01:06:52.460084 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.460100 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:52.460107 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:52.460164 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:52.500067 1131323 cri.go:89] found id: ""
	I0328 01:06:52.500094 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.500102 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:52.500109 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:52.500171 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:52.537614 1131323 cri.go:89] found id: ""
	I0328 01:06:52.537646 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.537671 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:52.537680 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:52.537745 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:52.577362 1131323 cri.go:89] found id: ""
	I0328 01:06:52.577392 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.577402 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:52.577417 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:52.577434 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:52.633638 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:52.633689 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:52.650762 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:52.650796 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:52.729436 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:52.729470 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:52.729484 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:52.818193 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:52.818248 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:51.540541 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:53.541340 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:55.542165 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:51.376916 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:53.872313 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:55.873335 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:54.411986 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:56.412892 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:55.362950 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:55.378461 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:55.378577 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:55.419968 1131323 cri.go:89] found id: ""
	I0328 01:06:55.419995 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.420005 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:55.420010 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:55.420072 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:55.464308 1131323 cri.go:89] found id: ""
	I0328 01:06:55.464341 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.464350 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:55.464357 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:55.464421 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:55.523059 1131323 cri.go:89] found id: ""
	I0328 01:06:55.523092 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.523106 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:55.523114 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:55.523186 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:55.570957 1131323 cri.go:89] found id: ""
	I0328 01:06:55.570990 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.571004 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:55.571013 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:55.571077 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:55.606712 1131323 cri.go:89] found id: ""
	I0328 01:06:55.606739 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.606749 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:55.606755 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:55.606817 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:55.646445 1131323 cri.go:89] found id: ""
	I0328 01:06:55.646477 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.646486 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:55.646493 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:55.646548 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:55.685176 1131323 cri.go:89] found id: ""
	I0328 01:06:55.685208 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.685217 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:55.685225 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:55.685289 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:55.722948 1131323 cri.go:89] found id: ""
	I0328 01:06:55.722984 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.722995 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:55.723006 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:55.723022 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:55.797332 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:55.797368 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:55.797385 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:55.877648 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:55.877688 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:55.918966 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:55.918997 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:55.971226 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:55.971272 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:58.488464 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:58.504999 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:58.505088 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:58.549290 1131323 cri.go:89] found id: ""
	I0328 01:06:58.549325 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.549338 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:58.549347 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:58.549414 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:58.589222 1131323 cri.go:89] found id: ""
	I0328 01:06:58.589252 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.589261 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:58.589271 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:58.589337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:58.626470 1131323 cri.go:89] found id: ""
	I0328 01:06:58.626499 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.626508 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:58.626514 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:58.626578 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:58.671634 1131323 cri.go:89] found id: ""
	I0328 01:06:58.671663 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.671674 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:58.671683 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:58.671744 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:58.707335 1131323 cri.go:89] found id: ""
	I0328 01:06:58.707370 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.707381 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:58.707390 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:58.707459 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:58.745635 1131323 cri.go:89] found id: ""
	I0328 01:06:58.745666 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.745679 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:58.745687 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:58.745752 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:58.792172 1131323 cri.go:89] found id: ""
	I0328 01:06:58.792205 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.792216 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:58.792225 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:58.792287 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:58.840027 1131323 cri.go:89] found id: ""
	I0328 01:06:58.840063 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.840075 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:58.840089 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:58.840108 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:58.921964 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:58.921988 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:58.922003 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:59.016935 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:59.016980 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:59.065747 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:59.065788 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:59.119189 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:59.119231 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:58.042362 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:00.544351 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:57.875649 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:00.371953 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:58.406154 1130949 pod_ready.go:81] duration metric: took 4m0.000981669s for pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace to be "Ready" ...
	E0328 01:06:58.406192 1130949 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0328 01:06:58.406218 1130949 pod_ready.go:38] duration metric: took 4m11.713667334s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:06:58.406275 1130949 kubeadm.go:591] duration metric: took 4m19.018883002s to restartPrimaryControlPlane
	W0328 01:06:58.406372 1130949 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:06:58.406432 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
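
Note: this is the point where the 1130949 run abandons the control-plane restart: the extra wait for system pods hits its 4m0s ceiling on metrics-server-57f55c9bc5-swsxp, so minikube wipes the cluster with kubeadm reset and re-runs kubeadm init (visible further down). The equivalent wait done by hand, once a working kubeconfig is available (the k8s-app=metrics-server label selector is an assumption based on the pod name; minikube itself polls the API directly rather than shelling out):

    kubectl -n kube-system wait pod \
      -l k8s-app=metrics-server \
      --for=condition=Ready --timeout=4m
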
	I0328 01:07:01.637081 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:07:01.652557 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:07:01.652634 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:07:01.691795 1131323 cri.go:89] found id: ""
	I0328 01:07:01.691832 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.691846 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:07:01.691854 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:07:01.691927 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:07:01.732815 1131323 cri.go:89] found id: ""
	I0328 01:07:01.732850 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.732861 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:07:01.732868 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:07:01.732938 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:07:01.776370 1131323 cri.go:89] found id: ""
	I0328 01:07:01.776408 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.776422 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:07:01.776431 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:07:01.776501 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:07:01.821260 1131323 cri.go:89] found id: ""
	I0328 01:07:01.821290 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.821301 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:07:01.821308 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:07:01.821377 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:07:01.860666 1131323 cri.go:89] found id: ""
	I0328 01:07:01.860696 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.860708 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:07:01.860719 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:07:01.860787 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:07:01.898255 1131323 cri.go:89] found id: ""
	I0328 01:07:01.898291 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.898304 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:07:01.898314 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:07:01.898383 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:07:01.937770 1131323 cri.go:89] found id: ""
	I0328 01:07:01.937809 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.937822 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:07:01.937830 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:07:01.937901 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:07:01.976946 1131323 cri.go:89] found id: ""
	I0328 01:07:01.976981 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.976994 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:07:01.977008 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:07:01.977027 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:07:02.062804 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:07:02.062845 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:07:02.110750 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:07:02.110783 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:07:02.179633 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:07:02.179677 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:07:02.203131 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:07:02.203181 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:07:02.303281 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:07:04.804238 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:07:04.819654 1131323 kubeadm.go:591] duration metric: took 4m2.527630194s to restartPrimaryControlPlane
	W0328 01:07:04.819747 1131323 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:07:04.819787 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:07:03.041692 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:05.540478 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:02.372472 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:04.376413 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:07.322821 1131323 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.50300166s)
	I0328 01:07:07.322918 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:07:07.338692 1131323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:07:07.349812 1131323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:07:07.361566 1131323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:07:07.361597 1131323 kubeadm.go:156] found existing configuration files:
	
	I0328 01:07:07.361667 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:07:07.372926 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:07:07.373008 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:07:07.383770 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:07:07.394260 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:07:07.394332 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:07:07.405874 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:07:07.417177 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:07:07.417254 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:07:07.428589 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:07:07.438788 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:07:07.438845 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
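
Note: the block above is the stale-config cleanup that follows the reset: ls -la on the four kubeconfigs exits with status 2 because kubeadm reset has already removed them, and each file is then grepped for https://control-plane.minikube.internal:8443 and deleted when the endpoint is not found. The same cleanup written out as a shell sketch (file names and endpoint copied from the log):

    endpoint='https://control-plane.minikube.internal:8443'
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # drop any kubeconfig that does not point at the expected control-plane endpoint
        sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
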
	I0328 01:07:07.449649 1131323 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:07:07.533886 1131323 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0328 01:07:07.533989 1131323 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:07:07.693599 1131323 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:07:07.693736 1131323 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:07:07.693852 1131323 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:07:07.910557 1131323 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:07:07.912634 1131323 out.go:204]   - Generating certificates and keys ...
	I0328 01:07:07.912743 1131323 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:07:07.912855 1131323 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:07:07.912984 1131323 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:07:07.913098 1131323 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:07:07.913212 1131323 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:07:07.913298 1131323 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:07:07.913384 1131323 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:07:07.913569 1131323 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:07:07.913947 1131323 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:07:07.914429 1131323 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:07:07.914649 1131323 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:07:07.914728 1131323 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:07:08.225778 1131323 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:07:08.353927 1131323 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:07:08.631240 1131323 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:07:08.824445 1131323 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:07:08.840240 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:07:08.841200 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:07:08.841315 1131323 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:07:08.997129 1131323 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:07:08.999073 1131323 out.go:204]   - Booting up control plane ...
	I0328 01:07:08.999224 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:07:09.014811 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:07:09.015898 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:07:09.016727 1131323 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:07:09.019426 1131323 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
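
Note: at this stage kubeadm has written the static Pod manifests and is waiting (up to 4m0s) for the kubelet to start them out of /etc/kubernetes/manifests. Two quick ways to watch that from the node, assuming CRI-O as the runtime in this job:

    # the manifests kubeadm just generated
    ls /etc/kubernetes/manifests/
    # the pod shows up here once the kubelet has created it
    sudo crictl pods --name kube-apiserver
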
	I0328 01:07:07.541363 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:10.041094 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:06.874606 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:09.372537 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:12.540137 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:14.541608 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:11.372643 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:13.873029 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:16.541814 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:19.047225 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:16.372556 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:18.871954 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:20.872047 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:21.542880 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:24.041786 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:22.872845 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:24.873747 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:26.042186 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:28.541303 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:30.540610 1130949 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.134147754s)
	I0328 01:07:30.540688 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:07:30.558971 1130949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:07:30.570331 1130949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:07:30.581192 1130949 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:07:30.581246 1130949 kubeadm.go:156] found existing configuration files:
	
	I0328 01:07:30.581306 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:07:30.592337 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:07:30.592410 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:07:30.603288 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:07:30.613714 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:07:30.613776 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:07:30.624281 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:07:30.634569 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:07:30.634644 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:07:30.647279 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:07:30.658554 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:07:30.658646 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:07:30.670364 1130949 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:07:30.730349 1130949 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0328 01:07:30.730414 1130949 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:07:30.887056 1130949 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:07:30.887234 1130949 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:07:30.887385 1130949 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:07:31.104288 1130949 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:07:27.373135 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:29.373436 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:31.106496 1130949 out.go:204]   - Generating certificates and keys ...
	I0328 01:07:31.106628 1130949 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:07:31.106697 1130949 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:07:31.106765 1130949 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:07:31.106826 1130949 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:07:31.106892 1130949 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:07:31.107528 1130949 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:07:31.108302 1130949 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:07:31.112246 1130949 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:07:31.112762 1130949 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:07:31.113711 1130949 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:07:31.115230 1130949 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:07:31.115284 1130949 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:07:31.297632 1130949 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:07:32.446275 1130949 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:07:32.565869 1130949 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:07:32.641288 1130949 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:07:32.817229 1130949 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:07:32.817814 1130949 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:07:32.820366 1130949 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:07:32.822328 1130949 out.go:204]   - Booting up control plane ...
	I0328 01:07:32.822467 1130949 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:07:32.822550 1130949 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:07:32.822990 1130949 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:07:32.846800 1130949 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:07:32.847829 1130949 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:07:32.847902 1130949 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:07:31.044103 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:33.542106 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:35.542875 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:31.873591 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:33.875737 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:35.881819 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:32.992001 1130949 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:07:38.997010 1130949 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003888 seconds
	I0328 01:07:39.012971 1130949 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 01:07:39.036328 1130949 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 01:07:39.569806 1130949 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 01:07:39.570135 1130949 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-808809 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 01:07:40.085165 1130949 kubeadm.go:309] [bootstrap-token] Using token: 4zk5zi.uttj4zihedk5oj6k
	I0328 01:07:40.086719 1130949 out.go:204]   - Configuring RBAC rules ...
	I0328 01:07:40.086873 1130949 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 01:07:40.096373 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 01:07:40.106484 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 01:07:40.110525 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 01:07:40.120015 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 01:07:40.129060 1130949 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 01:07:40.141167 1130949 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 01:07:40.415429 1130949 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 01:07:40.507275 1130949 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 01:07:40.507333 1130949 kubeadm.go:309] 
	I0328 01:07:40.507551 1130949 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 01:07:40.507617 1130949 kubeadm.go:309] 
	I0328 01:07:40.507860 1130949 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 01:07:40.507891 1130949 kubeadm.go:309] 
	I0328 01:07:40.507947 1130949 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 01:07:40.508057 1130949 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 01:07:40.508140 1130949 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 01:07:40.508157 1130949 kubeadm.go:309] 
	I0328 01:07:40.508250 1130949 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 01:07:40.508264 1130949 kubeadm.go:309] 
	I0328 01:07:40.508329 1130949 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 01:07:40.508344 1130949 kubeadm.go:309] 
	I0328 01:07:40.508421 1130949 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 01:07:40.508539 1130949 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 01:07:40.508626 1130949 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 01:07:40.508632 1130949 kubeadm.go:309] 
	I0328 01:07:40.508804 1130949 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 01:07:40.508970 1130949 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 01:07:40.508990 1130949 kubeadm.go:309] 
	I0328 01:07:40.509155 1130949 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4zk5zi.uttj4zihedk5oj6k \
	I0328 01:07:40.509474 1130949 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 \
	I0328 01:07:40.509514 1130949 kubeadm.go:309] 	--control-plane 
	I0328 01:07:40.509524 1130949 kubeadm.go:309] 
	I0328 01:07:40.509641 1130949 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 01:07:40.509655 1130949 kubeadm.go:309] 
	I0328 01:07:40.509767 1130949 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4zk5zi.uttj4zihedk5oj6k \
	I0328 01:07:40.509932 1130949 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 
	I0328 01:07:40.510139 1130949 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:07:40.510157 1130949 cni.go:84] Creating CNI manager for ""
	I0328 01:07:40.510166 1130949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:07:40.512099 1130949 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:07:38.041290 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:40.041569 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:38.373789 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:40.374369 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:40.513314 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:07:40.563257 1130949 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:07:40.627024 1130949 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 01:07:40.627097 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:40.627137 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-808809 minikube.k8s.io/updated_at=2024_03_28T01_07_40_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=embed-certs-808809 minikube.k8s.io/primary=true
	I0328 01:07:40.928916 1130949 ops.go:34] apiserver oom_adj: -16
	I0328 01:07:40.929138 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:41.429797 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:41.930103 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:42.429366 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:42.540932 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:44.035055 1131600 pod_ready.go:81] duration metric: took 4m0.000860608s for pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace to be "Ready" ...
	E0328 01:07:44.035094 1131600 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0328 01:07:44.035124 1131600 pod_ready.go:38] duration metric: took 4m14.608998431s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:07:44.035180 1131600 kubeadm.go:591] duration metric: took 4m23.470228903s to restartPrimaryControlPlane
	W0328 01:07:44.035292 1131600 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:07:44.035344 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:07:42.375179 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:44.876120 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:42.929464 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:43.429369 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:43.929241 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:44.429904 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:44.930251 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:45.429816 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:45.930177 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:46.429416 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:46.929152 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:47.429708 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:49.021732 1131323 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0328 01:07:49.021890 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:07:49.022195 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:07:47.373358 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:49.872482 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:47.929139 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:48.429732 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:48.930207 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:49.429230 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:49.929298 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:50.429919 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:50.929364 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:51.429403 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:51.929356 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:52.429410 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:52.929894 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:53.043365 1130949 kubeadm.go:1107] duration metric: took 12.416334145s to wait for elevateKubeSystemPrivileges
	W0328 01:07:53.043410 1130949 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 01:07:53.043419 1130949 kubeadm.go:393] duration metric: took 5m13.709259014s to StartCluster
	I0328 01:07:53.043445 1130949 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:07:53.043560 1130949 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:07:53.045798 1130949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:07:53.046158 1130949 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 01:07:53.047867 1130949 out.go:177] * Verifying Kubernetes components...
	I0328 01:07:53.046201 1130949 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 01:07:53.046412 1130949 config.go:182] Loaded profile config "embed-certs-808809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:07:53.049163 1130949 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-808809"
	I0328 01:07:53.049175 1130949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:07:53.049195 1130949 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-808809"
	W0328 01:07:53.049204 1130949 addons.go:243] addon storage-provisioner should already be in state true
	I0328 01:07:53.049230 1130949 host.go:66] Checking if "embed-certs-808809" exists ...
	I0328 01:07:53.049205 1130949 addons.go:69] Setting default-storageclass=true in profile "embed-certs-808809"
	I0328 01:07:53.049250 1130949 addons.go:69] Setting metrics-server=true in profile "embed-certs-808809"
	I0328 01:07:53.049271 1130949 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-808809"
	I0328 01:07:53.049309 1130949 addons.go:234] Setting addon metrics-server=true in "embed-certs-808809"
	W0328 01:07:53.049327 1130949 addons.go:243] addon metrics-server should already be in state true
	I0328 01:07:53.049371 1130949 host.go:66] Checking if "embed-certs-808809" exists ...
	I0328 01:07:53.049530 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.049569 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.049696 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.049729 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.049795 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.049838 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.067042 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36143
	I0328 01:07:53.067078 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33653
	I0328 01:07:53.067536 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.067599 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.068156 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.068184 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.068289 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.068315 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.068583 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.068669 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.069095 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.069121 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.069245 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.069276 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.069991 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43737
	I0328 01:07:53.070509 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.071078 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.071103 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.071480 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.071705 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.075617 1130949 addons.go:234] Setting addon default-storageclass=true in "embed-certs-808809"
	W0328 01:07:53.075659 1130949 addons.go:243] addon default-storageclass should already be in state true
	I0328 01:07:53.075703 1130949 host.go:66] Checking if "embed-certs-808809" exists ...
	I0328 01:07:53.075982 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.076011 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.085991 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I0328 01:07:53.086508 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.086724 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36033
	I0328 01:07:53.087105 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.087122 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.087158 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.087646 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.087667 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.087706 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.087922 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.088031 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.088225 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.089941 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:07:53.090168 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:07:53.091945 1130949 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:07:53.093023 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41613
	I0328 01:07:53.093537 1130949 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:07:53.093553 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 01:07:53.093563 1130949 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 01:07:53.095147 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 01:07:53.095165 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 01:07:53.093574 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:07:53.095185 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:07:53.093939 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.096301 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.096322 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.096662 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.097251 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.097306 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.098907 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.099014 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.099513 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:07:53.099546 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.099996 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:07:53.100126 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:07:53.100177 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:07:53.100187 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.100287 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:07:53.100392 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:07:53.100470 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:07:53.100576 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:07:53.100709 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:07:53.100796 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:07:53.114056 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41121
	I0328 01:07:53.114680 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.115279 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.115313 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.115721 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.116061 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.118022 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:07:53.118348 1130949 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 01:07:53.118370 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 01:07:53.118391 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:07:53.121337 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.121699 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:07:53.121728 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.121906 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:07:53.122084 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:07:53.122266 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:07:53.122414 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:07:53.242121 1130949 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:07:53.267118 1130949 node_ready.go:35] waiting up to 6m0s for node "embed-certs-808809" to be "Ready" ...
	I0328 01:07:53.276640 1130949 node_ready.go:49] node "embed-certs-808809" has status "Ready":"True"
	I0328 01:07:53.276670 1130949 node_ready.go:38] duration metric: took 9.513599ms for node "embed-certs-808809" to be "Ready" ...
	I0328 01:07:53.276683 1130949 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:07:53.283091 1130949 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-2rn6k" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:53.325201 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 01:07:53.325234 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 01:07:53.341335 1130949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 01:07:53.361084 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 01:07:53.361109 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 01:07:53.393089 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:07:53.393116 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 01:07:53.419245 1130949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:07:53.445663 1130949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:07:53.515515 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:53.515555 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:53.515871 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:53.515891 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:53.515901 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:53.515910 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:53.516173 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:53.516253 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:53.516212 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:53.527854 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:53.527882 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:53.528152 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:53.528173 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:53.528220 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.159164 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159192 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159264 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159292 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159523 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.159597 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.159619 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.159637 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.159648 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159658 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159660 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.159667 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.159688 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159696 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159981 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.160037 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.160056 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.160062 1130949 addons.go:470] Verifying addon metrics-server=true in "embed-certs-808809"
	I0328 01:07:54.160088 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.160090 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.160106 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.162879 1130949 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0328 01:07:54.022449 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:07:54.022704 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:07:52.372314 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:54.372913 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:54.164263 1130949 addons.go:505] duration metric: took 1.11806212s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0328 01:07:55.294728 1130949 pod_ready.go:102] pod "coredns-76f75df574-2rn6k" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:55.790690 1130949 pod_ready.go:92] pod "coredns-76f75df574-2rn6k" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.790717 1130949 pod_ready.go:81] duration metric: took 2.50759161s for pod "coredns-76f75df574-2rn6k" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.790726 1130949 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-pgcdh" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.796249 1130949 pod_ready.go:92] pod "coredns-76f75df574-pgcdh" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.796279 1130949 pod_ready.go:81] duration metric: took 5.54233ms for pod "coredns-76f75df574-pgcdh" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.796291 1130949 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.801226 1130949 pod_ready.go:92] pod "etcd-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.801254 1130949 pod_ready.go:81] duration metric: took 4.956106ms for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.801263 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.814571 1130949 pod_ready.go:92] pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.814599 1130949 pod_ready.go:81] duration metric: took 13.328662ms for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.814613 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.825995 1130949 pod_ready.go:92] pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.826022 1130949 pod_ready.go:81] duration metric: took 11.401096ms for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.826035 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tjbhs" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.188116 1130949 pod_ready.go:92] pod "kube-proxy-tjbhs" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:56.188147 1130949 pod_ready.go:81] duration metric: took 362.103962ms for pod "kube-proxy-tjbhs" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.188161 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.588294 1130949 pod_ready.go:92] pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:56.588334 1130949 pod_ready.go:81] duration metric: took 400.16517ms for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.588347 1130949 pod_ready.go:38] duration metric: took 3.311651338s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:07:56.588369 1130949 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:07:56.588445 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:07:56.606404 1130949 api_server.go:72] duration metric: took 3.560197315s to wait for apiserver process to appear ...
	I0328 01:07:56.606435 1130949 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:07:56.606460 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:07:56.612218 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 200:
	ok
	I0328 01:07:56.613459 1130949 api_server.go:141] control plane version: v1.29.3
	I0328 01:07:56.613481 1130949 api_server.go:131] duration metric: took 7.039378ms to wait for apiserver health ...
	I0328 01:07:56.613490 1130949 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:07:56.793192 1130949 system_pods.go:59] 9 kube-system pods found
	I0328 01:07:56.793227 1130949 system_pods.go:61] "coredns-76f75df574-2rn6k" [2a77c778-dd83-4e2e-b45a-ca16e3922b45] Running
	I0328 01:07:56.793232 1130949 system_pods.go:61] "coredns-76f75df574-pgcdh" [52452b24-490e-4999-b700-198c6f9b2fa1] Running
	I0328 01:07:56.793236 1130949 system_pods.go:61] "etcd-embed-certs-808809" [cba526ab-c8ca-4b90-abb8-533d1bf6cb4f] Running
	I0328 01:07:56.793239 1130949 system_pods.go:61] "kube-apiserver-embed-certs-808809" [02934ac6-1e07-4cbe-ba2f-4149e59d6044] Running
	I0328 01:07:56.793243 1130949 system_pods.go:61] "kube-controller-manager-embed-certs-808809" [b3350ff9-7abb-407e-939c-fcde61186c17] Running
	I0328 01:07:56.793246 1130949 system_pods.go:61] "kube-proxy-tjbhs" [cdb30ca1-5165-4e24-888a-df79af7987d0] Running
	I0328 01:07:56.793249 1130949 system_pods.go:61] "kube-scheduler-embed-certs-808809" [5b52a22d-6ffb-468b-b7b9-8f89b0dee3b8] Running
	I0328 01:07:56.793255 1130949 system_pods.go:61] "metrics-server-57f55c9bc5-bqbfl" [8434fd7d-838b-4cf2-96a3-e4d613633871] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:07:56.793260 1130949 system_pods.go:61] "storage-provisioner" [20c1951e-7da8-4025-bbcf-2da60f87f3ab] Running
	I0328 01:07:56.793268 1130949 system_pods.go:74] duration metric: took 179.77213ms to wait for pod list to return data ...
	I0328 01:07:56.793275 1130949 default_sa.go:34] waiting for default service account to be created ...
	I0328 01:07:56.988234 1130949 default_sa.go:45] found service account: "default"
	I0328 01:07:56.988274 1130949 default_sa.go:55] duration metric: took 194.984089ms for default service account to be created ...
	I0328 01:07:56.988288 1130949 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 01:07:57.192153 1130949 system_pods.go:86] 9 kube-system pods found
	I0328 01:07:57.192188 1130949 system_pods.go:89] "coredns-76f75df574-2rn6k" [2a77c778-dd83-4e2e-b45a-ca16e3922b45] Running
	I0328 01:07:57.192194 1130949 system_pods.go:89] "coredns-76f75df574-pgcdh" [52452b24-490e-4999-b700-198c6f9b2fa1] Running
	I0328 01:07:57.192200 1130949 system_pods.go:89] "etcd-embed-certs-808809" [cba526ab-c8ca-4b90-abb8-533d1bf6cb4f] Running
	I0328 01:07:57.192205 1130949 system_pods.go:89] "kube-apiserver-embed-certs-808809" [02934ac6-1e07-4cbe-ba2f-4149e59d6044] Running
	I0328 01:07:57.192210 1130949 system_pods.go:89] "kube-controller-manager-embed-certs-808809" [b3350ff9-7abb-407e-939c-fcde61186c17] Running
	I0328 01:07:57.192214 1130949 system_pods.go:89] "kube-proxy-tjbhs" [cdb30ca1-5165-4e24-888a-df79af7987d0] Running
	I0328 01:07:57.192218 1130949 system_pods.go:89] "kube-scheduler-embed-certs-808809" [5b52a22d-6ffb-468b-b7b9-8f89b0dee3b8] Running
	I0328 01:07:57.192225 1130949 system_pods.go:89] "metrics-server-57f55c9bc5-bqbfl" [8434fd7d-838b-4cf2-96a3-e4d613633871] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:07:57.192230 1130949 system_pods.go:89] "storage-provisioner" [20c1951e-7da8-4025-bbcf-2da60f87f3ab] Running
	I0328 01:07:57.192239 1130949 system_pods.go:126] duration metric: took 203.942878ms to wait for k8s-apps to be running ...
	I0328 01:07:57.192249 1130949 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:07:57.192301 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:07:57.209840 1130949 system_svc.go:56] duration metric: took 17.576605ms WaitForService to wait for kubelet
	I0328 01:07:57.209883 1130949 kubeadm.go:576] duration metric: took 4.163683877s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:07:57.209918 1130949 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:07:57.388321 1130949 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:07:57.388347 1130949 node_conditions.go:123] node cpu capacity is 2
	I0328 01:07:57.388357 1130949 node_conditions.go:105] duration metric: took 178.433633ms to run NodePressure ...
	I0328 01:07:57.388370 1130949 start.go:240] waiting for startup goroutines ...
	I0328 01:07:57.388377 1130949 start.go:245] waiting for cluster config update ...
	I0328 01:07:57.388387 1130949 start.go:254] writing updated cluster config ...
	I0328 01:07:57.388784 1130949 ssh_runner.go:195] Run: rm -f paused
	I0328 01:07:57.446699 1130949 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0328 01:07:57.448951 1130949 out.go:177] * Done! kubectl is now configured to use "embed-certs-808809" cluster and "default" namespace by default
	I0328 01:07:56.373123 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:58.872454 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:04.023273 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:08:04.023535 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:08:01.372711 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:03.877734 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:06.374031 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:07.366164 1130827 pod_ready.go:81] duration metric: took 4m0.000887668s for pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace to be "Ready" ...
	E0328 01:08:07.366245 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0328 01:08:07.366271 1130827 pod_ready.go:38] duration metric: took 4m7.906522585s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:08:07.366301 1130827 kubeadm.go:591] duration metric: took 4m15.27169704s to restartPrimaryControlPlane
	W0328 01:08:07.366368 1130827 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:08:07.366406 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:08:16.281280 1131600 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.245904746s)
	I0328 01:08:16.281365 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:08:16.298463 1131600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:08:16.310406 1131600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:08:16.321387 1131600 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:08:16.321415 1131600 kubeadm.go:156] found existing configuration files:
	
	I0328 01:08:16.321475 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0328 01:08:16.331965 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:08:16.332033 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:08:16.343030 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0328 01:08:16.353193 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:08:16.353254 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:08:16.363865 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0328 01:08:16.374276 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:08:16.374346 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:08:16.385300 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0328 01:08:16.396118 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:08:16.396181 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:08:16.406896 1131600 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:08:16.626615 1131600 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:08:24.024091 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:08:24.024388 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:08:25.420974 1131600 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0328 01:08:25.421059 1131600 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:08:25.421154 1131600 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:08:25.421300 1131600 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:08:25.421547 1131600 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:08:25.421649 1131600 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:08:25.423435 1131600 out.go:204]   - Generating certificates and keys ...
	I0328 01:08:25.423549 1131600 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:08:25.423630 1131600 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:08:25.423749 1131600 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:08:25.423844 1131600 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:08:25.423956 1131600 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:08:25.424058 1131600 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:08:25.424166 1131600 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:08:25.424260 1131600 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:08:25.424375 1131600 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:08:25.424489 1131600 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:08:25.424552 1131600 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:08:25.424642 1131600 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:08:25.424700 1131600 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:08:25.424765 1131600 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:08:25.424832 1131600 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:08:25.424920 1131600 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:08:25.424982 1131600 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:08:25.425106 1131600 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:08:25.425207 1131600 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:08:25.426863 1131600 out.go:204]   - Booting up control plane ...
	I0328 01:08:25.427001 1131600 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:08:25.427108 1131600 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:08:25.427205 1131600 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:08:25.427327 1131600 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:08:25.427431 1131600 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:08:25.427491 1131600 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:08:25.427686 1131600 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:08:25.427784 1131600 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003000 seconds
	I0328 01:08:25.427897 1131600 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 01:08:25.428032 1131600 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 01:08:25.428109 1131600 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 01:08:25.428325 1131600 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-283961 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 01:08:25.428408 1131600 kubeadm.go:309] [bootstrap-token] Using token: g6jusr.8nbqw788gjbu8fwz
	I0328 01:08:25.430595 1131600 out.go:204]   - Configuring RBAC rules ...
	I0328 01:08:25.430734 1131600 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 01:08:25.430837 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 01:08:25.430981 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 01:08:25.431163 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 01:08:25.431357 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 01:08:25.431481 1131600 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 01:08:25.431670 1131600 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 01:08:25.431726 1131600 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 01:08:25.431767 1131600 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 01:08:25.431774 1131600 kubeadm.go:309] 
	I0328 01:08:25.431819 1131600 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 01:08:25.431829 1131600 kubeadm.go:309] 
	I0328 01:08:25.431893 1131600 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 01:08:25.431900 1131600 kubeadm.go:309] 
	I0328 01:08:25.431934 1131600 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 01:08:25.432028 1131600 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 01:08:25.432089 1131600 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 01:08:25.432114 1131600 kubeadm.go:309] 
	I0328 01:08:25.432178 1131600 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 01:08:25.432186 1131600 kubeadm.go:309] 
	I0328 01:08:25.432245 1131600 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 01:08:25.432255 1131600 kubeadm.go:309] 
	I0328 01:08:25.432342 1131600 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 01:08:25.432454 1131600 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 01:08:25.432566 1131600 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 01:08:25.432576 1131600 kubeadm.go:309] 
	I0328 01:08:25.432719 1131600 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 01:08:25.432812 1131600 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 01:08:25.432825 1131600 kubeadm.go:309] 
	I0328 01:08:25.432914 1131600 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token g6jusr.8nbqw788gjbu8fwz \
	I0328 01:08:25.433018 1131600 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 \
	I0328 01:08:25.433052 1131600 kubeadm.go:309] 	--control-plane 
	I0328 01:08:25.433058 1131600 kubeadm.go:309] 
	I0328 01:08:25.433135 1131600 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 01:08:25.433143 1131600 kubeadm.go:309] 
	I0328 01:08:25.433222 1131600 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token g6jusr.8nbqw788gjbu8fwz \
	I0328 01:08:25.433318 1131600 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 
	I0328 01:08:25.433337 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:08:25.433346 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:08:25.434943 1131600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:08:25.436103 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:08:25.483149 1131600 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
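	The 457-byte file copied above is minikube's bridge CNI configuration. As a rough sketch only (the exact JSON minikube generates is not shown in this log and may differ), a bridge conflist of this kind looks like:

	    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
	          "ipMasq": true, "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF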
	I0328 01:08:25.508422 1131600 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 01:08:25.508514 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:25.508518 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-283961 minikube.k8s.io/updated_at=2024_03_28T01_08_25_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=default-k8s-diff-port-283961 minikube.k8s.io/primary=true
	I0328 01:08:25.537955 1131600 ops.go:34] apiserver oom_adj: -16
	I0328 01:08:25.738462 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:26.239473 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:26.739478 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:27.238883 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:27.738830 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:28.239281 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:28.738643 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:29.238703 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:29.739025 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:30.239127 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:30.739473 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:31.239461 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:31.739480 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:32.239525 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:32.738543 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:33.239468 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:33.739475 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:34.238558 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:34.739550 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:35.239400 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:35.738766 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:36.239384 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:36.738797 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:37.238736 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:37.739543 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:37.850963 1131600 kubeadm.go:1107] duration metric: took 12.342521507s to wait for elevateKubeSystemPrivileges
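	The repeated "kubectl get sa default" calls above are minikube waiting for the default ServiceAccount to appear after granting cluster-admin to kube-system:default. A condensed shell sketch of the same sequence (retry cadence illustrative; binary and kubeconfig paths taken from the log):

	    KUBECTL=/var/lib/minikube/binaries/v1.29.3/kubectl
	    KCFG=/var/lib/minikube/kubeconfig
	    sudo "$KUBECTL" --kubeconfig="$KCFG" create clusterrolebinding minikube-rbac \
	      --clusterrole=cluster-admin --serviceaccount=kube-system:default
	    # poll until the "default" ServiceAccount is visible to the API server
	    until sudo "$KUBECTL" --kubeconfig="$KCFG" get sa default >/dev/null 2>&1; do
	      sleep 0.5
	    done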
	W0328 01:08:37.851011 1131600 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 01:08:37.851024 1131600 kubeadm.go:393] duration metric: took 5m17.339661641s to StartCluster
	I0328 01:08:37.851048 1131600 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:08:37.851164 1131600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:08:37.853862 1131600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:08:37.854264 1131600 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 01:08:37.856170 1131600 out.go:177] * Verifying Kubernetes components...
	I0328 01:08:37.854341 1131600 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 01:08:37.854447 1131600 config.go:182] Loaded profile config "default-k8s-diff-port-283961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:08:37.857860 1131600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:08:37.857864 1131600 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-283961"
	I0328 01:08:37.857878 1131600 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-283961"
	I0328 01:08:37.857885 1131600 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-283961"
	I0328 01:08:37.857909 1131600 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-283961"
	I0328 01:08:37.857912 1131600 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-283961"
	W0328 01:08:37.857923 1131600 addons.go:243] addon storage-provisioner should already be in state true
	I0328 01:08:37.857928 1131600 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-283961"
	W0328 01:08:37.857941 1131600 addons.go:243] addon metrics-server should already be in state true
	I0328 01:08:37.857970 1131600 host.go:66] Checking if "default-k8s-diff-port-283961" exists ...
	I0328 01:08:37.857983 1131600 host.go:66] Checking if "default-k8s-diff-port-283961" exists ...
	I0328 01:08:37.858330 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.858363 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.858403 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.858436 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.858335 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.858509 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.881197 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32799
	I0328 01:08:37.881230 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35101
	I0328 01:08:37.881244 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0328 01:08:37.881857 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.881882 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.882021 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.882460 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.882482 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.882523 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.882540 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.882585 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.882601 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.882934 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.882992 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.883007 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.883239 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.883592 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.883620 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.883625 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.883644 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.887335 1131600 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-283961"
	W0328 01:08:37.887359 1131600 addons.go:243] addon default-storageclass should already be in state true
	I0328 01:08:37.887390 1131600 host.go:66] Checking if "default-k8s-diff-port-283961" exists ...
	I0328 01:08:37.887745 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.887779 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.901416 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37383
	I0328 01:08:37.901909 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.902530 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.902559 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.902967 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.903211 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.904529 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36633
	I0328 01:08:37.905034 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.905268 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:08:37.907486 1131600 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:08:37.905802 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.909062 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.909180 1131600 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:08:37.909196 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 01:08:37.909218 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:08:37.909555 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.909794 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.911251 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33539
	I0328 01:08:37.911845 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.911995 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:08:37.913838 1131600 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 01:08:37.912457 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.913039 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.913804 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:08:37.915256 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.915268 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:08:37.915288 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 01:08:37.915297 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.915303 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 01:08:37.915321 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:08:37.915492 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:08:37.915674 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:08:37.915894 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:08:37.916689 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.917364 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.917410 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.918302 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.918651 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:08:37.918678 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.918944 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:08:37.919117 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:08:37.919267 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:08:37.919386 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:08:37.935233 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45659
	I0328 01:08:37.935750 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.936283 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.936301 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.936691 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.936872 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.938736 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:08:37.939016 1131600 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 01:08:37.939042 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 01:08:37.939065 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:08:37.941653 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.941967 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:08:37.941991 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.942199 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:08:37.942405 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:08:37.942575 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:08:37.942761 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:08:38.109817 1131600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:08:38.134996 1131600 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-283961" to be "Ready" ...
	I0328 01:08:38.158252 1131600 node_ready.go:49] node "default-k8s-diff-port-283961" has status "Ready":"True"
	I0328 01:08:38.158286 1131600 node_ready.go:38] duration metric: took 23.249221ms for node "default-k8s-diff-port-283961" to be "Ready" ...
	I0328 01:08:38.158305 1131600 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:08:38.170391 1131600 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-gdv5x" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:38.277223 1131600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:08:38.299923 1131600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 01:08:38.300686 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 01:08:38.300707 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 01:08:38.355800 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 01:08:38.355837 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 01:08:38.464742 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:08:38.464769 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 01:08:38.542696 1131600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:08:39.644116 1131600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.344141889s)
	I0328 01:08:39.644184 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644189 1131600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.366934481s)
	I0328 01:08:39.644197 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644210 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644219 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644620 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.644644 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.644654 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644664 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644846 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.644865 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.644890 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644905 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644987 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.645004 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.645154 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.645171 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.708104 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.708143 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.708543 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.708567 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.739487 1131600 pod_ready.go:92] pod "coredns-76f75df574-gdv5x" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.739515 1131600 pod_ready.go:81] duration metric: took 1.569088177s for pod "coredns-76f75df574-gdv5x" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.739526 1131600 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-qzcfp" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.797314 1131600 pod_ready.go:92] pod "coredns-76f75df574-qzcfp" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.797347 1131600 pod_ready.go:81] duration metric: took 57.813218ms for pod "coredns-76f75df574-qzcfp" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.797366 1131600 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.830784 1131600 pod_ready.go:92] pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.830865 1131600 pod_ready.go:81] duration metric: took 33.488753ms for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.830886 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.852459 1131600 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.852489 1131600 pod_ready.go:81] duration metric: took 21.594748ms for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.852501 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.862630 1131600 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.862658 1131600 pod_ready.go:81] duration metric: took 10.149867ms for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.862674 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-js7j2" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.893124 1131600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.350363727s)
	I0328 01:08:39.893191 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.893216 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.893559 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Closing plugin on server side
	I0328 01:08:39.893568 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.893617 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.893634 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.893646 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.893985 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Closing plugin on server side
	I0328 01:08:39.893985 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.894013 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.894031 1131600 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-283961"
	I0328 01:08:39.896978 1131600 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0328 01:08:39.898636 1131600 addons.go:505] duration metric: took 2.044292782s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
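	Once the addons are reported enabled, a quick manual verification (illustrative; not part of the test run) is to confirm the metrics-server Deployment and its APIService registration:

	    kubectl --context default-k8s-diff-port-283961 -n kube-system get deploy metrics-server
	    kubectl --context default-k8s-diff-port-283961 get apiservice v1beta1.metrics.k8s.io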
	I0328 01:08:40.138962 1131600 pod_ready.go:92] pod "kube-proxy-js7j2" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:40.138994 1131600 pod_ready.go:81] duration metric: took 276.313147ms for pod "kube-proxy-js7j2" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:40.139006 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:40.538892 1131600 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:40.538917 1131600 pod_ready.go:81] duration metric: took 399.903327ms for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:40.538925 1131600 pod_ready.go:38] duration metric: took 2.380606168s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:08:40.538943 1131600 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:08:40.539009 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:08:40.561639 1131600 api_server.go:72] duration metric: took 2.707321816s to wait for apiserver process to appear ...
	I0328 01:08:40.561681 1131600 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:08:40.561709 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:08:40.568521 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 200:
	ok
	I0328 01:08:40.570016 1131600 api_server.go:141] control plane version: v1.29.3
	I0328 01:08:40.570060 1131600 api_server.go:131] duration metric: took 8.369036ms to wait for apiserver health ...
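	The healthz probe above can be reproduced by hand against the same endpoint (illustrative; -k skips TLS verification since the cluster CA is not in the host trust store):

	    curl -k https://192.168.39.224:8444/healthz
	    # expected body: ok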
	I0328 01:08:40.570071 1131600 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:08:39.696094 1130827 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.32965227s)
	I0328 01:08:39.696193 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:08:39.717556 1130827 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:08:39.730434 1130827 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:08:39.746521 1130827 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:08:39.746567 1130827 kubeadm.go:156] found existing configuration files:
	
	I0328 01:08:39.746644 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:08:39.758252 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:08:39.758352 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:08:39.771929 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:08:39.785312 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:08:39.785400 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:08:39.800685 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:08:39.814982 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:08:39.815073 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:08:39.828804 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:08:39.841984 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:08:39.842074 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
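	The grep/rm pairs above implement minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint. A condensed sketch of the same logic:

	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	        || sudo rm -f "/etc/kubernetes/${f}.conf"
	    done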
	I0328 01:08:39.854502 1130827 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:08:40.089742 1130827 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:08:40.742900 1131600 system_pods.go:59] 9 kube-system pods found
	I0328 01:08:40.742938 1131600 system_pods.go:61] "coredns-76f75df574-gdv5x" [5b4b835c-ae9d-4eff-ab37-6ccb7e36a748] Running
	I0328 01:08:40.742945 1131600 system_pods.go:61] "coredns-76f75df574-qzcfp" [8e7bfa94-f249-4f7a-be7b-9a615810c956] Running
	I0328 01:08:40.742951 1131600 system_pods.go:61] "etcd-default-k8s-diff-port-283961" [eb26a2e2-882b-4bc0-9ca1-7fe88b1c6c7e] Running
	I0328 01:08:40.742958 1131600 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-283961" [1c31e7a3-507e-43ea-96cf-1d182a1e0875] Running
	I0328 01:08:40.742964 1131600 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-283961" [5bbc6650-b37c-4b10-bc6e-cceb214f8881] Running
	I0328 01:08:40.742968 1131600 system_pods.go:61] "kube-proxy-js7j2" [1f31a6ee-9417-4f4a-ba0b-fdbab6a9169d] Running
	I0328 01:08:40.742972 1131600 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-283961" [3a5691f7-d6d4-45e7-aea7-94fe1cfa541a] Running
	I0328 01:08:40.742980 1131600 system_pods.go:61] "metrics-server-57f55c9bc5-gkv67" [7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:08:40.742986 1131600 system_pods.go:61] "storage-provisioner" [cb80efe2-521f-45d5-84e7-f6dc216b4c6d] Running
	I0328 01:08:40.742998 1131600 system_pods.go:74] duration metric: took 172.918886ms to wait for pod list to return data ...
	I0328 01:08:40.743010 1131600 default_sa.go:34] waiting for default service account to be created ...
	I0328 01:08:40.939208 1131600 default_sa.go:45] found service account: "default"
	I0328 01:08:40.939255 1131600 default_sa.go:55] duration metric: took 196.220048ms for default service account to be created ...
	I0328 01:08:40.939266 1131600 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 01:08:41.144986 1131600 system_pods.go:86] 9 kube-system pods found
	I0328 01:08:41.145023 1131600 system_pods.go:89] "coredns-76f75df574-gdv5x" [5b4b835c-ae9d-4eff-ab37-6ccb7e36a748] Running
	I0328 01:08:41.145030 1131600 system_pods.go:89] "coredns-76f75df574-qzcfp" [8e7bfa94-f249-4f7a-be7b-9a615810c956] Running
	I0328 01:08:41.145034 1131600 system_pods.go:89] "etcd-default-k8s-diff-port-283961" [eb26a2e2-882b-4bc0-9ca1-7fe88b1c6c7e] Running
	I0328 01:08:41.145039 1131600 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-283961" [1c31e7a3-507e-43ea-96cf-1d182a1e0875] Running
	I0328 01:08:41.145043 1131600 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-283961" [5bbc6650-b37c-4b10-bc6e-cceb214f8881] Running
	I0328 01:08:41.145047 1131600 system_pods.go:89] "kube-proxy-js7j2" [1f31a6ee-9417-4f4a-ba0b-fdbab6a9169d] Running
	I0328 01:08:41.145051 1131600 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-283961" [3a5691f7-d6d4-45e7-aea7-94fe1cfa541a] Running
	I0328 01:08:41.145058 1131600 system_pods.go:89] "metrics-server-57f55c9bc5-gkv67" [7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:08:41.145062 1131600 system_pods.go:89] "storage-provisioner" [cb80efe2-521f-45d5-84e7-f6dc216b4c6d] Running
	I0328 01:08:41.145072 1131600 system_pods.go:126] duration metric: took 205.800485ms to wait for k8s-apps to be running ...
	I0328 01:08:41.145083 1131600 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:08:41.145131 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:08:41.163220 1131600 system_svc.go:56] duration metric: took 18.120266ms WaitForService to wait for kubelet
	I0328 01:08:41.163255 1131600 kubeadm.go:576] duration metric: took 3.308947131s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:08:41.163280 1131600 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:08:41.339219 1131600 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:08:41.339247 1131600 node_conditions.go:123] node cpu capacity is 2
	I0328 01:08:41.339292 1131600 node_conditions.go:105] duration metric: took 176.004328ms to run NodePressure ...
	I0328 01:08:41.339306 1131600 start.go:240] waiting for startup goroutines ...
	I0328 01:08:41.339317 1131600 start.go:245] waiting for cluster config update ...
	I0328 01:08:41.339334 1131600 start.go:254] writing updated cluster config ...
	I0328 01:08:41.339656 1131600 ssh_runner.go:195] Run: rm -f paused
	I0328 01:08:41.399111 1131600 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0328 01:08:41.401360 1131600 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-283961" cluster and "default" namespace by default
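	With the default-k8s-diff-port-283961 profile reported ready, typical follow-up commands (illustrative) would be:

	    kubectl --context default-k8s-diff-port-283961 get nodes -o wide
	    kubectl --context default-k8s-diff-port-283961 -n kube-system get pods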
	I0328 01:08:49.653091 1130827 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-beta.0
	I0328 01:08:49.653205 1130827 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:08:49.653327 1130827 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:08:49.653468 1130827 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:08:49.653576 1130827 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:08:49.653666 1130827 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:08:49.656419 1130827 out.go:204]   - Generating certificates and keys ...
	I0328 01:08:49.656503 1130827 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:08:49.656583 1130827 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:08:49.656669 1130827 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:08:49.656775 1130827 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:08:49.656903 1130827 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:08:49.656973 1130827 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:08:49.657057 1130827 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:08:49.657138 1130827 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:08:49.657246 1130827 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:08:49.657362 1130827 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:08:49.657415 1130827 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:08:49.657510 1130827 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:08:49.657601 1130827 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:08:49.657713 1130827 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:08:49.657811 1130827 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:08:49.657900 1130827 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:08:49.657980 1130827 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:08:49.658074 1130827 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:08:49.658160 1130827 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:08:49.659588 1130827 out.go:204]   - Booting up control plane ...
	I0328 01:08:49.659669 1130827 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:08:49.659771 1130827 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:08:49.659855 1130827 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:08:49.659962 1130827 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:08:49.660075 1130827 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:08:49.660139 1130827 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:08:49.660309 1130827 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0328 01:08:49.660426 1130827 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0328 01:08:49.660518 1130827 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.594495ms
	I0328 01:08:49.660610 1130827 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0328 01:08:49.660691 1130827 kubeadm.go:309] [api-check] The API server is healthy after 5.502996727s
	I0328 01:08:49.660830 1130827 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 01:08:49.660975 1130827 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 01:08:49.661028 1130827 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 01:08:49.661198 1130827 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-248059 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 01:08:49.661283 1130827 kubeadm.go:309] [bootstrap-token] Using token: 4jnfa0.q3dre6ogqbxtw8j0
	I0328 01:08:49.662907 1130827 out.go:204]   - Configuring RBAC rules ...
	I0328 01:08:49.663014 1130827 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 01:08:49.663090 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 01:08:49.663239 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 01:08:49.663379 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 01:08:49.663484 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 01:08:49.663576 1130827 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 01:08:49.663688 1130827 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 01:08:49.663750 1130827 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 01:08:49.663811 1130827 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 01:08:49.663820 1130827 kubeadm.go:309] 
	I0328 01:08:49.663871 1130827 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 01:08:49.663877 1130827 kubeadm.go:309] 
	I0328 01:08:49.663976 1130827 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 01:08:49.663984 1130827 kubeadm.go:309] 
	I0328 01:08:49.664004 1130827 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 01:08:49.664080 1130827 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 01:08:49.664144 1130827 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 01:08:49.664151 1130827 kubeadm.go:309] 
	I0328 01:08:49.664202 1130827 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 01:08:49.664209 1130827 kubeadm.go:309] 
	I0328 01:08:49.664246 1130827 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 01:08:49.664252 1130827 kubeadm.go:309] 
	I0328 01:08:49.664301 1130827 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 01:08:49.664370 1130827 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 01:08:49.664436 1130827 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 01:08:49.664444 1130827 kubeadm.go:309] 
	I0328 01:08:49.664515 1130827 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 01:08:49.664600 1130827 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 01:08:49.664607 1130827 kubeadm.go:309] 
	I0328 01:08:49.664678 1130827 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4jnfa0.q3dre6ogqbxtw8j0 \
	I0328 01:08:49.664764 1130827 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 \
	I0328 01:08:49.664783 1130827 kubeadm.go:309] 	--control-plane 
	I0328 01:08:49.664789 1130827 kubeadm.go:309] 
	I0328 01:08:49.664856 1130827 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 01:08:49.664863 1130827 kubeadm.go:309] 
	I0328 01:08:49.664938 1130827 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4jnfa0.q3dre6ogqbxtw8j0 \
	I0328 01:08:49.665073 1130827 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 
	I0328 01:08:49.665117 1130827 cni.go:84] Creating CNI manager for ""
	I0328 01:08:49.665130 1130827 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:08:49.667556 1130827 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:08:49.668776 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:08:49.680262 1130827 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:08:49.701490 1130827 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 01:08:49.701557 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:49.701606 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-248059 minikube.k8s.io/updated_at=2024_03_28T01_08_49_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=no-preload-248059 minikube.k8s.io/primary=true
	I0328 01:08:49.734009 1130827 ops.go:34] apiserver oom_adj: -16
	I0328 01:08:49.901866 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:50.402635 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:50.902480 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:51.402417 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:51.902253 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:52.402411 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:52.901926 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:53.402394 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:53.902738 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:54.401878 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:54.901920 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:55.401878 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:55.902140 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:56.402863 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:56.901970 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:57.402088 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:57.901869 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:58.402056 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:58.902333 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:59.402753 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:59.902930 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:00.402623 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:00.901863 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:01.402264 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:01.902054 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:02.402212 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:02.503310 1130827 kubeadm.go:1107] duration metric: took 12.80181586s to wait for elevateKubeSystemPrivileges
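	The repeated "kubectl get sa default" calls above are minikube polling for the default service account to appear; per the summary line, this elevateKubeSystemPrivileges wait took about 12.8 s. A rough shell equivalent of that loop (sketch, not minikube's actual Go implementation):
	# Sketch: poll for the default service account, as the log does above.
	until sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done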
	W0328 01:09:02.503352 1130827 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 01:09:02.503362 1130827 kubeadm.go:393] duration metric: took 5m10.46697508s to StartCluster
	I0328 01:09:02.503380 1130827 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:09:02.503482 1130827 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:09:02.505909 1130827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:09:02.506302 1130827 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 01:09:02.508103 1130827 out.go:177] * Verifying Kubernetes components...
	I0328 01:09:02.506385 1130827 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 01:09:02.506502 1130827 config.go:182] Loaded profile config "no-preload-248059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0328 01:09:02.509509 1130827 addons.go:69] Setting default-storageclass=true in profile "no-preload-248059"
	I0328 01:09:02.509519 1130827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:09:02.509517 1130827 addons.go:69] Setting metrics-server=true in profile "no-preload-248059"
	I0328 01:09:02.509542 1130827 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-248059"
	I0328 01:09:02.509559 1130827 addons.go:234] Setting addon metrics-server=true in "no-preload-248059"
	W0328 01:09:02.509580 1130827 addons.go:243] addon metrics-server should already be in state true
	I0328 01:09:02.509509 1130827 addons.go:69] Setting storage-provisioner=true in profile "no-preload-248059"
	I0328 01:09:02.509623 1130827 host.go:66] Checking if "no-preload-248059" exists ...
	I0328 01:09:02.509636 1130827 addons.go:234] Setting addon storage-provisioner=true in "no-preload-248059"
	W0328 01:09:02.509690 1130827 addons.go:243] addon storage-provisioner should already be in state true
	I0328 01:09:02.509729 1130827 host.go:66] Checking if "no-preload-248059" exists ...
	I0328 01:09:02.510005 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.510009 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.510049 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.510050 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.510053 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.510085 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.528082 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42381
	I0328 01:09:02.528124 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43929
	I0328 01:09:02.528714 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.528738 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.529081 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34573
	I0328 01:09:02.529378 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.529397 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.529444 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.529464 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.529465 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.529791 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.529849 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.529948 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.529965 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.529950 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.530389 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.530437 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.530472 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.531004 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.531058 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.534108 1130827 addons.go:234] Setting addon default-storageclass=true in "no-preload-248059"
	W0328 01:09:02.534134 1130827 addons.go:243] addon default-storageclass should already be in state true
	I0328 01:09:02.534173 1130827 host.go:66] Checking if "no-preload-248059" exists ...
	I0328 01:09:02.534563 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.534592 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.546812 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0328 01:09:02.547478 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.547999 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.548031 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.548370 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.548616 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.549185 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37367
	I0328 01:09:02.549663 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.550365 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.550390 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.550772 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.550787 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:09:02.550977 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.553075 1130827 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 01:09:02.554750 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 01:09:02.554769 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 01:09:02.552577 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:09:02.554788 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:09:02.553550 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45153
	I0328 01:09:02.556534 1130827 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:09:02.555339 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.558480 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.563734 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:09:02.563773 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.563823 1130827 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:09:02.563846 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 01:09:02.563876 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:09:02.564584 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.564604 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.564633 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:09:02.564933 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:09:02.565025 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.565458 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:09:02.565593 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.565617 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.565745 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:09:02.569766 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.570083 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:09:02.570104 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.570413 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:09:02.570778 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:09:02.570975 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:09:02.571142 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:09:02.589503 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41085
	I0328 01:09:02.590061 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.590641 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.590661 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.591065 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.591310 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.593270 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:09:02.593665 1130827 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 01:09:02.593696 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 01:09:02.593717 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:09:02.596796 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.597270 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:09:02.597298 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.597460 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:09:02.597637 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:09:02.597807 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:09:02.597937 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:09:02.705837 1130827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:09:02.727955 1130827 node_ready.go:35] waiting up to 6m0s for node "no-preload-248059" to be "Ready" ...
	I0328 01:09:02.737291 1130827 node_ready.go:49] node "no-preload-248059" has status "Ready":"True"
	I0328 01:09:02.737325 1130827 node_ready.go:38] duration metric: took 9.337953ms for node "no-preload-248059" to be "Ready" ...
	I0328 01:09:02.737338 1130827 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:09:02.741939 1130827 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.749157 1130827 pod_ready.go:92] pod "etcd-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.749192 1130827 pod_ready.go:81] duration metric: took 7.224004ms for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.749205 1130827 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.755106 1130827 pod_ready.go:92] pod "kube-apiserver-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.755132 1130827 pod_ready.go:81] duration metric: took 5.919446ms for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.755144 1130827 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.761123 1130827 pod_ready.go:92] pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.761171 1130827 pod_ready.go:81] duration metric: took 6.017877ms for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.761187 1130827 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.773958 1130827 pod_ready.go:92] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.773983 1130827 pod_ready.go:81] duration metric: took 12.787671ms for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.773991 1130827 pod_ready.go:38] duration metric: took 36.637128ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
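	The readiness checks above are done by minikube's internal pod_ready helpers against the component labels listed at the start of the wait. A manual equivalent against the same cluster might look like this (sketch; the selector is built from the labels named in the log):
	# Sketch: wait for the same control-plane pods checked above.
	kubectl --context no-preload-248059 -n kube-system wait pod \
	  -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)' \
	  --for=condition=Ready --timeout=6m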
	I0328 01:09:02.774008 1130827 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:09:02.774068 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:09:02.794342 1130827 api_server.go:72] duration metric: took 287.989042ms to wait for apiserver process to appear ...
	I0328 01:09:02.794376 1130827 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:09:02.794408 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:09:02.826957 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
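	The healthz probe above can be reproduced by hand against the same endpoint. A hedged sketch (-k skips verification of the cluster's self-signed CA; alternatively pass --cacert with the minikube CA file):
	# Sketch: manual apiserver health probe; the expected response body is "ok", as logged above.
	curl -k https://192.168.61.107:8443/healthz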
	I0328 01:09:02.830377 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 01:09:02.830399 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 01:09:02.837250 1130827 api_server.go:141] control plane version: v1.30.0-beta.0
	I0328 01:09:02.837284 1130827 api_server.go:131] duration metric: took 42.898933ms to wait for apiserver health ...
	I0328 01:09:02.837295 1130827 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:09:02.838515 1130827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:09:02.865482 1130827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 01:09:02.880510 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 01:09:02.880544 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 01:09:02.933895 1130827 system_pods.go:59] 4 kube-system pods found
	I0328 01:09:02.933958 1130827 system_pods.go:61] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:02.933967 1130827 system_pods.go:61] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:02.933973 1130827 system_pods.go:61] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:02.933977 1130827 system_pods.go:61] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:02.933984 1130827 system_pods.go:74] duration metric: took 96.68223ms to wait for pod list to return data ...
	I0328 01:09:02.933994 1130827 default_sa.go:34] waiting for default service account to be created ...
	I0328 01:09:02.939507 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:09:02.939538 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 01:09:02.994042 1130827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:09:03.160934 1130827 default_sa.go:45] found service account: "default"
	I0328 01:09:03.160971 1130827 default_sa.go:55] duration metric: took 226.968222ms for default service account to be created ...
	I0328 01:09:03.160982 1130827 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 01:09:03.396511 1130827 system_pods.go:86] 7 kube-system pods found
	I0328 01:09:03.396549 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending
	I0328 01:09:03.396554 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending
	I0328 01:09:03.396558 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:03.396562 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:03.396567 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:03.396575 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:09:03.396580 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:03.396601 1130827 retry.go:31] will retry after 288.008379ms: missing components: kube-dns, kube-proxy
	I0328 01:09:03.697645 1130827 system_pods.go:86] 7 kube-system pods found
	I0328 01:09:03.697688 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:03.697697 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:03.697704 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:03.697710 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:03.697720 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:03.697726 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:09:03.697730 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:03.697750 1130827 retry.go:31] will retry after 356.016468ms: missing components: kube-dns, kube-proxy
	I0328 01:09:03.962535 1130827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.097008499s)
	I0328 01:09:03.962614 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.962633 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.963093 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.963119 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.963129 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.963139 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.963406 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.963424 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.964335 1130827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.125788348s)
	I0328 01:09:03.964375 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.964384 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.964712 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:03.964740 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.964763 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.964776 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.964785 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.965054 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.965125 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.965142 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:04.002303 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:04.002340 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:04.002744 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:04.002766 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:04.062017 1130827 system_pods.go:86] 8 kube-system pods found
	I0328 01:09:04.062096 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.062111 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.062121 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:04.062132 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:04.062158 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:04.062172 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:09:04.062180 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:04.062192 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:09:04.062220 1130827 retry.go:31] will retry after 477.684804ms: missing components: kube-dns, kube-proxy
	I0328 01:09:04.574661 1130827 system_pods.go:86] 9 kube-system pods found
	I0328 01:09:04.574716 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.574728 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.574740 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:04.574748 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:04.574754 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:04.574761 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Running
	I0328 01:09:04.574768 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:04.574778 1130827 system_pods.go:89] "metrics-server-569cc877fc-frc5k" [d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:09:04.574799 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:09:04.574821 1130827 retry.go:31] will retry after 460.13955ms: missing components: kube-dns
	I0328 01:09:04.692708 1130827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.69861394s)
	I0328 01:09:04.692782 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:04.692798 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:04.693323 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:04.693366 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:04.693376 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:04.693384 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:04.693320 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:04.693818 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:04.693865 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:04.693879 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:04.693895 1130827 addons.go:470] Verifying addon metrics-server=true in "no-preload-248059"
	I0328 01:09:04.696310 1130827 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
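	With the addons enabled above, a quick way to confirm metrics-server is actually serving once its pod leaves Pending (sketch; assumes the deployment is named metrics-server in kube-system, consistent with the metrics-server-569cc877fc-* pod seen in this log):
	# Sketch: confirm metrics-server has rolled out and is serving node metrics.
	kubectl --context no-preload-248059 -n kube-system rollout status deploy/metrics-server
	kubectl --context no-preload-248059 top nodes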
	I0328 01:09:04.025791 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:09:04.026055 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:09:04.026065 1131323 kubeadm.go:309] 
	I0328 01:09:04.026124 1131323 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0328 01:09:04.026172 1131323 kubeadm.go:309] 		timed out waiting for the condition
	I0328 01:09:04.026181 1131323 kubeadm.go:309] 
	I0328 01:09:04.026221 1131323 kubeadm.go:309] 	This error is likely caused by:
	I0328 01:09:04.026279 1131323 kubeadm.go:309] 		- The kubelet is not running
	I0328 01:09:04.026401 1131323 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0328 01:09:04.026411 1131323 kubeadm.go:309] 
	I0328 01:09:04.026529 1131323 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0328 01:09:04.026586 1131323 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0328 01:09:04.026632 1131323 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0328 01:09:04.026640 1131323 kubeadm.go:309] 
	I0328 01:09:04.026758 1131323 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0328 01:09:04.026884 1131323 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0328 01:09:04.026902 1131323 kubeadm.go:309] 
	I0328 01:09:04.027061 1131323 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0328 01:09:04.027222 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0328 01:09:04.027335 1131323 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0328 01:09:04.027429 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0328 01:09:04.027537 1131323 kubeadm.go:309] 
	I0328 01:09:04.029027 1131323 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:09:04.029164 1131323 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0328 01:09:04.029284 1131323 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0328 01:09:04.029477 1131323 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0328 01:09:04.029545 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:09:04.543275 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:09:04.562572 1131323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:09:04.577013 1131323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:09:04.577040 1131323 kubeadm.go:156] found existing configuration files:
	
	I0328 01:09:04.577102 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:09:04.590795 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:09:04.590885 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:09:04.604227 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:09:04.616720 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:09:04.616818 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:09:04.630095 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:09:04.643166 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:09:04.643259 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:09:04.658084 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:09:04.671786 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:09:04.671874 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:09:04.685852 1131323 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:09:04.779013 1131323 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0328 01:09:04.779113 1131323 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:09:04.964178 1131323 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:09:04.964317 1131323 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:09:04.964463 1131323 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:09:05.181712 1131323 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:09:05.183644 1131323 out.go:204]   - Generating certificates and keys ...
	I0328 01:09:05.183759 1131323 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:09:05.183851 1131323 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:09:05.183962 1131323 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:09:05.184042 1131323 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:09:05.184156 1131323 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:09:05.184244 1131323 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:09:05.184337 1131323 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:09:05.184424 1131323 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:09:05.184535 1131323 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:09:05.184633 1131323 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:09:05.184683 1131323 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:09:05.184758 1131323 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:09:04.698039 1130827 addons.go:505] duration metric: took 2.191652421s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0328 01:09:05.044303 1130827 system_pods.go:86] 9 kube-system pods found
	I0328 01:09:05.044340 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:05.044348 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:05.044354 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:05.044360 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:05.044366 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:05.044369 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Running
	I0328 01:09:05.044373 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:05.044378 1130827 system_pods.go:89] "metrics-server-569cc877fc-frc5k" [d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:09:05.044387 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:09:05.044406 1130827 retry.go:31] will retry after 486.01075ms: missing components: kube-dns
	I0328 01:09:05.539158 1130827 system_pods.go:86] 9 kube-system pods found
	I0328 01:09:05.539204 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Running
	I0328 01:09:05.539213 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Running
	I0328 01:09:05.539219 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:05.539226 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:05.539232 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:05.539238 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Running
	I0328 01:09:05.539244 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:05.539255 1130827 system_pods.go:89] "metrics-server-569cc877fc-frc5k" [d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:09:05.539260 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Running
	I0328 01:09:05.539274 1130827 system_pods.go:126] duration metric: took 2.37828469s to wait for k8s-apps to be running ...
	I0328 01:09:05.539292 1130827 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:09:05.539362 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:09:05.560593 1130827 system_svc.go:56] duration metric: took 21.288819ms WaitForService to wait for kubelet
	I0328 01:09:05.560628 1130827 kubeadm.go:576] duration metric: took 3.054281955s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:09:05.560657 1130827 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:09:05.564453 1130827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:09:05.564489 1130827 node_conditions.go:123] node cpu capacity is 2
	I0328 01:09:05.564502 1130827 node_conditions.go:105] duration metric: took 3.837449ms to run NodePressure ...
	I0328 01:09:05.564517 1130827 start.go:240] waiting for startup goroutines ...
	I0328 01:09:05.564527 1130827 start.go:245] waiting for cluster config update ...
	I0328 01:09:05.564542 1130827 start.go:254] writing updated cluster config ...
	I0328 01:09:05.564843 1130827 ssh_runner.go:195] Run: rm -f paused
	I0328 01:09:05.623218 1130827 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-beta.0 (minor skew: 1)
	I0328 01:09:05.625408 1130827 out.go:177] * Done! kubectl is now configured to use "no-preload-248059" cluster and "default" namespace by default
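	At this point the profile's kubeconfig context is in place; a minimal smoke test of the freshly started cluster (sketch, using the context named in the line above):
	kubectl --context no-preload-248059 get nodes -o wide
	kubectl --context no-preload-248059 -n kube-system get pods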
	I0328 01:09:05.587190 1131323 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:09:05.923219 1131323 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:09:06.087945 1131323 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:09:06.245638 1131323 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:09:06.266195 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:09:06.267461 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:09:06.267551 1131323 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:09:06.434155 1131323 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:09:06.436300 1131323 out.go:204]   - Booting up control plane ...
	I0328 01:09:06.436447 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:09:06.446573 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:09:06.447461 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:09:06.448313 1131323 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:09:06.450917 1131323 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:09:46.453199 1131323 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0328 01:09:46.453386 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:09:46.453643 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:09:51.454402 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:09:51.454665 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:10:01.455189 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:10:01.455417 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:10:21.456491 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:10:21.456726 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:11:01.456972 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:11:01.457256 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:11:01.457269 1131323 kubeadm.go:309] 
	I0328 01:11:01.457310 1131323 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0328 01:11:01.457404 1131323 kubeadm.go:309] 		timed out waiting for the condition
	I0328 01:11:01.457441 1131323 kubeadm.go:309] 
	I0328 01:11:01.457492 1131323 kubeadm.go:309] 	This error is likely caused by:
	I0328 01:11:01.457550 1131323 kubeadm.go:309] 		- The kubelet is not running
	I0328 01:11:01.457696 1131323 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0328 01:11:01.457708 1131323 kubeadm.go:309] 
	I0328 01:11:01.457856 1131323 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0328 01:11:01.457906 1131323 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0328 01:11:01.457935 1131323 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0328 01:11:01.457943 1131323 kubeadm.go:309] 
	I0328 01:11:01.458033 1131323 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0328 01:11:01.458139 1131323 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0328 01:11:01.458155 1131323 kubeadm.go:309] 
	I0328 01:11:01.458331 1131323 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0328 01:11:01.458483 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0328 01:11:01.458594 1131323 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0328 01:11:01.458707 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0328 01:11:01.458718 1131323 kubeadm.go:309] 
	I0328 01:11:01.459597 1131323 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:11:01.459737 1131323 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0328 01:11:01.459822 1131323 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0328 01:11:01.459962 1131323 kubeadm.go:393] duration metric: took 7m59.227261729s to StartCluster
	I0328 01:11:01.460023 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:11:01.460167 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:11:01.522644 1131323 cri.go:89] found id: ""
	I0328 01:11:01.522687 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.522700 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:11:01.522710 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:11:01.522782 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:11:01.567898 1131323 cri.go:89] found id: ""
	I0328 01:11:01.567928 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.567937 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:11:01.567945 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:11:01.568005 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:11:01.604782 1131323 cri.go:89] found id: ""
	I0328 01:11:01.604810 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.604819 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:11:01.604825 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:11:01.604935 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:11:01.642875 1131323 cri.go:89] found id: ""
	I0328 01:11:01.642908 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.642920 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:11:01.642929 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:11:01.642993 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:11:01.682186 1131323 cri.go:89] found id: ""
	I0328 01:11:01.682216 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.682223 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:11:01.682241 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:11:01.682312 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:11:01.720654 1131323 cri.go:89] found id: ""
	I0328 01:11:01.720689 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.720697 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:11:01.720704 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:11:01.720759 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:11:01.757340 1131323 cri.go:89] found id: ""
	I0328 01:11:01.757372 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.757383 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:11:01.757392 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:11:01.757462 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:11:01.797426 1131323 cri.go:89] found id: ""
	I0328 01:11:01.797462 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.797473 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:11:01.797488 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:11:01.797506 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:11:01.859582 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:11:01.859623 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:11:01.876027 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:11:01.876073 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:11:01.966513 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:11:01.966539 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:11:01.966557 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:11:02.084853 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:11:02.084894 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0328 01:11:02.127221 1131323 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0328 01:11:02.127288 1131323 out.go:239] * 
	W0328 01:11:02.127417 1131323 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0328 01:11:02.127456 1131323 out.go:239] * 
	W0328 01:11:02.128313 1131323 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 01:11:02.131916 1131323 out.go:177] 
	W0328 01:11:02.133288 1131323 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0328 01:11:02.133351 1131323 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0328 01:11:02.133381 1131323 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0328 01:11:02.134991 1131323 out.go:177] 
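	The suggestion above points at a kubelet/CRI-O cgroup-driver mismatch. A minimal way to act on it, shown here purely as an illustration (the profile name is a placeholder, and the config file paths are the usual CRI-O and kubelet defaults rather than values confirmed from this run):
	
	    # Compare the cgroup driver configured for CRI-O and for the kubelet (default paths assumed):
	    grep cgroup_manager /etc/crio/crio.conf
	    grep cgroupDriver /var/lib/kubelet/config.yaml
	
	    # Retry the start with the flag the log itself suggests (profile name is hypothetical):
	    minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
	      --extra-config=kubelet.cgroup-driver=systemd
	
	If the two drivers already match, the kubelet journal gathered below ('journalctl -u kubelet') is the next place to look.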
	
	
	==> CRI-O <==
	Mar 28 01:17:43 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:17:43.536592244Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:cdff81bda8229e3bd3e172682e542d1fa22fe17a7f68e5ed50fc500c6d40c543,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-gkv67,Uid:7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711588120053777980,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-gkv67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-28T01:08:39.745432030Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e7a6e9a7eeb6cc208ff629ec60c10d876640cdb9da4bdc52d76c224f8c98904a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:cb80efe2-521f-45d5-84e7-f6dc
216b4c6d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711588119967354895,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb80efe2-521f-45d5-84e7-f6dc216b4c6d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provision
er\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-28T01:08:39.657795457Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:803dccd16ce63ecad890423782c9d84b6c72ace6ba07f39a49de4bb6749d1736,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-qzcfp,Uid:8e7bfa94-f249-4f7a-be7b-9a615810c956,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711588118428797906,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-qzcfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e7bfa94-f249-4f7a-be7b-9a615810c956,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-28T01:08:38.118857983Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3cfb417abd42b4954bb5c2a4c8b3fdb3ccf29d420bf7165f81ee5ef1e199695c,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-gdv5x,Uid:5b4b835c
-ae9d-4eff-ab37-6ccb7e36a748,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711588118372037331,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-gdv5x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b4b835c-ae9d-4eff-ab37-6ccb7e36a748,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-28T01:08:38.058570990Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e09bb8fab1a58dddc95cb04ec5a31fc709e4569bb2cde76074684683933a5afe,Metadata:&PodSandboxMetadata{Name:kube-proxy-js7j2,Uid:1f31a6ee-9417-4f4a-ba0b-fdbab6a9169d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711588118339043879,Labels:map[string]string{controller-revision-hash: 7659797656,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-js7j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f31a6ee-9417-4f4a-ba0b-fdbab6a9169d,k8s-app: kube-pro
xy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-28T01:08:37.421764090Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0eb54a2d43e98c8847b017d2ee21af27cd75d4f8482dedf5f847491ca95bf120,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-283961,Uid:1dfb04a92f09d808d7e99d429b5cee4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711588098754955323,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dfb04a92f09d808d7e99d429b5cee4e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1dfb04a92f09d808d7e99d429b5cee4e,kubernetes.io/config.seen: 2024-03-28T01:08:18.281998603Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:40793c655cfd9768e76f2f83a71424a300d0aac057cc2f611bdda09ed1b2a3fc,Metadata:&PodSandb
oxMetadata{Name:kube-controller-manager-default-k8s-diff-port-283961,Uid:752d20882748f6f16766053339f66ac2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711588098726075704,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 752d20882748f6f16766053339f66ac2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 752d20882748f6f16766053339f66ac2,kubernetes.io/config.seen: 2024-03-28T01:08:18.281996846Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5433b9300e41b6ec6a7e4015ada8c4c66f270fa2b5ef7513159ae78190f66693,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-283961,Uid:a1de4e3c3c3539f681e560c69accf057,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711588098715936017,Labels:map[string]string{component: etcd,io.kubernetes.container.name: PO
D,io.kubernetes.pod.name: etcd-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1de4e3c3c3539f681e560c69accf057,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.224:2379,kubernetes.io/config.hash: a1de4e3c3c3539f681e560c69accf057,kubernetes.io/config.seen: 2024-03-28T01:08:18.282000077Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a1a715d4cc301d4ea0d63d94d5aa77988679a62407b33116761afb7146a874a1,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-283961,Uid:b67001339a7063aa3fa376614daa7f54,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711588098711819756,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67001339a7063aa3fa376614daa7f54,tier: control-plane,},Annotations:ma
p[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.224:8444,kubernetes.io/config.hash: b67001339a7063aa3fa376614daa7f54,kubernetes.io/config.seen: 2024-03-28T01:08:18.281990678Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=e0d9f076-1283-4274-a112-b389e6434e82 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 28 01:17:43 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:17:43.541150659Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e0bb057b-aa9b-4207-a212-fccc7c71bd0c name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:17:43 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:17:43.541262928Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e0bb057b-aa9b-4207-a212-fccc7c71bd0c name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:17:43 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:17:43.541450854Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e424026873582b3cb422868efb139c9493e87ce38c6f5d50d6c75052ba346e03,PodSandboxId:e7a6e9a7eeb6cc208ff629ec60c10d876640cdb9da4bdc52d76c224f8c98904a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711588120120863506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb80efe2-521f-45d5-84e7-f6dc216b4c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bb55d66,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de1711ee6a5dc43bc28c1177f89f91198505babc976460637ce8225259d5a408,PodSandboxId:803dccd16ce63ecad890423782c9d84b6c72ace6ba07f39a49de4bb6749d1736,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588119047022566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-qzcfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e7bfa94-f249-4f7a-be7b-9a615810c956,},Annotations:map[string]string{io.kubernetes.container.hash: 525c88a1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422a905518d5426ce48b860149b0f9588ee1bb14058d9bd1ac78a3ea72037fd9,PodSandboxId:e09bb8fab1a58dddc95cb04ec5a31fc709e4569bb2cde76074684683933a5afe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711588118915220409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-js7j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 1f31a6ee-9417-4f4a-ba0b-fdbab6a9169d,},Annotations:map[string]string{io.kubernetes.container.hash: f8ea6801,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae8bdde5b55d2c17376a80e0d2822b57a3d6af056ea8deac369393e0f38fd42,PodSandboxId:3cfb417abd42b4954bb5c2a4c8b3fdb3ccf29d420bf7165f81ee5ef1e199695c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588118976846562,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-gdv5x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b4b835c-ae9d-4eff-ab37-
6ccb7e36a748,},Annotations:map[string]string{io.kubernetes.container.hash: 5994b028,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a84313d97f96d700a70f3447583b8682711e293b8d7186846062d8b4f3b29f3,PodSandboxId:0eb54a2d43e98c8847b017d2ee21af27cd75d4f8482dedf5f847491ca95bf120,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:171158809910719692
1,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dfb04a92f09d808d7e99d429b5cee4e,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:360c718fc7dc9fd42d5b06bad743933fe575f0f169492bd0d6227e57e740f172,PodSandboxId:a1a715d4cc301d4ea0d63d94d5aa77988679a62407b33116761afb7146a874a1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711588099109294768,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67001339a7063aa3fa376614daa7f54,},Annotations:map[string]string{io.kubernetes.container.hash: a066b342,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a6698011c7da31695bc91fd0cc71cbd5f23ddcb2bb6527a8bc650716e83867,PodSandboxId:40793c655cfd9768e76f2f83a71424a300d0aac057cc2f611bdda09ed1b2a3fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711588098983
618428,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 752d20882748f6f16766053339f66ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a32cb718b54d521eab0e1da343ce520cc70f2542de9d33964fbf54e2bc80a70,PodSandboxId:5433b9300e41b6ec6a7e4015ada8c4c66f270fa2b5ef7513159ae78190f66693,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711588
098881885696,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1de4e3c3c3539f681e560c69accf057,},Annotations:map[string]string{io.kubernetes.container.hash: 56b0e412,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e0bb057b-aa9b-4207-a212-fccc7c71bd0c name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:17:43 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:17:43.575011644Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5a392b9d-66c2-4f0a-bb51-5f7cc6926484 name=/runtime.v1.RuntimeService/Version
	Mar 28 01:17:43 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:17:43.575235912Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5a392b9d-66c2-4f0a-bb51-5f7cc6926484 name=/runtime.v1.RuntimeService/Version
	Mar 28 01:17:43 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:17:43.576538530Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3acb37de-972d-4e74-bf32-feb0753691a8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:17:43 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:17:43.577038572Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711588663577017005,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3acb37de-972d-4e74-bf32-feb0753691a8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:17:43 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:17:43.577602691Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e2e95267-d655-4df3-936d-0191eaaab265 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:17:43 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:17:43.577721640Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e2e95267-d655-4df3-936d-0191eaaab265 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:17:43 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:17:43.578035723Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e424026873582b3cb422868efb139c9493e87ce38c6f5d50d6c75052ba346e03,PodSandboxId:e7a6e9a7eeb6cc208ff629ec60c10d876640cdb9da4bdc52d76c224f8c98904a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711588120120863506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb80efe2-521f-45d5-84e7-f6dc216b4c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bb55d66,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de1711ee6a5dc43bc28c1177f89f91198505babc976460637ce8225259d5a408,PodSandboxId:803dccd16ce63ecad890423782c9d84b6c72ace6ba07f39a49de4bb6749d1736,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588119047022566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-qzcfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e7bfa94-f249-4f7a-be7b-9a615810c956,},Annotations:map[string]string{io.kubernetes.container.hash: 525c88a1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422a905518d5426ce48b860149b0f9588ee1bb14058d9bd1ac78a3ea72037fd9,PodSandboxId:e09bb8fab1a58dddc95cb04ec5a31fc709e4569bb2cde76074684683933a5afe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711588118915220409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-js7j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 1f31a6ee-9417-4f4a-ba0b-fdbab6a9169d,},Annotations:map[string]string{io.kubernetes.container.hash: f8ea6801,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae8bdde5b55d2c17376a80e0d2822b57a3d6af056ea8deac369393e0f38fd42,PodSandboxId:3cfb417abd42b4954bb5c2a4c8b3fdb3ccf29d420bf7165f81ee5ef1e199695c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588118976846562,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-gdv5x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b4b835c-ae9d-4eff-ab37-
6ccb7e36a748,},Annotations:map[string]string{io.kubernetes.container.hash: 5994b028,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a84313d97f96d700a70f3447583b8682711e293b8d7186846062d8b4f3b29f3,PodSandboxId:0eb54a2d43e98c8847b017d2ee21af27cd75d4f8482dedf5f847491ca95bf120,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:171158809910719692
1,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dfb04a92f09d808d7e99d429b5cee4e,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:360c718fc7dc9fd42d5b06bad743933fe575f0f169492bd0d6227e57e740f172,PodSandboxId:a1a715d4cc301d4ea0d63d94d5aa77988679a62407b33116761afb7146a874a1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711588099109294768,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67001339a7063aa3fa376614daa7f54,},Annotations:map[string]string{io.kubernetes.container.hash: a066b342,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a6698011c7da31695bc91fd0cc71cbd5f23ddcb2bb6527a8bc650716e83867,PodSandboxId:40793c655cfd9768e76f2f83a71424a300d0aac057cc2f611bdda09ed1b2a3fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711588098983
618428,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 752d20882748f6f16766053339f66ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a32cb718b54d521eab0e1da343ce520cc70f2542de9d33964fbf54e2bc80a70,PodSandboxId:5433b9300e41b6ec6a7e4015ada8c4c66f270fa2b5ef7513159ae78190f66693,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711588
098881885696,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1de4e3c3c3539f681e560c69accf057,},Annotations:map[string]string{io.kubernetes.container.hash: 56b0e412,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e2e95267-d655-4df3-936d-0191eaaab265 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:17:43 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:17:43.620961844Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0d262c75-1b5c-47de-89a5-6bfccf603457 name=/runtime.v1.RuntimeService/Version
	Mar 28 01:17:43 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:17:43.621060752Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0d262c75-1b5c-47de-89a5-6bfccf603457 name=/runtime.v1.RuntimeService/Version
	Mar 28 01:17:43 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:17:43.623089885Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d4b1b372-9f96-4f6c-8118-5ca0c053f4b1 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:17:43 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:17:43.623550823Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711588663623524590,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d4b1b372-9f96-4f6c-8118-5ca0c053f4b1 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:17:43 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:17:43.624058267Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8e49ce9d-0ea0-4eff-9219-96fbc5a52124 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:17:43 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:17:43.624130485Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8e49ce9d-0ea0-4eff-9219-96fbc5a52124 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:17:43 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:17:43.624336326Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e424026873582b3cb422868efb139c9493e87ce38c6f5d50d6c75052ba346e03,PodSandboxId:e7a6e9a7eeb6cc208ff629ec60c10d876640cdb9da4bdc52d76c224f8c98904a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711588120120863506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb80efe2-521f-45d5-84e7-f6dc216b4c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bb55d66,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de1711ee6a5dc43bc28c1177f89f91198505babc976460637ce8225259d5a408,PodSandboxId:803dccd16ce63ecad890423782c9d84b6c72ace6ba07f39a49de4bb6749d1736,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588119047022566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-qzcfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e7bfa94-f249-4f7a-be7b-9a615810c956,},Annotations:map[string]string{io.kubernetes.container.hash: 525c88a1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422a905518d5426ce48b860149b0f9588ee1bb14058d9bd1ac78a3ea72037fd9,PodSandboxId:e09bb8fab1a58dddc95cb04ec5a31fc709e4569bb2cde76074684683933a5afe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711588118915220409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-js7j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 1f31a6ee-9417-4f4a-ba0b-fdbab6a9169d,},Annotations:map[string]string{io.kubernetes.container.hash: f8ea6801,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae8bdde5b55d2c17376a80e0d2822b57a3d6af056ea8deac369393e0f38fd42,PodSandboxId:3cfb417abd42b4954bb5c2a4c8b3fdb3ccf29d420bf7165f81ee5ef1e199695c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588118976846562,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-gdv5x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b4b835c-ae9d-4eff-ab37-
6ccb7e36a748,},Annotations:map[string]string{io.kubernetes.container.hash: 5994b028,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a84313d97f96d700a70f3447583b8682711e293b8d7186846062d8b4f3b29f3,PodSandboxId:0eb54a2d43e98c8847b017d2ee21af27cd75d4f8482dedf5f847491ca95bf120,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:171158809910719692
1,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dfb04a92f09d808d7e99d429b5cee4e,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:360c718fc7dc9fd42d5b06bad743933fe575f0f169492bd0d6227e57e740f172,PodSandboxId:a1a715d4cc301d4ea0d63d94d5aa77988679a62407b33116761afb7146a874a1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711588099109294768,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67001339a7063aa3fa376614daa7f54,},Annotations:map[string]string{io.kubernetes.container.hash: a066b342,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a6698011c7da31695bc91fd0cc71cbd5f23ddcb2bb6527a8bc650716e83867,PodSandboxId:40793c655cfd9768e76f2f83a71424a300d0aac057cc2f611bdda09ed1b2a3fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711588098983
618428,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 752d20882748f6f16766053339f66ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a32cb718b54d521eab0e1da343ce520cc70f2542de9d33964fbf54e2bc80a70,PodSandboxId:5433b9300e41b6ec6a7e4015ada8c4c66f270fa2b5ef7513159ae78190f66693,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711588
098881885696,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1de4e3c3c3539f681e560c69accf057,},Annotations:map[string]string{io.kubernetes.container.hash: 56b0e412,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8e49ce9d-0ea0-4eff-9219-96fbc5a52124 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:17:43 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:17:43.659991644Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=45f71c43-1e35-4291-828c-dfecbd8f1394 name=/runtime.v1.RuntimeService/Version
	Mar 28 01:17:43 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:17:43.660081750Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=45f71c43-1e35-4291-828c-dfecbd8f1394 name=/runtime.v1.RuntimeService/Version
	Mar 28 01:17:43 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:17:43.661898119Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a711c4d8-ae10-473e-8f3b-155f62a7d159 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:17:43 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:17:43.662467156Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711588663662444840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a711c4d8-ae10-473e-8f3b-155f62a7d159 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:17:43 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:17:43.663051098Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=88111ee8-8400-470f-bc35-f95ffaa04f1d name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:17:43 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:17:43.663121511Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=88111ee8-8400-470f-bc35-f95ffaa04f1d name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:17:43 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:17:43.663314185Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e424026873582b3cb422868efb139c9493e87ce38c6f5d50d6c75052ba346e03,PodSandboxId:e7a6e9a7eeb6cc208ff629ec60c10d876640cdb9da4bdc52d76c224f8c98904a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711588120120863506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb80efe2-521f-45d5-84e7-f6dc216b4c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bb55d66,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de1711ee6a5dc43bc28c1177f89f91198505babc976460637ce8225259d5a408,PodSandboxId:803dccd16ce63ecad890423782c9d84b6c72ace6ba07f39a49de4bb6749d1736,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588119047022566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-qzcfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e7bfa94-f249-4f7a-be7b-9a615810c956,},Annotations:map[string]string{io.kubernetes.container.hash: 525c88a1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422a905518d5426ce48b860149b0f9588ee1bb14058d9bd1ac78a3ea72037fd9,PodSandboxId:e09bb8fab1a58dddc95cb04ec5a31fc709e4569bb2cde76074684683933a5afe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711588118915220409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-js7j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 1f31a6ee-9417-4f4a-ba0b-fdbab6a9169d,},Annotations:map[string]string{io.kubernetes.container.hash: f8ea6801,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae8bdde5b55d2c17376a80e0d2822b57a3d6af056ea8deac369393e0f38fd42,PodSandboxId:3cfb417abd42b4954bb5c2a4c8b3fdb3ccf29d420bf7165f81ee5ef1e199695c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588118976846562,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-gdv5x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b4b835c-ae9d-4eff-ab37-
6ccb7e36a748,},Annotations:map[string]string{io.kubernetes.container.hash: 5994b028,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a84313d97f96d700a70f3447583b8682711e293b8d7186846062d8b4f3b29f3,PodSandboxId:0eb54a2d43e98c8847b017d2ee21af27cd75d4f8482dedf5f847491ca95bf120,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:171158809910719692
1,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dfb04a92f09d808d7e99d429b5cee4e,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:360c718fc7dc9fd42d5b06bad743933fe575f0f169492bd0d6227e57e740f172,PodSandboxId:a1a715d4cc301d4ea0d63d94d5aa77988679a62407b33116761afb7146a874a1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711588099109294768,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67001339a7063aa3fa376614daa7f54,},Annotations:map[string]string{io.kubernetes.container.hash: a066b342,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a6698011c7da31695bc91fd0cc71cbd5f23ddcb2bb6527a8bc650716e83867,PodSandboxId:40793c655cfd9768e76f2f83a71424a300d0aac057cc2f611bdda09ed1b2a3fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711588098983
618428,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 752d20882748f6f16766053339f66ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a32cb718b54d521eab0e1da343ce520cc70f2542de9d33964fbf54e2bc80a70,PodSandboxId:5433b9300e41b6ec6a7e4015ada8c4c66f270fa2b5ef7513159ae78190f66693,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711588
098881885696,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1de4e3c3c3539f681e560c69accf057,},Annotations:map[string]string{io.kubernetes.container.hash: 56b0e412,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=88111ee8-8400-470f-bc35-f95ffaa04f1d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e424026873582       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   e7a6e9a7eeb6c       storage-provisioner
	de1711ee6a5dc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   803dccd16ce63       coredns-76f75df574-qzcfp
	3ae8bdde5b55d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   3cfb417abd42b       coredns-76f75df574-gdv5x
	422a905518d54       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   9 minutes ago       Running             kube-proxy                0                   e09bb8fab1a58       kube-proxy-js7j2
	360c718fc7dc9       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   9 minutes ago       Running             kube-apiserver            2                   a1a715d4cc301       kube-apiserver-default-k8s-diff-port-283961
	0a84313d97f96       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   9 minutes ago       Running             kube-scheduler            2                   0eb54a2d43e98       kube-scheduler-default-k8s-diff-port-283961
	59a6698011c7d       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   9 minutes ago       Running             kube-controller-manager   2                   40793c655cfd9       kube-controller-manager-default-k8s-diff-port-283961
	5a32cb718b54d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   5433b9300e41b       etcd-default-k8s-diff-port-283961
	
	
	==> coredns [3ae8bdde5b55d2c17376a80e0d2822b57a3d6af056ea8deac369393e0f38fd42] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [de1711ee6a5dc43bc28c1177f89f91198505babc976460637ce8225259d5a408] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-283961
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-283961
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=default-k8s-diff-port-283961
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_28T01_08_25_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 01:08:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-283961
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 01:17:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 01:13:52 +0000   Thu, 28 Mar 2024 01:08:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 01:13:52 +0000   Thu, 28 Mar 2024 01:08:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 01:13:52 +0000   Thu, 28 Mar 2024 01:08:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 01:13:52 +0000   Thu, 28 Mar 2024 01:08:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.224
	  Hostname:    default-k8s-diff-port-283961
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 87d8e612642044708c030a5a4ca94107
	  System UUID:                87d8e612-6420-4470-8c03-0a5a4ca94107
	  Boot ID:                    d1c4e68a-97a6-4101-8e7c-c0a713f0e9a0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-gdv5x                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 coredns-76f75df574-qzcfp                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 etcd-default-k8s-diff-port-283961                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-default-k8s-diff-port-283961             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-283961    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-js7j2                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 kube-scheduler-default-k8s-diff-port-283961             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-57f55c9bc5-gkv67                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m4s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m4s   kube-proxy       
	  Normal  Starting                 9m18s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m18s  kubelet          Node default-k8s-diff-port-283961 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s  kubelet          Node default-k8s-diff-port-283961 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s  kubelet          Node default-k8s-diff-port-283961 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m7s   node-controller  Node default-k8s-diff-port-283961 event: Registered Node default-k8s-diff-port-283961 in Controller
	
	
	==> dmesg <==
	[  +0.055565] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045209] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Mar28 01:03] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.847710] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.666146] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.009088] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.062189] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067957] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.206518] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.169610] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.349923] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +4.885715] systemd-fstab-generator[774]: Ignoring "noauto" option for root device
	[  +0.076595] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.254116] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[  +5.608091] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.662961] kauditd_printk_skb: 74 callbacks suppressed
	[Mar28 01:08] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.517241] systemd-fstab-generator[3401]: Ignoring "noauto" option for root device
	[  +4.665698] kauditd_printk_skb: 55 callbacks suppressed
	[  +2.662670] systemd-fstab-generator[3727]: Ignoring "noauto" option for root device
	[ +12.977680] systemd-fstab-generator[3916]: Ignoring "noauto" option for root device
	[  +0.118919] kauditd_printk_skb: 14 callbacks suppressed
	[Mar28 01:09] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [5a32cb718b54d521eab0e1da343ce520cc70f2542de9d33964fbf54e2bc80a70] <==
	{"level":"info","ts":"2024-03-28T01:08:19.156598Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84bfccc973752067 switched to configuration voters=(9565589299155771495)"}
	{"level":"info","ts":"2024-03-28T01:08:19.15785Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6ff541a05f82feac","local-member-id":"84bfccc973752067","added-peer-id":"84bfccc973752067","added-peer-peer-urls":["https://192.168.39.224:2380"]}
	{"level":"info","ts":"2024-03-28T01:08:19.156613Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.224:2380"}
	{"level":"info","ts":"2024-03-28T01:08:19.157881Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.224:2380"}
	{"level":"info","ts":"2024-03-28T01:08:19.157515Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-28T01:08:19.157598Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-28T01:08:19.157956Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-28T01:08:19.717119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84bfccc973752067 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-28T01:08:19.717275Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84bfccc973752067 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-28T01:08:19.717497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84bfccc973752067 received MsgPreVoteResp from 84bfccc973752067 at term 1"}
	{"level":"info","ts":"2024-03-28T01:08:19.717802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84bfccc973752067 became candidate at term 2"}
	{"level":"info","ts":"2024-03-28T01:08:19.717844Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84bfccc973752067 received MsgVoteResp from 84bfccc973752067 at term 2"}
	{"level":"info","ts":"2024-03-28T01:08:19.71788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84bfccc973752067 became leader at term 2"}
	{"level":"info","ts":"2024-03-28T01:08:19.718046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 84bfccc973752067 elected leader 84bfccc973752067 at term 2"}
	{"level":"info","ts":"2024-03-28T01:08:19.722033Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T01:08:19.724888Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"84bfccc973752067","local-member-attributes":"{Name:default-k8s-diff-port-283961 ClientURLs:[https://192.168.39.224:2379]}","request-path":"/0/members/84bfccc973752067/attributes","cluster-id":"6ff541a05f82feac","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-28T01:08:19.72497Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T01:08:19.724827Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6ff541a05f82feac","local-member-id":"84bfccc973752067","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T01:08:19.727799Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T01:08:19.727898Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T01:08:19.727957Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T01:08:19.744263Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.224:2379"}
	{"level":"info","ts":"2024-03-28T01:08:19.755431Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-28T01:08:19.760731Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-28T01:08:19.76787Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 01:17:44 up 14 min,  0 users,  load average: 0.16, 0.18, 0.12
	Linux default-k8s-diff-port-283961 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [360c718fc7dc9fd42d5b06bad743933fe575f0f169492bd0d6227e57e740f172] <==
	I0328 01:11:40.580359       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:13:21.887069       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:13:21.887483       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0328 01:13:22.888075       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:13:22.888182       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0328 01:13:22.888303       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:13:22.888173       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:13:22.888441       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0328 01:13:22.889622       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:14:22.889180       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:14:22.889485       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0328 01:14:22.889524       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:14:22.890704       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:14:22.890811       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0328 01:14:22.890863       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:16:22.889628       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:16:22.889797       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0328 01:16:22.889806       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:16:22.891026       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:16:22.891119       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0328 01:16:22.891126       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [59a6698011c7da31695bc91fd0cc71cbd5f23ddcb2bb6527a8bc650716e83867] <==
	I0328 01:12:07.511983       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:12:37.032207       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:12:37.520811       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:13:07.038471       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:13:07.530054       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:13:37.045182       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:13:37.539242       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:14:07.050548       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:14:07.547979       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:14:37.056938       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:14:37.555564       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0328 01:14:39.346240       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="287.636µs"
	I0328 01:14:50.340946       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="178.767µs"
	E0328 01:15:07.062052       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:15:07.565089       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:15:37.070838       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:15:37.574204       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:16:07.076289       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:16:07.582596       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:16:37.082197       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:16:37.592373       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:17:07.087972       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:17:07.609253       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:17:37.093777       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:17:37.620095       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [422a905518d5426ce48b860149b0f9588ee1bb14058d9bd1ac78a3ea72037fd9] <==
	I0328 01:08:39.415846       1 server_others.go:72] "Using iptables proxy"
	I0328 01:08:39.459000       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.224"]
	I0328 01:08:39.694064       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 01:08:39.694084       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 01:08:39.694101       1 server_others.go:168] "Using iptables Proxier"
	I0328 01:08:39.743200       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 01:08:39.756270       1 server.go:865] "Version info" version="v1.29.3"
	I0328 01:08:39.763781       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:08:39.766583       1 config.go:188] "Starting service config controller"
	I0328 01:08:39.766677       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 01:08:39.766838       1 config.go:97] "Starting endpoint slice config controller"
	I0328 01:08:39.766904       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 01:08:39.771058       1 config.go:315] "Starting node config controller"
	I0328 01:08:39.771148       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 01:08:39.868141       1 shared_informer.go:318] Caches are synced for service config
	I0328 01:08:39.868391       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0328 01:08:39.872050       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [0a84313d97f96d700a70f3447583b8682711e293b8d7186846062d8b4f3b29f3] <==
	W0328 01:08:21.911272       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0328 01:08:21.911976       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0328 01:08:21.911353       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0328 01:08:21.912226       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0328 01:08:22.764886       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0328 01:08:22.765181       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0328 01:08:22.774715       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0328 01:08:22.775188       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0328 01:08:22.796723       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0328 01:08:22.796776       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0328 01:08:22.807576       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0328 01:08:22.807615       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0328 01:08:22.834169       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0328 01:08:22.834237       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0328 01:08:22.916223       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0328 01:08:22.916273       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0328 01:08:22.945328       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0328 01:08:22.945382       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0328 01:08:22.946475       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0328 01:08:22.946519       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0328 01:08:23.199043       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0328 01:08:23.199092       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0328 01:08:23.238874       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0328 01:08:23.238972       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 01:08:25.088378       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 28 01:15:25 default-k8s-diff-port-283961 kubelet[3734]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 01:15:25 default-k8s-diff-port-283961 kubelet[3734]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 01:15:25 default-k8s-diff-port-283961 kubelet[3734]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 01:15:25 default-k8s-diff-port-283961 kubelet[3734]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 01:15:28 default-k8s-diff-port-283961 kubelet[3734]: E0328 01:15:28.321605    3734 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gkv67" podUID="7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356"
	Mar 28 01:15:41 default-k8s-diff-port-283961 kubelet[3734]: E0328 01:15:41.322044    3734 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gkv67" podUID="7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356"
	Mar 28 01:15:56 default-k8s-diff-port-283961 kubelet[3734]: E0328 01:15:56.321916    3734 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gkv67" podUID="7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356"
	Mar 28 01:16:09 default-k8s-diff-port-283961 kubelet[3734]: E0328 01:16:09.321869    3734 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gkv67" podUID="7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356"
	Mar 28 01:16:23 default-k8s-diff-port-283961 kubelet[3734]: E0328 01:16:23.322422    3734 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gkv67" podUID="7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356"
	Mar 28 01:16:25 default-k8s-diff-port-283961 kubelet[3734]: E0328 01:16:25.362910    3734 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 01:16:25 default-k8s-diff-port-283961 kubelet[3734]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 01:16:25 default-k8s-diff-port-283961 kubelet[3734]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 01:16:25 default-k8s-diff-port-283961 kubelet[3734]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 01:16:25 default-k8s-diff-port-283961 kubelet[3734]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 01:16:34 default-k8s-diff-port-283961 kubelet[3734]: E0328 01:16:34.321208    3734 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gkv67" podUID="7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356"
	Mar 28 01:16:47 default-k8s-diff-port-283961 kubelet[3734]: E0328 01:16:47.325338    3734 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gkv67" podUID="7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356"
	Mar 28 01:16:58 default-k8s-diff-port-283961 kubelet[3734]: E0328 01:16:58.321599    3734 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gkv67" podUID="7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356"
	Mar 28 01:17:09 default-k8s-diff-port-283961 kubelet[3734]: E0328 01:17:09.321720    3734 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gkv67" podUID="7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356"
	Mar 28 01:17:22 default-k8s-diff-port-283961 kubelet[3734]: E0328 01:17:22.322419    3734 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gkv67" podUID="7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356"
	Mar 28 01:17:25 default-k8s-diff-port-283961 kubelet[3734]: E0328 01:17:25.364780    3734 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 01:17:25 default-k8s-diff-port-283961 kubelet[3734]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 01:17:25 default-k8s-diff-port-283961 kubelet[3734]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 01:17:25 default-k8s-diff-port-283961 kubelet[3734]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 01:17:25 default-k8s-diff-port-283961 kubelet[3734]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 01:17:34 default-k8s-diff-port-283961 kubelet[3734]: E0328 01:17:34.322354    3734 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gkv67" podUID="7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356"
	
	
	==> storage-provisioner [e424026873582b3cb422868efb139c9493e87ce38c6f5d50d6c75052ba346e03] <==
	I0328 01:08:40.285577       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0328 01:08:40.311694       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0328 01:08:40.311770       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0328 01:08:40.324029       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0328 01:08:40.324588       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-283961_4416619f-58e4-47e9-bc15-4a33ec62ad43!
	I0328 01:08:40.327785       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9a7ed12b-89ef-41b7-afcc-a955c8331b11", APIVersion:"v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-283961_4416619f-58e4-47e9-bc15-4a33ec62ad43 became leader
	I0328 01:08:40.425544       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-283961_4416619f-58e4-47e9-bc15-4a33ec62ad43!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-283961 -n default-k8s-diff-port-283961
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-283961 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-gkv67
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-283961 describe pod metrics-server-57f55c9bc5-gkv67
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-283961 describe pod metrics-server-57f55c9bc5-gkv67: exit status 1 (75.889606ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-gkv67" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-283961 describe pod metrics-server-57f55c9bc5-gkv67: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.21s)

x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.24s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0328 01:09:17.406462 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
E0328 01:09:52.855351 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/auto-443419/client.crt: no such file or directory
E0328 01:10:28.662673 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kindnet-443419/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-248059 -n no-preload-248059
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-28 01:18:06.270647141 +0000 UTC m=+6317.522126081
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
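A manual spot-check equivalent to this wait, assuming the no-preload-248059 profile and its kubeconfig context still exist, is a kubectl wait against the same namespace and label selector the test polls:

	kubectl --context no-preload-248059 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m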
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-248059 -n no-preload-248059
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-248059 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-248059 logs -n 25: (2.070495415s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| stop    | -p no-preload-248059                                   | no-preload-248059            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-808809            | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-808809                                  | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p newest-cni-013642             | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p newest-cni-013642                  | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p newest-cni-013642 --memory=2200 --alsologtostderr   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:56 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |                |                     |                     |
	| image   | newest-cni-013642 image list                           | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	|         | --format=json                                          |                              |         |                |                     |                     |
	| pause   | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| unpause | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| delete  | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	| delete  | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	| start   | -p                                                     | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:57 UTC |
	|         | default-k8s-diff-port-283961                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-986088        | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-248059                  | no-preload-248059            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-283961  | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC | 28 Mar 24 00:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | default-k8s-diff-port-283961                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| start   | -p no-preload-248059 --memory=2200                     | no-preload-248059            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC | 28 Mar 24 01:09 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-808809                 | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-808809                                  | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC | 28 Mar 24 01:07 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-986088                              | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC | 28 Mar 24 00:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-986088             | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC | 28 Mar 24 00:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-986088                              | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-283961       | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 01:00 UTC | 28 Mar 24 01:08 UTC |
	|         | default-k8s-diff-port-283961                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 01:00:05
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 01:00:05.675380 1131600 out.go:291] Setting OutFile to fd 1 ...
	I0328 01:00:05.675675 1131600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 01:00:05.675710 1131600 out.go:304] Setting ErrFile to fd 2...
	I0328 01:00:05.675718 1131600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 01:00:05.676017 1131600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 01:00:05.676919 1131600 out.go:298] Setting JSON to false
	I0328 01:00:05.678046 1131600 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":31303,"bootTime":1711556303,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0328 01:00:05.678129 1131600 start.go:139] virtualization: kvm guest
	I0328 01:00:05.681128 1131600 out.go:177] * [default-k8s-diff-port-283961] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0328 01:00:05.683139 1131600 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 01:00:05.683129 1131600 notify.go:220] Checking for updates...
	I0328 01:00:05.685082 1131600 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 01:00:05.686765 1131600 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:00:05.688389 1131600 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	I0328 01:00:05.690187 1131600 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0328 01:00:05.691887 1131600 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 01:00:05.693775 1131600 config.go:182] Loaded profile config "default-k8s-diff-port-283961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:00:05.694270 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:00:05.694323 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:00:05.709757 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35333
	I0328 01:00:05.710275 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:00:05.710875 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:00:05.710900 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:00:05.711323 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:00:05.711531 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:00:05.711893 1131600 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 01:00:05.712342 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:00:05.712392 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:00:05.727583 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44249
	I0328 01:00:05.728107 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:00:05.728595 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:00:05.728625 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:00:05.728945 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:00:05.729170 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:00:05.763895 1131600 out.go:177] * Using the kvm2 driver based on existing profile
	I0328 01:00:05.765397 1131600 start.go:297] selected driver: kvm2
	I0328 01:00:05.765431 1131600 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:00:05.765564 1131600 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 01:00:05.766282 1131600 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 01:00:05.766391 1131600 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18485-1069254/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0328 01:00:05.783130 1131600 install.go:137] /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0328 01:00:05.783602 1131600 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:00:05.783724 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:00:05.783745 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:00:05.783795 1131600 start.go:340] cluster config:
	{Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:00:05.783949 1131600 iso.go:125] acquiring lock: {Name:mk3da1fa7d63a581c817f327d86a827697458cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 01:00:05.785871 1131600 out.go:177] * Starting "default-k8s-diff-port-283961" primary control-plane node in "default-k8s-diff-port-283961" cluster
	I0328 01:00:02.570474 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:05.787210 1131600 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 01:00:05.787259 1131600 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0328 01:00:05.787272 1131600 cache.go:56] Caching tarball of preloaded images
	I0328 01:00:05.787364 1131600 preload.go:173] Found /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0328 01:00:05.787376 1131600 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0328 01:00:05.787509 1131600 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/config.json ...
	I0328 01:00:05.787742 1131600 start.go:360] acquireMachinesLock for default-k8s-diff-port-283961: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 01:00:08.650481 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:11.722571 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:17.802536 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:20.874568 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:26.954473 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:30.026674 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:36.106489 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:39.178555 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:45.258539 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:48.330581 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:54.410577 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:57.482545 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:03.562558 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:06.634602 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:12.714559 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:15.786597 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:21.866544 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:24.938619 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:31.018631 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:34.090562 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:40.170864 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:43.242565 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:49.322492 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:52.394572 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:58.474562 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:01.546621 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:07.626510 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:10.698534 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:13.703348 1130949 start.go:364] duration metric: took 4m25.677777198s to acquireMachinesLock for "embed-certs-808809"
	I0328 01:02:13.703416 1130949 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:02:13.703429 1130949 fix.go:54] fixHost starting: 
	I0328 01:02:13.703888 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:02:13.703923 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:02:13.719480 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44835
	I0328 01:02:13.719968 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:02:13.720450 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:02:13.720475 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:02:13.720774 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:02:13.721011 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:13.721182 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:02:13.722796 1130949 fix.go:112] recreateIfNeeded on embed-certs-808809: state=Stopped err=<nil>
	I0328 01:02:13.722828 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	W0328 01:02:13.722972 1130949 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:02:13.724895 1130949 out.go:177] * Restarting existing kvm2 VM for "embed-certs-808809" ...
	I0328 01:02:13.700647 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:02:13.700689 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:02:13.701054 1130827 buildroot.go:166] provisioning hostname "no-preload-248059"
	I0328 01:02:13.701085 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:02:13.701344 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:02:13.703200 1130827 machine.go:97] duration metric: took 4m37.399616994s to provisionDockerMachine
	I0328 01:02:13.703243 1130827 fix.go:56] duration metric: took 4m37.42352766s for fixHost
	I0328 01:02:13.703249 1130827 start.go:83] releasing machines lock for "no-preload-248059", held for 4m37.423563163s
	W0328 01:02:13.703274 1130827 start.go:713] error starting host: provision: host is not running
	W0328 01:02:13.703400 1130827 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0328 01:02:13.703411 1130827 start.go:728] Will try again in 5 seconds ...
	I0328 01:02:13.726437 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Start
	I0328 01:02:13.726574 1130949 main.go:141] libmachine: (embed-certs-808809) Ensuring networks are active...
	I0328 01:02:13.727407 1130949 main.go:141] libmachine: (embed-certs-808809) Ensuring network default is active
	I0328 01:02:13.727667 1130949 main.go:141] libmachine: (embed-certs-808809) Ensuring network mk-embed-certs-808809 is active
	I0328 01:02:13.728050 1130949 main.go:141] libmachine: (embed-certs-808809) Getting domain xml...
	I0328 01:02:13.728836 1130949 main.go:141] libmachine: (embed-certs-808809) Creating domain...
	I0328 01:02:14.931757 1130949 main.go:141] libmachine: (embed-certs-808809) Waiting to get IP...
	I0328 01:02:14.932921 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:14.933298 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:14.933396 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:14.933294 1131950 retry.go:31] will retry after 279.257708ms: waiting for machine to come up
	I0328 01:02:15.213830 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:15.214439 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:15.214472 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:15.214415 1131950 retry.go:31] will retry after 387.406107ms: waiting for machine to come up
	I0328 01:02:15.603078 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:15.603464 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:15.603497 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:15.603431 1131950 retry.go:31] will retry after 466.553599ms: waiting for machine to come up
	I0328 01:02:16.072165 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:16.072702 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:16.072732 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:16.072643 1131950 retry.go:31] will retry after 375.428381ms: waiting for machine to come up
	I0328 01:02:16.449155 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:16.449614 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:16.449652 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:16.449553 1131950 retry.go:31] will retry after 466.238903ms: waiting for machine to come up
	I0328 01:02:16.917246 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:16.917697 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:16.917723 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:16.917633 1131950 retry.go:31] will retry after 772.819544ms: waiting for machine to come up
	I0328 01:02:17.691645 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:17.692121 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:17.692151 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:17.692071 1131950 retry.go:31] will retry after 1.19065976s: waiting for machine to come up
	I0328 01:02:18.704949 1130827 start.go:360] acquireMachinesLock for no-preload-248059: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 01:02:18.884525 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:18.885019 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:18.885044 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:18.884980 1131950 retry.go:31] will retry after 1.434726863s: waiting for machine to come up
	I0328 01:02:20.321473 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:20.322009 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:20.322035 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:20.321951 1131950 retry.go:31] will retry after 1.275277555s: waiting for machine to come up
	I0328 01:02:21.599454 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:21.600049 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:21.600074 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:21.599982 1131950 retry.go:31] will retry after 1.852516502s: waiting for machine to come up
	I0328 01:02:23.455282 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:23.455760 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:23.455830 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:23.455746 1131950 retry.go:31] will retry after 2.056736141s: waiting for machine to come up
	I0328 01:02:25.514112 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:25.514538 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:25.514569 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:25.514492 1131950 retry.go:31] will retry after 2.711520437s: waiting for machine to come up
	I0328 01:02:32.751719 1131323 start.go:364] duration metric: took 3m27.302408957s to acquireMachinesLock for "old-k8s-version-986088"
	I0328 01:02:32.751823 1131323 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:02:32.751833 1131323 fix.go:54] fixHost starting: 
	I0328 01:02:32.752289 1131323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:02:32.752326 1131323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:02:32.770119 1131323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45807
	I0328 01:02:32.770723 1131323 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:02:32.771352 1131323 main.go:141] libmachine: Using API Version  1
	I0328 01:02:32.771380 1131323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:02:32.771790 1131323 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:02:32.772020 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:32.772206 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetState
	I0328 01:02:32.773947 1131323 fix.go:112] recreateIfNeeded on old-k8s-version-986088: state=Stopped err=<nil>
	I0328 01:02:32.773980 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	W0328 01:02:32.774166 1131323 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:02:32.776416 1131323 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-986088" ...
	I0328 01:02:28.229576 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:28.229970 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:28.230000 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:28.229920 1131950 retry.go:31] will retry after 3.231405371s: waiting for machine to come up
	I0328 01:02:31.463477 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.463884 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has current primary IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.463902 1130949 main.go:141] libmachine: (embed-certs-808809) Found IP for machine: 192.168.72.210
	I0328 01:02:31.463915 1130949 main.go:141] libmachine: (embed-certs-808809) Reserving static IP address...
	I0328 01:02:31.464394 1130949 main.go:141] libmachine: (embed-certs-808809) Reserved static IP address: 192.168.72.210
	I0328 01:02:31.464413 1130949 main.go:141] libmachine: (embed-certs-808809) Waiting for SSH to be available...
	I0328 01:02:31.464439 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "embed-certs-808809", mac: "52:54:00:60:d4:d2", ip: "192.168.72.210"} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.464464 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | skip adding static IP to network mk-embed-certs-808809 - found existing host DHCP lease matching {name: "embed-certs-808809", mac: "52:54:00:60:d4:d2", ip: "192.168.72.210"}
	I0328 01:02:31.464480 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Getting to WaitForSSH function...
	I0328 01:02:31.466488 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.466876 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.466916 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.467054 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Using SSH client type: external
	I0328 01:02:31.467085 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa (-rw-------)
	I0328 01:02:31.467124 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:02:31.467138 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | About to run SSH command:
	I0328 01:02:31.467155 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | exit 0
	I0328 01:02:31.590708 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | SSH cmd err, output: <nil>: 
	I0328 01:02:31.591111 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetConfigRaw
	I0328 01:02:31.591959 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:31.594592 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.595075 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.595114 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.595364 1130949 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/config.json ...
	I0328 01:02:31.595634 1130949 machine.go:94] provisionDockerMachine start ...
	I0328 01:02:31.595656 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:31.595901 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.598184 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.598529 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.598556 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.598681 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:31.598851 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.599012 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.599163 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:31.599333 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:31.599604 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:31.599619 1130949 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:02:31.703241 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:02:31.703272 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetMachineName
	I0328 01:02:31.703575 1130949 buildroot.go:166] provisioning hostname "embed-certs-808809"
	I0328 01:02:31.703602 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetMachineName
	I0328 01:02:31.703779 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.706495 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.706777 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.706799 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.706978 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:31.707146 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.707334 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.707580 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:31.707765 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:31.707985 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:31.708004 1130949 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-808809 && echo "embed-certs-808809" | sudo tee /etc/hostname
	I0328 01:02:31.821578 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-808809
	
	I0328 01:02:31.821608 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.824412 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.824791 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.824825 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.825030 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:31.825253 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.825432 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.825589 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:31.825758 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:31.825950 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:31.825976 1130949 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-808809' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-808809/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-808809' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:02:31.937655 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
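
The hostname step above is two idempotent SSH commands: write the new name to /etc/hostname, then make it resolvable by editing /etc/hosts — replace an existing 127.0.1.1 entry if one is present, otherwise append a new one. A minimal Go sketch of that /etc/hosts logic, operating on file contents in memory (the function name and sample data are illustrative, not minikube's actual provision code):

    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    // ensureHostsEntry mirrors the logged shell snippet: keep /etc/hosts as-is if
    // the hostname already resolves, rewrite an existing 127.0.1.1 line if there
    // is one, otherwise append a new entry.
    func ensureHostsEntry(hosts, hostname string) string {
    	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(hosts) {
    		return hosts
    	}
    	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if loopback.MatchString(hosts) {
    		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
    	}
    	if !strings.HasSuffix(hosts, "\n") {
    		hosts += "\n"
    	}
    	return hosts + "127.0.1.1 " + hostname + "\n"
    }

    func main() {
    	sample := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
    	fmt.Print(ensureHostsEntry(sample, "embed-certs-808809"))
    }
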
	I0328 01:02:31.937701 1130949 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:02:31.937728 1130949 buildroot.go:174] setting up certificates
	I0328 01:02:31.937742 1130949 provision.go:84] configureAuth start
	I0328 01:02:31.937754 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetMachineName
	I0328 01:02:31.938093 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:31.940874 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.941328 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.941360 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.941580 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.944250 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.944580 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.944610 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.944828 1130949 provision.go:143] copyHostCerts
	I0328 01:02:31.944910 1130949 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:02:31.944926 1130949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:02:31.945006 1130949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:02:31.945151 1130949 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:02:31.945162 1130949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:02:31.945205 1130949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:02:31.945285 1130949 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:02:31.945294 1130949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:02:31.945330 1130949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:02:31.945400 1130949 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.embed-certs-808809 san=[127.0.0.1 192.168.72.210 embed-certs-808809 localhost minikube]
	I0328 01:02:32.070925 1130949 provision.go:177] copyRemoteCerts
	I0328 01:02:32.071007 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:02:32.071067 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.073876 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.074295 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.074339 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.074541 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.074754 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.074931 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.075091 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.158945 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:02:32.184903 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0328 01:02:32.210411 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:02:32.235788 1130949 provision.go:87] duration metric: took 298.03126ms to configureAuth
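
configureAuth regenerates the machine server certificate with the SAN set shown above (127.0.0.1, 192.168.72.210, embed-certs-808809, localhost, minikube) before copying it to /etc/docker. A simplified crypto/x509 sketch that produces a certificate carrying those SANs; it is self-signed for brevity, whereas the real provisioner signs it with the CA under .minikube/certs:

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Simplified stand-in for the provisioning step: a server certificate
    	// carrying the SAN set from the log. Self-signed here; minikube signs
    	// with its shared CA instead.
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-808809"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"embed-certs-808809", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.210")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
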
	I0328 01:02:32.235827 1130949 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:02:32.236116 1130949 config.go:182] Loaded profile config "embed-certs-808809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:02:32.236336 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.239186 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.239520 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.239555 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.239782 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.240036 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.240257 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.240431 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.240633 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:32.240836 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:32.240862 1130949 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:02:32.513263 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:02:32.513298 1130949 machine.go:97] duration metric: took 917.647337ms to provisionDockerMachine
	I0328 01:02:32.513314 1130949 start.go:293] postStartSetup for "embed-certs-808809" (driver="kvm2")
	I0328 01:02:32.513326 1130949 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:02:32.513365 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.513727 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:02:32.513770 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.516906 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.517382 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.517425 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.517603 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.517831 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.517989 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.518115 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.600013 1130949 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:02:32.604953 1130949 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:02:32.604983 1130949 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:02:32.605057 1130949 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:02:32.605148 1130949 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:02:32.605265 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:02:32.617685 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:02:32.646415 1130949 start.go:296] duration metric: took 133.084551ms for postStartSetup
	I0328 01:02:32.646462 1130949 fix.go:56] duration metric: took 18.943034019s for fixHost
	I0328 01:02:32.646490 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.649346 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.649686 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.649717 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.649864 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.650191 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.650444 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.650637 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.650844 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:32.651036 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:32.651069 1130949 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 01:02:32.751522 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587752.718800758
	
	I0328 01:02:32.751547 1130949 fix.go:216] guest clock: 1711587752.718800758
	I0328 01:02:32.751556 1130949 fix.go:229] Guest: 2024-03-28 01:02:32.718800758 +0000 UTC Remote: 2024-03-28 01:02:32.646466137 +0000 UTC m=+284.780134501 (delta=72.334621ms)
	I0328 01:02:32.751598 1130949 fix.go:200] guest clock delta is within tolerance: 72.334621ms
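
fix.go compares the guest clock (read over SSH with date) against the host-side reference and only resynchronizes when the delta exceeds a tolerance; here the ~72ms delta passes. A toy reproduction of that comparison using the timestamps printed above (the 2-second tolerance is an assumed value for illustration, not necessarily minikube's):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Guest clock as reported over SSH (seconds.nanoseconds) and the
    	// host-side reference time, both taken from the log lines above.
    	guest := time.Unix(1711587752, 718800758).UTC()
    	remote := time.Date(2024, 3, 28, 1, 2, 32, 646466137, time.UTC)

    	delta := guest.Sub(remote)
    	if delta < 0 {
    		delta = -delta
    	}
    	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta < 2*time.Second)
    }
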
	I0328 01:02:32.751610 1130949 start.go:83] releasing machines lock for "embed-certs-808809", held for 19.048217918s
	I0328 01:02:32.751638 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.751953 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:32.754795 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.755205 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.755240 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.755454 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.756111 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.756320 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.756412 1130949 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:02:32.756475 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.756612 1130949 ssh_runner.go:195] Run: cat /version.json
	I0328 01:02:32.756646 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.759337 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.759468 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.759788 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.759808 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.759845 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.759866 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.760009 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.760018 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.760214 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.760222 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.760364 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.760532 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.760639 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.760698 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.840137 1130949 ssh_runner.go:195] Run: systemctl --version
	I0328 01:02:32.874039 1130949 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:02:33.020534 1130949 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:02:33.027141 1130949 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:02:33.027213 1130949 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:02:33.043738 1130949 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:02:33.043767 1130949 start.go:494] detecting cgroup driver to use...
	I0328 01:02:33.043840 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:02:33.064332 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:02:33.081926 1130949 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:02:33.082016 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:02:33.097179 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:02:33.113157 1130949 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:02:33.233183 1130949 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:02:33.374061 1130949 docker.go:233] disabling docker service ...
	I0328 01:02:33.374145 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:02:33.389813 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:02:33.403439 1130949 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:02:33.546146 1130949 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:02:33.706968 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
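
Switching the node to CRI-O first stops, disables and masks the cri-docker and docker units, as the systemctl invocations above show. A hedged sketch of that sequence driven from Go (unit names copied from the Run: lines; failures are simply logged in this sketch, since some units may not exist on a given image):

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// Same systemctl invocations as the ssh_runner lines above, run locally.
    	steps := [][]string{
    		{"systemctl", "stop", "-f", "cri-docker.socket"},
    		{"systemctl", "stop", "-f", "cri-docker.service"},
    		{"systemctl", "disable", "cri-docker.socket"},
    		{"systemctl", "mask", "cri-docker.service"},
    		{"systemctl", "stop", "-f", "docker.socket"},
    		{"systemctl", "stop", "-f", "docker.service"},
    		{"systemctl", "disable", "docker.socket"},
    		{"systemctl", "mask", "docker.service"},
    	}
    	for _, args := range steps {
    		out, err := exec.Command("sudo", args...).CombinedOutput()
    		if err != nil {
    			log.Printf("%v: %v (%s)", args, err, out)
    		}
    	}
    }
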
	I0328 01:02:33.722279 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:02:33.742578 1130949 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 01:02:33.742652 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.754966 1130949 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:02:33.755027 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.767170 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.779960 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.792448 1130949 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:02:33.804912 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.818038 1130949 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.838794 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
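
The sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.9, force cgroup_manager to cgroupfs, set conmon_cgroup to pod, and allow unprivileged low ports via default_sysctls. A minimal Go stand-in for the first two substitutions applied to an in-memory config (the sample input below is invented for illustration):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// Invented sample of a crio.conf drop-in, edited the same way the sed
    	// commands in the log edit 02-crio.conf.
    	conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.8"

    [crio.runtime]
    cgroup_manager = "systemd"
    `
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	fmt.Print(conf)
    }
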
	I0328 01:02:33.852157 1130949 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:02:33.862921 1130949 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:02:33.862981 1130949 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:02:33.880973 1130949 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:02:33.892698 1130949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:02:34.029903 1130949 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 01:02:34.170977 1130949 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:02:34.171074 1130949 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:02:34.176652 1130949 start.go:562] Will wait 60s for crictl version
	I0328 01:02:34.176736 1130949 ssh_runner.go:195] Run: which crictl
	I0328 01:02:34.180993 1130949 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:02:34.224564 1130949 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:02:34.224675 1130949 ssh_runner.go:195] Run: crio --version
	I0328 01:02:34.254457 1130949 ssh_runner.go:195] Run: crio --version
	I0328 01:02:34.287281 1130949 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0328 01:02:32.778280 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .Start
	I0328 01:02:32.778470 1131323 main.go:141] libmachine: (old-k8s-version-986088) Ensuring networks are active...
	I0328 01:02:32.779179 1131323 main.go:141] libmachine: (old-k8s-version-986088) Ensuring network default is active
	I0328 01:02:32.779577 1131323 main.go:141] libmachine: (old-k8s-version-986088) Ensuring network mk-old-k8s-version-986088 is active
	I0328 01:02:32.779982 1131323 main.go:141] libmachine: (old-k8s-version-986088) Getting domain xml...
	I0328 01:02:32.780732 1131323 main.go:141] libmachine: (old-k8s-version-986088) Creating domain...
	I0328 01:02:34.066287 1131323 main.go:141] libmachine: (old-k8s-version-986088) Waiting to get IP...
	I0328 01:02:34.067193 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.067618 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.067684 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.067586 1132067 retry.go:31] will retry after 291.270379ms: waiting for machine to come up
	I0328 01:02:34.360203 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.360690 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.360721 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.360638 1132067 retry.go:31] will retry after 234.968456ms: waiting for machine to come up
	I0328 01:02:34.597291 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.597818 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.597849 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.597750 1132067 retry.go:31] will retry after 382.522593ms: waiting for machine to come up
	I0328 01:02:34.982502 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.983176 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.983205 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.983133 1132067 retry.go:31] will retry after 436.332635ms: waiting for machine to come up
	I0328 01:02:34.288748 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:34.292122 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:34.292516 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:34.292556 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:34.292869 1130949 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0328 01:02:34.298738 1130949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:02:34.313529 1130949 kubeadm.go:877] updating cluster {Name:embed-certs-808809 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-808809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:02:34.313698 1130949 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 01:02:34.313762 1130949 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:02:34.356518 1130949 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0328 01:02:34.356614 1130949 ssh_runner.go:195] Run: which lz4
	I0328 01:02:34.361492 1130949 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0328 01:02:34.366053 1130949 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 01:02:34.366090 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0328 01:02:36.024197 1130949 crio.go:462] duration metric: took 1.662731937s to copy over tarball
	I0328 01:02:36.024287 1130949 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 01:02:35.421623 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:35.422164 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:35.422198 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:35.422135 1132067 retry.go:31] will retry after 700.861268ms: waiting for machine to come up
	I0328 01:02:36.124589 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:36.125001 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:36.125031 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:36.124948 1132067 retry.go:31] will retry after 932.342478ms: waiting for machine to come up
	I0328 01:02:37.058954 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:37.059390 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:37.059424 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:37.059332 1132067 retry.go:31] will retry after 1.163248691s: waiting for machine to come up
	I0328 01:02:38.224574 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:38.225019 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:38.225053 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:38.224959 1132067 retry.go:31] will retry after 1.13372539s: waiting for machine to come up
	I0328 01:02:39.360393 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:39.360953 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:39.360984 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:39.360906 1132067 retry.go:31] will retry after 1.793272671s: waiting for machine to come up
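
The interleaved old-k8s-version-986088 lines show retry.go polling for the domain's DHCP lease with a growing, jittered delay until an IP appears. A small sketch of that retry pattern; lookupIP is a placeholder that fails a few times before succeeding, and the backoff constants are illustrative rather than minikube's:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var attempts int

    // lookupIP stands in for querying the hypervisor for the machine's DHCP
    // lease; here it simply fails a few times before returning a placeholder.
    func lookupIP() (string, error) {
    	attempts++
    	if attempts < 5 {
    		return "", errors.New("unable to find current IP address")
    	}
    	return "192.168.50.2", nil
    }

    func main() {
    	backoff := 250 * time.Millisecond
    	for {
    		ip, err := lookupIP()
    		if err == nil {
    			fmt.Println("machine is up at", ip)
    			return
    		}
    		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		backoff = backoff * 3 / 2 // grow the delay, as the retry intervals above do
    	}
    }
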
	I0328 01:02:38.420741 1130949 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.396415089s)
	I0328 01:02:38.420788 1130949 crio.go:469] duration metric: took 2.39655808s to extract the tarball
	I0328 01:02:38.420797 1130949 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0328 01:02:38.459869 1130949 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:02:38.505999 1130949 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 01:02:38.506030 1130949 cache_images.go:84] Images are preloaded, skipping loading
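
With no preloaded images found at 01:02:34, the tarball preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 is copied to the guest and unpacked into /var using the tar flags in the Run: line above. A sketch of that extraction step wrapped in Go's exec package (same flags and paths; it assumes sudo and lz4 are available on the machine running it):

    package main

    import (
    	"log"
    	"os/exec"
    	"time"
    )

    func main() {
    	start := time.Now()
    	// Same invocation as the ssh_runner line: preserve security.capability
    	// xattrs and decompress through lz4 while extracting into /var.
    	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("extract failed: %v\n%s", err, out)
    	}
    	log.Printf("took %s to extract the tarball", time.Since(start))
    }
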
	I0328 01:02:38.506039 1130949 kubeadm.go:928] updating node { 192.168.72.210 8443 v1.29.3 crio true true} ...
	I0328 01:02:38.506185 1130949 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-808809 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-808809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:02:38.506301 1130949 ssh_runner.go:195] Run: crio config
	I0328 01:02:38.551608 1130949 cni.go:84] Creating CNI manager for ""
	I0328 01:02:38.551633 1130949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:02:38.551646 1130949 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:02:38.551673 1130949 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.210 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-808809 NodeName:embed-certs-808809 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 01:02:38.551813 1130949 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-808809"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 01:02:38.551881 1130949 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 01:02:38.562640 1130949 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:02:38.562732 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:02:38.572870 1130949 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0328 01:02:38.590866 1130949 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:02:38.608302 1130949 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0328 01:02:38.626925 1130949 ssh_runner.go:195] Run: grep 192.168.72.210	control-plane.minikube.internal$ /etc/hosts
	I0328 01:02:38.631111 1130949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.210	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:02:38.644528 1130949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:02:38.785485 1130949 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:02:38.804087 1130949 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809 for IP: 192.168.72.210
	I0328 01:02:38.804113 1130949 certs.go:194] generating shared ca certs ...
	I0328 01:02:38.804132 1130949 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:02:38.804285 1130949 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:02:38.804326 1130949 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:02:38.804363 1130949 certs.go:256] generating profile certs ...
	I0328 01:02:38.804505 1130949 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/client.key
	I0328 01:02:38.804588 1130949 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/apiserver.key.bdc16448
	I0328 01:02:38.804638 1130949 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/proxy-client.key
	I0328 01:02:38.804798 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:02:38.804829 1130949 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:02:38.804836 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:02:38.804860 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:02:38.804882 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:02:38.804902 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:02:38.804943 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:02:38.805829 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:02:38.864847 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:02:38.899197 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:02:38.926734 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:02:38.958277 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0328 01:02:38.997201 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:02:39.023136 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:02:39.048459 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:02:39.074052 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:02:39.099326 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:02:39.124775 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:02:39.149638 1130949 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:02:39.169169 1130949 ssh_runner.go:195] Run: openssl version
	I0328 01:02:39.175948 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:02:39.188255 1130949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:02:39.194296 1130949 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:02:39.194374 1130949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:02:39.201138 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:02:39.213554 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:02:39.226474 1130949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:02:39.232074 1130949 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:02:39.232149 1130949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:02:39.238733 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:02:39.250983 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:02:39.263746 1130949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:02:39.268967 1130949 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:02:39.269038 1130949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:02:39.275589 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
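
The openssl x509 -hash / ln -fs pairs above install each CA under /etc/ssl/certs as a <subject-hash>.0 symlink so OpenSSL-based clients pick it up. A hedged Go helper doing the same two steps (the paths are the ones from the log; the helper name is invented):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCACert asks openssl for the subject hash of a PEM certificate and links
    // it into the system cert directory as <hash>.0, as the log's ln -fs does.
    func linkCACert(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // replace any stale symlink
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
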
	I0328 01:02:39.287731 1130949 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:02:39.292985 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:02:39.300366 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:02:39.307241 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:02:39.314522 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:02:39.321070 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:02:39.327777 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
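
Each openssl x509 -checkend 86400 call above asks whether a control-plane certificate expires within the next 24 hours, which decides whether certificates must be regenerated. A rough Go equivalent using crypto/x509 (the path checked is one of the files from the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresSoon reports whether the PEM certificate at path expires within the
    // given window, mirroring `openssl x509 -checkend`.
    func expiresSoon(path string, within time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(within).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }
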
	I0328 01:02:39.334174 1130949 kubeadm.go:391] StartCluster: {Name:embed-certs-808809 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.3 ClusterName:embed-certs-808809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:02:39.334310 1130949 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:02:39.334367 1130949 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:02:39.376035 1130949 cri.go:89] found id: ""
	I0328 01:02:39.376145 1130949 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:02:39.387349 1130949 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:02:39.387377 1130949 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:02:39.387385 1130949 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:02:39.387469 1130949 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:02:39.397918 1130949 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:02:39.399122 1130949 kubeconfig.go:125] found "embed-certs-808809" server: "https://192.168.72.210:8443"
	I0328 01:02:39.401219 1130949 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:02:39.411475 1130949 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.210
	I0328 01:02:39.411562 1130949 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:02:39.411583 1130949 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:02:39.411650 1130949 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:02:39.449529 1130949 cri.go:89] found id: ""
	I0328 01:02:39.449638 1130949 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:02:39.468553 1130949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:02:39.479489 1130949 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:02:39.479522 1130949 kubeadm.go:156] found existing configuration files:
	
	I0328 01:02:39.479589 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:02:39.489619 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:02:39.489689 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:02:39.499726 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:02:39.509362 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:02:39.509447 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:02:39.519262 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:02:39.528858 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:02:39.528920 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:02:39.538784 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:02:39.548517 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:02:39.548593 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:02:39.559931 1130949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:02:39.574178 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:39.706243 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.342144 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.559108 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.636713 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
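The five "kubeadm init phase ..." invocations above rebuild the cluster certificates, kubeconfigs, kubelet bootstrap, static control-plane pods, and local etcd, in that order. A hypothetical local replay of that sequence using Go's os/exec (minikube itself runs these remotely via ssh_runner; the kubeadm binary path and config path are taken from the log lines above, not from minikube's source):

// phases_sketch.go - illustrative only; not minikube's kubeadm.go.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.29.3/kubeadm" // path as seen in the log
	config := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", config},
		{"init", "phase", "kubeconfig", "all", "--config", config},
		{"init", "phase", "kubelet-start", "--config", config},
		{"init", "phase", "control-plane", "all", "--config", config},
		{"init", "phase", "etcd", "local", "--config", config},
	}
	for _, args := range phases {
		// Each phase is run to completion before the next one starts.
		out, err := exec.Command(kubeadm, args...).CombinedOutput()
		fmt.Printf("kubeadm %v\n%s\n", args, out)
		if err != nil {
			fmt.Println("phase failed:", err)
			return
		}
	}
}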
	I0328 01:02:40.743171 1130949 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:02:40.743269 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:02:41.243401 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:02:41.743363 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:02:41.776504 1130949 api_server.go:72] duration metric: took 1.033329844s to wait for apiserver process to appear ...
	I0328 01:02:41.776547 1130949 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:02:41.776574 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:41.777140 1130949 api_server.go:269] stopped: https://192.168.72.210:8443/healthz: Get "https://192.168.72.210:8443/healthz": dial tcp 192.168.72.210:8443: connect: connection refused
	I0328 01:02:42.276690 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:41.156898 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:41.157309 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:41.157336 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:41.157263 1132067 retry.go:31] will retry after 1.863775673s: waiting for machine to come up
	I0328 01:02:43.023074 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:43.023470 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:43.023507 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:43.023419 1132067 retry.go:31] will retry after 2.73600503s: waiting for machine to come up
	I0328 01:02:44.743286 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:02:44.743383 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:02:44.743412 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:44.822370 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:02:44.822416 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:02:44.822436 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:44.847406 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:02:44.847462 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:02:45.276899 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:45.281884 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:02:45.281919 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:02:45.777495 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:45.783673 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:02:45.783704 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:02:46.277372 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:46.282281 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 200:
	ok
	I0328 01:02:46.291242 1130949 api_server.go:141] control plane version: v1.29.3
	I0328 01:02:46.291287 1130949 api_server.go:131] duration metric: took 4.514730698s to wait for apiserver health ...
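The healthz wait above is a plain poll-until-200 loop: anonymous requests come back 403 while the RBAC bootstrap roles are still being created, then 500 while the remaining post-start hooks finish, and finally 200. A minimal standalone sketch of that pattern (standard library only, not minikube's actual api_server.go; the URL, timeout, and poll interval are assumptions for illustration):

// healthz_poll.go - illustrative sketch of the apiserver health wait.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// Anonymous requests are fine here: 403 and 500 both just mean "keep waiting".
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serving cert is not trusted by this sketch, so skip verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.210:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}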
	I0328 01:02:46.291301 1130949 cni.go:84] Creating CNI manager for ""
	I0328 01:02:46.291310 1130949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:02:46.293461 1130949 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:02:46.294971 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:02:46.312955 1130949 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:02:46.345653 1130949 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:02:46.355470 1130949 system_pods.go:59] 8 kube-system pods found
	I0328 01:02:46.355506 1130949 system_pods.go:61] "coredns-76f75df574-pr5d8" [90a6f3d5-6f33-4c41-804b-4b20c518aa23] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:02:46.355512 1130949 system_pods.go:61] "etcd-embed-certs-808809" [93b6b8ee-f83f-4848-b2c5-912ec07acd52] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 01:02:46.355519 1130949 system_pods.go:61] "kube-apiserver-embed-certs-808809" [22eb788f-4647-4a07-b5bf-ecdd54c28fcd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 01:02:46.355530 1130949 system_pods.go:61] "kube-controller-manager-embed-certs-808809" [83fecd9f-c0de-4afe-b5b5-7c04bd3adc20] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 01:02:46.355545 1130949 system_pods.go:61] "kube-proxy-qwzpg" [57a814c6-54c8-4fa7-b7d7-bcdd4bbc91d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:02:46.355553 1130949 system_pods.go:61] "kube-scheduler-embed-certs-808809" [0b229d84-43fb-45ee-8d49-39204812d490] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 01:02:46.355568 1130949 system_pods.go:61] "metrics-server-57f55c9bc5-swsxp" [4b20e133-3054-4806-9b7f-44d8c8c35a4d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:02:46.355580 1130949 system_pods.go:61] "storage-provisioner" [59303061-19e3-4aed-8753-804988a2a44e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:02:46.355590 1130949 system_pods.go:74] duration metric: took 9.908316ms to wait for pod list to return data ...
	I0328 01:02:46.355603 1130949 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:02:46.358936 1130949 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:02:46.358987 1130949 node_conditions.go:123] node cpu capacity is 2
	I0328 01:02:46.359006 1130949 node_conditions.go:105] duration metric: took 3.394695ms to run NodePressure ...
	I0328 01:02:46.359054 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:46.686479 1130949 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 01:02:46.692502 1130949 kubeadm.go:733] kubelet initialised
	I0328 01:02:46.692526 1130949 kubeadm.go:734] duration metric: took 6.022393ms waiting for restarted kubelet to initialise ...
	I0328 01:02:46.692534 1130949 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:02:46.699146 1130949 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-pr5d8" in "kube-system" namespace to be "Ready" ...
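The pod_ready wait that starts here repeatedly fetches each system-critical pod and checks its Ready condition until the 4m0s budget runs out. A rough client-go approximation of that check (the kubeconfig path is a placeholder and minikube's own pod_ready.go does more bookkeeping; pod name and namespace are taken from the log):

// podready_sketch.go - standalone approximation of the Ready-condition wait.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-76f75df574-pr5d8", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}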
	I0328 01:02:45.762440 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:45.762891 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:45.762915 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:45.762845 1132067 retry.go:31] will retry after 2.201941476s: waiting for machine to come up
	I0328 01:02:47.966601 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:47.967196 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:47.967237 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:47.967144 1132067 retry.go:31] will retry after 4.122216816s: waiting for machine to come up
	I0328 01:02:48.709890 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:51.207697 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:53.391471 1131600 start.go:364] duration metric: took 2m47.603687739s to acquireMachinesLock for "default-k8s-diff-port-283961"
	I0328 01:02:53.391553 1131600 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:02:53.391565 1131600 fix.go:54] fixHost starting: 
	I0328 01:02:53.391980 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:02:53.392031 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:02:53.409035 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34073
	I0328 01:02:53.409556 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:02:53.410105 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:02:53.410136 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:02:53.410492 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:02:53.410734 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:02:53.410903 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:02:53.412710 1131600 fix.go:112] recreateIfNeeded on default-k8s-diff-port-283961: state=Stopped err=<nil>
	I0328 01:02:53.412739 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	W0328 01:02:53.412927 1131600 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:02:53.414773 1131600 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-283961" ...
	I0328 01:02:52.091210 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.091759 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has current primary IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.091794 1131323 main.go:141] libmachine: (old-k8s-version-986088) Found IP for machine: 192.168.50.174
	I0328 01:02:52.091841 1131323 main.go:141] libmachine: (old-k8s-version-986088) Reserving static IP address...
	I0328 01:02:52.092295 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "old-k8s-version-986088", mac: "52:54:00:f6:94:40", ip: "192.168.50.174"} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.092321 1131323 main.go:141] libmachine: (old-k8s-version-986088) Reserved static IP address: 192.168.50.174
	I0328 01:02:52.092343 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | skip adding static IP to network mk-old-k8s-version-986088 - found existing host DHCP lease matching {name: "old-k8s-version-986088", mac: "52:54:00:f6:94:40", ip: "192.168.50.174"}
	I0328 01:02:52.092356 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | Getting to WaitForSSH function...
	I0328 01:02:52.092373 1131323 main.go:141] libmachine: (old-k8s-version-986088) Waiting for SSH to be available...
	I0328 01:02:52.094682 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.095012 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.095033 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.095158 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | Using SSH client type: external
	I0328 01:02:52.095180 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa (-rw-------)
	I0328 01:02:52.095208 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:02:52.095218 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | About to run SSH command:
	I0328 01:02:52.095232 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | exit 0
	I0328 01:02:52.218494 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | SSH cmd err, output: <nil>: 
	I0328 01:02:52.218983 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetConfigRaw
	I0328 01:02:52.219663 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:52.222349 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.222791 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.222823 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.223191 1131323 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/config.json ...
	I0328 01:02:52.223388 1131323 machine.go:94] provisionDockerMachine start ...
	I0328 01:02:52.223409 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:52.223605 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.225686 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.225999 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.226038 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.226131 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.226341 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.226507 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.226633 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.226802 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.227078 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.227095 1131323 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:02:52.327218 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:02:52.327249 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 01:02:52.327515 1131323 buildroot.go:166] provisioning hostname "old-k8s-version-986088"
	I0328 01:02:52.327542 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 01:02:52.327754 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.330253 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.330661 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.330691 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.330827 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.331048 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.331258 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.331406 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.331593 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.331772 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.331783 1131323 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-986088 && echo "old-k8s-version-986088" | sudo tee /etc/hostname
	I0328 01:02:52.445910 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-986088
	
	I0328 01:02:52.445943 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.449023 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.449320 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.449358 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.449595 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.449810 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.449970 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.450116 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.450310 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.450572 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.450640 1131323 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-986088' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-986088/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-986088' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:02:52.567493 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:02:52.567529 1131323 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:02:52.567559 1131323 buildroot.go:174] setting up certificates
	I0328 01:02:52.567573 1131323 provision.go:84] configureAuth start
	I0328 01:02:52.567587 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 01:02:52.567944 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:52.570860 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.571363 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.571400 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.571547 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.574052 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.574483 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.574517 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.574619 1131323 provision.go:143] copyHostCerts
	I0328 01:02:52.574698 1131323 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:02:52.574710 1131323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:02:52.574778 1131323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:02:52.574894 1131323 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:02:52.574908 1131323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:02:52.574985 1131323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:02:52.575086 1131323 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:02:52.575095 1131323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:02:52.575117 1131323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:02:52.575194 1131323 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-986088 san=[127.0.0.1 192.168.50.174 localhost minikube old-k8s-version-986088]
	I0328 01:02:52.688709 1131323 provision.go:177] copyRemoteCerts
	I0328 01:02:52.688776 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:02:52.688809 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.691529 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.691977 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.692024 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.692188 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.692425 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.692620 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.692774 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:52.777200 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0328 01:02:52.808740 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:02:52.836646 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:02:52.862627 1131323 provision.go:87] duration metric: took 295.032419ms to configureAuth
	I0328 01:02:52.862668 1131323 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:02:52.862908 1131323 config.go:182] Loaded profile config "old-k8s-version-986088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0328 01:02:52.863019 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.865838 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.866585 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.866630 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.867271 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.867521 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.867687 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.867826 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.867961 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.868176 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.868194 1131323 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:02:53.154903 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:02:53.154936 1131323 machine.go:97] duration metric: took 931.534047ms to provisionDockerMachine
	I0328 01:02:53.154949 1131323 start.go:293] postStartSetup for "old-k8s-version-986088" (driver="kvm2")
	I0328 01:02:53.154961 1131323 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:02:53.154997 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.155353 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:02:53.155386 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.158072 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.158448 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.158482 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.158612 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.158825 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.158974 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.159102 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:53.243411 1131323 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:02:53.247745 1131323 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:02:53.247769 1131323 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:02:53.247830 1131323 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:02:53.247903 1131323 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:02:53.247990 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:02:53.258574 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:02:53.284249 1131323 start.go:296] duration metric: took 129.2844ms for postStartSetup
	I0328 01:02:53.284300 1131323 fix.go:56] duration metric: took 20.532468979s for fixHost
	I0328 01:02:53.284324 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.287097 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.287505 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.287534 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.287642 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.287874 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.288039 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.288225 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.288439 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:53.288601 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:53.288612 1131323 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 01:02:53.391262 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587773.373998758
	
	I0328 01:02:53.391292 1131323 fix.go:216] guest clock: 1711587773.373998758
	I0328 01:02:53.391299 1131323 fix.go:229] Guest: 2024-03-28 01:02:53.373998758 +0000 UTC Remote: 2024-03-28 01:02:53.284304642 +0000 UTC m=+227.998260980 (delta=89.694116ms)
	I0328 01:02:53.391341 1131323 fix.go:200] guest clock delta is within tolerance: 89.694116ms
	I0328 01:02:53.391346 1131323 start.go:83] releasing machines lock for "old-k8s-version-986088", held for 20.639550927s
	I0328 01:02:53.391377 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.391728 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:53.394421 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.394780 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.394811 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.394932 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.395449 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.395729 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.395828 1131323 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:02:53.395883 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.395985 1131323 ssh_runner.go:195] Run: cat /version.json
	I0328 01:02:53.396014 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.398819 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399010 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399281 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.399320 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399451 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.399550 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.399620 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399640 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.399880 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.399902 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.400065 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.400081 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:53.400245 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.400445 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:53.514453 1131323 ssh_runner.go:195] Run: systemctl --version
	I0328 01:02:53.521123 1131323 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:02:53.678366 1131323 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:02:53.685402 1131323 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:02:53.685473 1131323 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:02:53.702781 1131323 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:02:53.702816 1131323 start.go:494] detecting cgroup driver to use...
	I0328 01:02:53.702900 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:02:53.720343 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:02:53.736749 1131323 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:02:53.736824 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:02:53.761087 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:02:53.779008 1131323 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:02:53.895064 1131323 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:02:54.060741 1131323 docker.go:233] disabling docker service ...
	I0328 01:02:54.060825 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:02:54.079139 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:02:54.093523 1131323 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:02:54.247544 1131323 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:02:54.396392 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:02:54.422612 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:02:54.443759 1131323 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0328 01:02:54.443817 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.459794 1131323 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:02:54.459875 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.472784 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.484963 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.496654 1131323 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:02:54.508382 1131323 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:02:54.518607 1131323 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:02:54.518687 1131323 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:02:54.532356 1131323 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:02:54.544424 1131323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:02:54.685782 1131323 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 01:02:54.847233 1131323 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:02:54.847314 1131323 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:02:54.853148 1131323 start.go:562] Will wait 60s for crictl version
	I0328 01:02:54.853248 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:02:54.857536 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:02:54.901937 1131323 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:02:54.902082 1131323 ssh_runner.go:195] Run: crio --version
	I0328 01:02:54.935571 1131323 ssh_runner.go:195] Run: crio --version
	I0328 01:02:54.971452 1131323 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0328 01:02:54.972964 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:54.976523 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:54.976985 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:54.977017 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:54.977369 1131323 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0328 01:02:54.982326 1131323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:02:54.996239 1131323 kubeadm.go:877] updating cluster {Name:old-k8s-version-986088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:02:54.996371 1131323 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0328 01:02:54.996433 1131323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:02:55.045404 1131323 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0328 01:02:55.045483 1131323 ssh_runner.go:195] Run: which lz4
	I0328 01:02:55.050226 1131323 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0328 01:02:55.055182 1131323 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 01:02:55.055221 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0328 01:02:53.416101 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Start
	I0328 01:02:53.416332 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Ensuring networks are active...
	I0328 01:02:53.417021 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Ensuring network default is active
	I0328 01:02:53.417446 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Ensuring network mk-default-k8s-diff-port-283961 is active
	I0328 01:02:53.417857 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Getting domain xml...
	I0328 01:02:53.418555 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Creating domain...
	I0328 01:02:54.777201 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting to get IP...
	I0328 01:02:54.778055 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:54.778563 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:54.778705 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:54.778537 1132240 retry.go:31] will retry after 259.031702ms: waiting for machine to come up
	I0328 01:02:55.039365 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.039926 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.039963 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:55.039860 1132240 retry.go:31] will retry after 254.124553ms: waiting for machine to come up
	I0328 01:02:55.295658 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.296218 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.296265 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:55.296174 1132240 retry.go:31] will retry after 349.637234ms: waiting for machine to come up
	I0328 01:02:55.647590 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.648356 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.648392 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:55.648298 1132240 retry.go:31] will retry after 446.471208ms: waiting for machine to come up
	I0328 01:02:53.707811 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:55.708380 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:57.213059 1130949 pod_ready.go:92] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:57.213097 1130949 pod_ready.go:81] duration metric: took 10.513921238s for pod "coredns-76f75df574-pr5d8" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.213113 1130949 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.222308 1130949 pod_ready.go:92] pod "etcd-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:57.222344 1130949 pod_ready.go:81] duration metric: took 9.214056ms for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.222357 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.231530 1130949 pod_ready.go:92] pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:57.231558 1130949 pod_ready.go:81] duration metric: took 9.192864ms for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.231568 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:56.994163 1131323 crio.go:462] duration metric: took 1.943992561s to copy over tarball
	I0328 01:02:56.994252 1131323 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 01:03:00.215115 1131323 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.220825311s)
	I0328 01:03:00.215159 1131323 crio.go:469] duration metric: took 3.22095583s to extract the tarball
	I0328 01:03:00.215171 1131323 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0328 01:03:00.259151 1131323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:00.298446 1131323 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0328 01:03:00.298492 1131323 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0328 01:03:00.298585 1131323 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:00.298601 1131323 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.298613 1131323 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.298644 1131323 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.298662 1131323 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.298698 1131323 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0328 01:03:00.298593 1131323 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.298585 1131323 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.300347 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.300424 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.300470 1131323 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.300474 1131323 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.300637 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.300652 1131323 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0328 01:03:00.300723 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.300793 1131323 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:02:56.095939 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.096463 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.096501 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:56.096412 1132240 retry.go:31] will retry after 490.029649ms: waiting for machine to come up
	I0328 01:02:56.588298 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.588835 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.588868 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:56.588796 1132240 retry.go:31] will retry after 831.356628ms: waiting for machine to come up
	I0328 01:02:57.421917 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:57.422410 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:57.422443 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:57.422353 1132240 retry.go:31] will retry after 1.164764985s: waiting for machine to come up
	I0328 01:02:58.588827 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:58.589183 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:58.589225 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:58.589119 1132240 retry.go:31] will retry after 1.307248783s: waiting for machine to come up
	I0328 01:02:59.897607 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:59.897976 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:59.898008 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:59.897926 1132240 retry.go:31] will retry after 1.560958271s: waiting for machine to come up
	I0328 01:02:58.241179 1130949 pod_ready.go:92] pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:58.241216 1130949 pod_ready.go:81] duration metric: took 1.00963904s for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.241245 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qwzpg" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.249787 1130949 pod_ready.go:92] pod "kube-proxy-qwzpg" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:58.249826 1130949 pod_ready.go:81] duration metric: took 8.571225ms for pod "kube-proxy-qwzpg" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.249840 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.405101 1130949 pod_ready.go:92] pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:58.405130 1130949 pod_ready.go:81] duration metric: took 155.281142ms for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.405141 1130949 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:00.412202 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:02.412688 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:00.499788 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0328 01:03:00.539135 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.541462 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.544184 1131323 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0328 01:03:00.544227 1131323 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0328 01:03:00.544261 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.555720 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.560189 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.562639 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.574105 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.681717 1131323 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0328 01:03:00.681742 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0328 01:03:00.681765 1131323 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.681803 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.682033 1131323 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0328 01:03:00.682076 1131323 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.682115 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.732868 1131323 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0328 01:03:00.732922 1131323 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.732988 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.742680 1131323 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0328 01:03:00.742730 1131323 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0328 01:03:00.742762 1131323 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.742777 1131323 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0328 01:03:00.742805 1131323 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.742770 1131323 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.742817 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.742851 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.742865 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.770435 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.770472 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0328 01:03:00.770567 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.770588 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.770727 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.770760 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.770728 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.882338 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0328 01:03:00.896602 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0328 01:03:00.918814 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0328 01:03:00.918869 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0328 01:03:00.918919 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0328 01:03:00.918968 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0328 01:03:01.186124 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:01.334547 1131323 cache_images.go:92] duration metric: took 1.036031169s to LoadCachedImages
	W0328 01:03:01.334676 1131323 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0328 01:03:01.334694 1131323 kubeadm.go:928] updating node { 192.168.50.174 8443 v1.20.0 crio true true} ...
	I0328 01:03:01.334827 1131323 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-986088 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:03:01.334926 1131323 ssh_runner.go:195] Run: crio config
	I0328 01:03:01.391004 1131323 cni.go:84] Creating CNI manager for ""
	I0328 01:03:01.391034 1131323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:01.391054 1131323 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:03:01.391081 1131323 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.174 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-986088 NodeName:old-k8s-version-986088 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0328 01:03:01.391265 1131323 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-986088"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 01:03:01.391347 1131323 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0328 01:03:01.403684 1131323 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:03:01.403779 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:03:01.415168 1131323 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0328 01:03:01.434329 1131323 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:03:01.456280 1131323 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0328 01:03:01.476625 1131323 ssh_runner.go:195] Run: grep 192.168.50.174	control-plane.minikube.internal$ /etc/hosts
	I0328 01:03:01.480867 1131323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:01.493833 1131323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:01.642273 1131323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:03:01.661857 1131323 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088 for IP: 192.168.50.174
	I0328 01:03:01.661887 1131323 certs.go:194] generating shared ca certs ...
	I0328 01:03:01.661909 1131323 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:01.662115 1131323 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:03:01.662174 1131323 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:03:01.662188 1131323 certs.go:256] generating profile certs ...
	I0328 01:03:01.662324 1131323 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/client.key
	I0328 01:03:01.662399 1131323 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.key.b88fbc7e
	I0328 01:03:01.662447 1131323 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.key
	I0328 01:03:01.662600 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:03:01.662656 1131323 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:03:01.662672 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:03:01.662703 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:03:01.662738 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:03:01.662774 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:03:01.662826 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:01.663831 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:03:01.697171 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:03:01.742118 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:03:01.783263 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:03:01.831682 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0328 01:03:01.878051 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:03:01.915626 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:03:01.942247 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:03:01.969054 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:03:01.998651 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:03:02.024881 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:03:02.051284 1131323 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:03:02.070414 1131323 ssh_runner.go:195] Run: openssl version
	I0328 01:03:02.076635 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:03:02.089288 1131323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:03:02.094260 1131323 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:03:02.094322 1131323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:03:02.100846 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:03:02.114474 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:03:02.126467 1131323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:02.131240 1131323 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:02.131293 1131323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:02.137496 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 01:03:02.150863 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:03:02.163536 1131323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:03:02.168767 1131323 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:03:02.168850 1131323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:03:02.175218 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:03:02.188272 1131323 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:03:02.193348 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:03:02.199969 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:03:02.206424 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:03:02.213530 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:03:02.220136 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:03:02.226502 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0328 01:03:02.232708 1131323 kubeadm.go:391] StartCluster: {Name:old-k8s-version-986088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:03:02.232831 1131323 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:03:02.232890 1131323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:02.280062 1131323 cri.go:89] found id: ""
	I0328 01:03:02.280160 1131323 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:03:02.291968 1131323 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:03:02.292003 1131323 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:03:02.292011 1131323 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:03:02.292072 1131323 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:03:02.304006 1131323 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:03:02.305105 1131323 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-986088" does not appear in /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:03:02.305785 1131323 kubeconfig.go:62] /home/jenkins/minikube-integration/18485-1069254/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-986088" cluster setting kubeconfig missing "old-k8s-version-986088" context setting]
	I0328 01:03:02.306728 1131323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:02.308610 1131323 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:03:02.320212 1131323 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.174
	I0328 01:03:02.320265 1131323 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:03:02.320283 1131323 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:03:02.320356 1131323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:02.366411 1131323 cri.go:89] found id: ""
	I0328 01:03:02.366500 1131323 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:03:02.388351 1131323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:03:02.402621 1131323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:03:02.402652 1131323 kubeadm.go:156] found existing configuration files:
	
	I0328 01:03:02.402718 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:03:02.415559 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:03:02.415633 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:03:02.426666 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:03:02.439497 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:03:02.439558 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:03:02.451040 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:03:02.461780 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:03:02.461876 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:03:02.473295 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:03:02.484762 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:03:02.484841 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:03:02.496304 1131323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:03:02.507634 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:02.641980 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:03.598106 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:03.840026 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:03.970336 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:04.067774 1131323 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:03:04.067911 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:04.568260 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:05.068794 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:01.460535 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:01.461008 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:01.461039 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:01.460962 1132240 retry.go:31] will retry after 1.839531745s: waiting for machine to come up
	I0328 01:03:03.302965 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:03.303445 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:03.303479 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:03.303387 1132240 retry.go:31] will retry after 2.461748315s: waiting for machine to come up
	I0328 01:03:04.413898 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:06.913608 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:05.568716 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:06.068362 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:06.568235 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:07.068696 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:07.567976 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:08.068032 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:08.568586 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:09.068046 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:09.568699 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:10.067967 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:05.767795 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:05.768329 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:05.768360 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:05.768279 1132240 retry.go:31] will retry after 2.321291255s: waiting for machine to come up
	I0328 01:03:08.092644 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:08.093094 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:08.093131 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:08.093046 1132240 retry.go:31] will retry after 4.151205276s: waiting for machine to come up
	I0328 01:03:09.413199 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:11.912234 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:13.671756 1130827 start.go:364] duration metric: took 54.966750689s to acquireMachinesLock for "no-preload-248059"
	I0328 01:03:13.671815 1130827 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:03:13.671823 1130827 fix.go:54] fixHost starting: 
	I0328 01:03:13.672255 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:03:13.672292 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:03:13.689811 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42017
	I0328 01:03:13.690364 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:03:13.690817 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:03:13.690843 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:03:13.691213 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:03:13.691395 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:13.691523 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:03:13.693093 1130827 fix.go:112] recreateIfNeeded on no-preload-248059: state=Stopped err=<nil>
	I0328 01:03:13.693123 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	W0328 01:03:13.693280 1130827 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:03:13.695158 1130827 out.go:177] * Restarting existing kvm2 VM for "no-preload-248059" ...
	I0328 01:03:10.568240 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:11.068028 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:11.568146 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:12.068467 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:12.568820 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:13.068031 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:13.568977 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:14.068050 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:14.567938 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:15.068711 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
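The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" lines above are one of the minikube processes polling, roughly twice per second, for a kube-apiserver process to reappear inside its VM before the restart can proceed. The probe itself is just pgrep with exact (-x), newest (-n), full-command-line (-f) matching:

    # The probe minikube keeps re-running until the apiserver process exists
    # (exit status 0 once a matching process is found)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'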
	I0328 01:03:12.248769 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.249440 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Found IP for machine: 192.168.39.224
	I0328 01:03:12.249467 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Reserving static IP address...
	I0328 01:03:12.249498 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has current primary IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.249832 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-283961", mac: "52:54:00:c4:df:6f", ip: "192.168.39.224"} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.249872 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | skip adding static IP to network mk-default-k8s-diff-port-283961 - found existing host DHCP lease matching {name: "default-k8s-diff-port-283961", mac: "52:54:00:c4:df:6f", ip: "192.168.39.224"}
	I0328 01:03:12.249888 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Reserved static IP address: 192.168.39.224
	I0328 01:03:12.249908 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for SSH to be available...
	I0328 01:03:12.249921 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Getting to WaitForSSH function...
	I0328 01:03:12.252053 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.252487 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.252521 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.252646 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Using SSH client type: external
	I0328 01:03:12.252677 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa (-rw-------)
	I0328 01:03:12.252709 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.224 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:03:12.252731 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | About to run SSH command:
	I0328 01:03:12.252750 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | exit 0
	I0328 01:03:12.378419 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | SSH cmd err, output: <nil>: 
	I0328 01:03:12.378866 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetConfigRaw
	I0328 01:03:12.379659 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:12.382631 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.382997 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.383023 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.383276 1131600 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/config.json ...
	I0328 01:03:12.383534 1131600 machine.go:94] provisionDockerMachine start ...
	I0328 01:03:12.383567 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:12.383805 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.386472 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.386839 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.386870 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.387035 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.387240 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.387399 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.387577 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.387729 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:12.387931 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:12.387943 1131600 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:03:12.499608 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:03:12.499644 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetMachineName
	I0328 01:03:12.499930 1131600 buildroot.go:166] provisioning hostname "default-k8s-diff-port-283961"
	I0328 01:03:12.499962 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetMachineName
	I0328 01:03:12.500154 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.502737 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.503089 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.503120 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.503295 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.503516 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.503725 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.503892 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.504093 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:12.504271 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:12.504285 1131600 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-283961 && echo "default-k8s-diff-port-283961" | sudo tee /etc/hostname
	I0328 01:03:12.625590 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-283961
	
	I0328 01:03:12.625624 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.628570 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.628883 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.628968 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.629143 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.629397 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.629627 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.629825 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.630008 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:12.630191 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:12.630210 1131600 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-283961' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-283961/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-283961' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:03:12.744240 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:03:12.744280 1131600 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:03:12.744327 1131600 buildroot.go:174] setting up certificates
	I0328 01:03:12.744342 1131600 provision.go:84] configureAuth start
	I0328 01:03:12.744361 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetMachineName
	I0328 01:03:12.744722 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:12.747139 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.747448 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.747478 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.747582 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.749705 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.749964 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.749995 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.750125 1131600 provision.go:143] copyHostCerts
	I0328 01:03:12.750203 1131600 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:03:12.750217 1131600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:03:12.750323 1131600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:03:12.750435 1131600 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:03:12.750446 1131600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:03:12.750479 1131600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:03:12.750557 1131600 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:03:12.750567 1131600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:03:12.750599 1131600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:03:12.750670 1131600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-283961 san=[127.0.0.1 192.168.39.224 default-k8s-diff-port-283961 localhost minikube]
	I0328 01:03:12.963182 1131600 provision.go:177] copyRemoteCerts
	I0328 01:03:12.963265 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:03:12.963313 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.965946 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.966177 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.966207 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.966347 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.966573 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.966773 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.966934 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.057477 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:03:13.083706 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0328 01:03:13.109167 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:03:13.136835 1131600 provision.go:87] duration metric: took 392.475069ms to configureAuth
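configureAuth above refreshes the docker-machine style TLS material for this profile: the ca.pem/cert.pem/key.pem copies under .minikube are replaced, a server certificate is minted with the SANs listed a few lines earlier (127.0.0.1, 192.168.39.224, default-k8s-diff-port-283961, localhost, minikube), and the CA plus server cert and key are copied to /etc/docker on the guest. A quick way to confirm the SANs on the provisioned machine, offered only as an illustration:

    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'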
	I0328 01:03:13.136867 1131600 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:03:13.137048 1131600 config.go:182] Loaded profile config "default-k8s-diff-port-283961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:03:13.137131 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.139508 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.139761 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.139792 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.139959 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.140170 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.140343 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.140502 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.140685 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:13.140873 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:13.140897 1131600 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:03:13.422372 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:03:13.422405 1131600 machine.go:97] duration metric: took 1.038857021s to provisionDockerMachine
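The %!s(MISSING) in the command logged just above (the "About to run SSH command" at 01:03:13.140897) is Go's fmt placeholder for a verb whose argument was not rendered into the log line; judging from the SSH output that follows, the verb was %s and the effective provisioning command was, reconstructed as an assumption:

    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio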
	I0328 01:03:13.422418 1131600 start.go:293] postStartSetup for "default-k8s-diff-port-283961" (driver="kvm2")
	I0328 01:03:13.422428 1131600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:03:13.422456 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.422788 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:03:13.422819 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.425539 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.425865 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.425894 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.426023 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.426225 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.426407 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.426577 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.511874 1131600 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:03:13.516643 1131600 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:03:13.516673 1131600 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:03:13.516749 1131600 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:03:13.516846 1131600 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:03:13.516969 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:03:13.529004 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:13.557244 1131600 start.go:296] duration metric: took 134.810243ms for postStartSetup
	I0328 01:03:13.557289 1131600 fix.go:56] duration metric: took 20.165726422s for fixHost
	I0328 01:03:13.557313 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.560216 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.560585 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.560623 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.560803 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.561050 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.561188 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.561303 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.561552 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:13.561742 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:13.561757 1131600 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 01:03:13.671545 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587793.617322674
	
	I0328 01:03:13.671570 1131600 fix.go:216] guest clock: 1711587793.617322674
	I0328 01:03:13.671578 1131600 fix.go:229] Guest: 2024-03-28 01:03:13.617322674 +0000 UTC Remote: 2024-03-28 01:03:13.55729386 +0000 UTC m=+187.934897846 (delta=60.028814ms)
	I0328 01:03:13.671632 1131600 fix.go:200] guest clock delta is within tolerance: 60.028814ms
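The date command sent over SSH at 01:03:13.561757 lost its format verbs in the log the same way; given the reply 1711587793.617322674 it was presumably date +%s.%N, the guest wall clock as seconds.nanoseconds, which fix.go then compares with the host clock to produce the 60.028814ms delta accepted above. Reconstructed under that assumption:

    # Guest-side probe (assumed): wall clock as seconds.nanoseconds
    date +%s.%N        # e.g. 1711587793.617322674
    # minikube subtracts this from the host's current time and accepts the
    # machine when the absolute delta is within tolerance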
	I0328 01:03:13.671642 1131600 start.go:83] releasing machines lock for "default-k8s-diff-port-283961", held for 20.280118311s
	I0328 01:03:13.671673 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.671976 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:13.674978 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.675384 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.675436 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.675562 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.676167 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.676337 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.676436 1131600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:03:13.676501 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.676557 1131600 ssh_runner.go:195] Run: cat /version.json
	I0328 01:03:13.676578 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.679418 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679452 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679758 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.679785 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679813 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.679832 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679986 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.680089 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.680190 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.680255 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.680345 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.680410 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.680517 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.680608 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.759826 1131600 ssh_runner.go:195] Run: systemctl --version
	I0328 01:03:13.796647 1131600 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:03:13.947036 1131600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:03:13.954165 1131600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:03:13.954265 1131600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:03:13.973503 1131600 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
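The find command above (its %!p(MISSING) is again a lost printf verb, presumably %p for the matched path) renames any pre-existing bridge or podman CNI definitions to *.mk_disabled so they cannot conflict with the CNI minikube configures later in this run ("recommending bridge" further down). What was set aside can be listed on the node, for example:

    ls -la /etc/cni/net.d/
    # entries ending in .mk_disabled are the configs minikube moved out of the way;
    # stripping the suffix would re-enable one of them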
	I0328 01:03:13.973538 1131600 start.go:494] detecting cgroup driver to use...
	I0328 01:03:13.973629 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:03:13.997675 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:03:14.015349 1131600 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:03:14.015421 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:03:14.031099 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:03:14.046446 1131600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:03:14.186993 1131600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:03:14.351164 1131600 docker.go:233] disabling docker service ...
	I0328 01:03:14.351232 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:03:14.370629 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:03:14.387837 1131600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:03:14.544060 1131600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:03:14.707699 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:03:14.725658 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:03:14.746063 1131600 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 01:03:14.746141 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.759244 1131600 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:03:14.759317 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.773015 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.786810 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.807101 1131600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:03:14.821013 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.834181 1131600 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.861163 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
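The run of sed commands between 01:03:14.746141 and 01:03:14.861163 edits CRI-O's drop-in at /etc/crio/crio.conf.d/02-crio.conf rather than the main crio.conf: it pins the pause image, switches the cgroup manager to cgroupfs, puts conmon into the pod cgroup, and opens unprivileged ports via a default sysctl. Based only on those sed patterns, the drop-in should afterwards contain settings along these lines (illustrative, not captured from the node):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # expected (assumed) matches:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",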
	I0328 01:03:14.874274 1131600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:03:14.885890 1131600 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:03:14.885968 1131600 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:03:14.903142 1131600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
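The sysctl probe at 01:03:14.874274 exits with status 255 simply because br_netfilter is not loaded yet, which the log itself flags as possibly fine; the following modprobe and the echo into /proc/sys/net/ipv4/ip_forward are the usual bridge-netfilter and IP-forwarding prerequisites for Kubernetes networking. Done by hand and made persistent, the equivalent would be something like this sketch (not taken from the minikube code):

    sudo modprobe br_netfilter
    echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
    printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' | sudo tee /etc/sysctl.d/99-kubernetes.conf
    sudo sysctl --system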
	I0328 01:03:14.916364 1131600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:15.073343 1131600 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 01:03:15.218406 1131600 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:03:15.218500 1131600 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:03:15.226299 1131600 start.go:562] Will wait 60s for crictl version
	I0328 01:03:15.226373 1131600 ssh_runner.go:195] Run: which crictl
	I0328 01:03:15.232051 1131600 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:03:15.278793 1131600 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:03:15.278903 1131600 ssh_runner.go:195] Run: crio --version
	I0328 01:03:15.313408 1131600 ssh_runner.go:195] Run: crio --version
	I0328 01:03:15.351613 1131600 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0328 01:03:15.353013 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:15.355924 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:15.356306 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:15.356341 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:15.356555 1131600 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0328 01:03:15.361194 1131600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:15.380926 1131600 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:03:15.381043 1131600 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 01:03:15.381099 1131600 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:15.423322 1131600 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0328 01:03:15.423409 1131600 ssh_runner.go:195] Run: which lz4
	I0328 01:03:15.428123 1131600 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0328 01:03:15.433023 1131600 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 01:03:15.433065 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
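Because crictl reported no registry.k8s.io/kube-apiserver:v1.29.3 image (01:03:15.423322), minikube falls back to its preload path: the cached, version-specific lz4 tarball of pre-pulled images (about 403 MB here) is copied to the guest as /preloaded.tar.lz4 and, a little further down, unpacked over /var so CRI-O's image store is seeded without registry pulls. The cached artifact on the build host is the file named in the scp line above:

    ls -lh /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4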
	I0328 01:03:13.696314 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Start
	I0328 01:03:13.696506 1130827 main.go:141] libmachine: (no-preload-248059) Ensuring networks are active...
	I0328 01:03:13.697344 1130827 main.go:141] libmachine: (no-preload-248059) Ensuring network default is active
	I0328 01:03:13.697668 1130827 main.go:141] libmachine: (no-preload-248059) Ensuring network mk-no-preload-248059 is active
	I0328 01:03:13.698009 1130827 main.go:141] libmachine: (no-preload-248059) Getting domain xml...
	I0328 01:03:13.698805 1130827 main.go:141] libmachine: (no-preload-248059) Creating domain...
	I0328 01:03:14.955922 1130827 main.go:141] libmachine: (no-preload-248059) Waiting to get IP...
	I0328 01:03:14.957088 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:14.957534 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:14.957660 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:14.957533 1132389 retry.go:31] will retry after 222.894093ms: waiting for machine to come up
	I0328 01:03:15.182078 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:15.182541 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:15.182580 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:15.182528 1132389 retry.go:31] will retry after 263.74163ms: waiting for machine to come up
	I0328 01:03:15.448081 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:15.448653 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:15.448684 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:15.448586 1132389 retry.go:31] will retry after 444.066222ms: waiting for machine to come up
	I0328 01:03:15.894141 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:15.894695 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:15.894732 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:15.894650 1132389 retry.go:31] will retry after 469.421771ms: waiting for machine to come up
	I0328 01:03:14.413443 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:16.418789 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:15.568507 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:16.068210 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:16.568761 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:17.067929 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:17.568403 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:18.068454 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:18.568086 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:19.068049 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:19.569020 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:20.068068 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:17.139682 1131600 crio.go:462] duration metric: took 1.71160157s to copy over tarball
	I0328 01:03:17.139764 1131600 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 01:03:19.581198 1131600 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.441406061s)
	I0328 01:03:19.581229 1131600 crio.go:469] duration metric: took 2.441510253s to extract the tarball
	I0328 01:03:19.581241 1131600 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0328 01:03:19.620964 1131600 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:19.666765 1131600 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 01:03:19.666791 1131600 cache_images.go:84] Images are preloaded, skipping loading
	I0328 01:03:19.666802 1131600 kubeadm.go:928] updating node { 192.168.39.224 8444 v1.29.3 crio true true} ...
	I0328 01:03:19.666921 1131600 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-283961 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.224
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:03:19.666987 1131600 ssh_runner.go:195] Run: crio config
	I0328 01:03:19.716082 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:03:19.716106 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:19.716115 1131600 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:03:19.716139 1131600 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.224 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-283961 NodeName:default-k8s-diff-port-283961 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.224"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.224 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 01:03:19.716323 1131600 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.224
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-283961"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.224
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.224"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 01:03:19.716399 1131600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 01:03:19.727826 1131600 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:03:19.727913 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:03:19.738525 1131600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0328 01:03:19.756732 1131600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:03:19.776665 1131600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
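At this point the rendered artifacts land on the node: the kubelet drop-in at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, the kubelet unit file, and the kubeadm configuration shown above at /var/tmp/minikube/kubeadm.yaml.new. If needed, they can be inspected from the host with minikube ssh against this profile, for example (illustrative):

    minikube -p default-k8s-diff-port-283961 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
    minikube -p default-k8s-diff-port-283961 ssh -- sudo systemctl cat kubelet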
	I0328 01:03:19.795756 1131600 ssh_runner.go:195] Run: grep 192.168.39.224	control-plane.minikube.internal$ /etc/hosts
	I0328 01:03:19.800097 1131600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.224	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:19.813019 1131600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:19.946740 1131600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:03:19.964216 1131600 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961 for IP: 192.168.39.224
	I0328 01:03:19.964244 1131600 certs.go:194] generating shared ca certs ...
	I0328 01:03:19.964262 1131600 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:19.964448 1131600 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:03:19.964524 1131600 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:03:19.964538 1131600 certs.go:256] generating profile certs ...
	I0328 01:03:19.964648 1131600 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/client.key
	I0328 01:03:19.964735 1131600 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/apiserver.key.22bfb146
	I0328 01:03:19.964810 1131600 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/proxy-client.key
	I0328 01:03:19.964956 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:03:19.965008 1131600 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:03:19.965021 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:03:19.965058 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:03:19.965091 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:03:19.965113 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:03:19.965154 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:19.966026 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:03:19.998578 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:03:20.042666 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:03:20.075405 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:03:20.117888 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0328 01:03:20.145160 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:03:20.178207 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:03:20.208610 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:03:20.235356 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:03:20.262434 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:03:20.291315 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:03:20.318034 1131600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:03:20.337627 1131600 ssh_runner.go:195] Run: openssl version
	I0328 01:03:20.344242 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:03:20.360732 1131600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:20.365858 1131600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:20.365926 1131600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:20.372120 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 01:03:20.384554 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:03:20.401731 1131600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:03:20.406945 1131600 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:03:20.407024 1131600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:03:20.414661 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:03:20.427573 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:03:20.439807 1131600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:03:20.445064 1131600 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:03:20.445138 1131600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:03:20.451754 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
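The three command groups above follow the standard OpenSSL trust-store pattern: compute the certificate's subject hash, then symlink /etc/ssl/certs/<hash>.0 at the PEM file. The sketch below reproduces that pattern locally for illustration only; minikube itself runs the openssl/ln commands over SSH via ssh_runner, and the paths and hash values here are copied from this log.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA computes the OpenSSL subject hash of a PEM certificate
// (as `openssl x509 -hash -noout` does above) and creates the
// /etc/ssl/certs/<hash>.0 symlink pointing at it, like `ln -fs`.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem in this run
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}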
	I0328 01:03:20.464988 1131600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:03:20.470461 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:03:20.477200 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:03:20.484238 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:03:20.491125 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:03:20.497888 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:03:20.504680 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
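Each `openssl x509 -checkend 86400` run above asks whether a certificate expires within the next 24 hours. A minimal Go equivalent, using a certificate path from the log, might look like this (illustrative only, not minikube's certs.go):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// mirroring `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}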
	I0328 01:03:20.511372 1131600 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:03:20.511477 1131600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:03:20.511542 1131600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:20.552247 1131600 cri.go:89] found id: ""
	I0328 01:03:20.552345 1131600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:03:20.564906 1131600 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:03:20.564937 1131600 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:03:20.564944 1131600 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:03:20.565002 1131600 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:03:20.576394 1131600 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:03:20.593699 1131600 kubeconfig.go:125] found "default-k8s-diff-port-283961" server: "https://192.168.39.224:8444"
	I0328 01:03:20.595978 1131600 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:03:20.609519 1131600 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.224
	I0328 01:03:20.609565 1131600 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:03:20.609583 1131600 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:03:20.609651 1131600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:20.651892 1131600 cri.go:89] found id: ""
	I0328 01:03:20.651967 1131600 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:03:20.671895 1131600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:03:16.365505 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:16.366404 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:16.366435 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:16.366360 1132389 retry.go:31] will retry after 488.383898ms: waiting for machine to come up
	I0328 01:03:16.856125 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:16.856727 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:16.856761 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:16.856626 1132389 retry.go:31] will retry after 617.77144ms: waiting for machine to come up
	I0328 01:03:17.476749 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:17.477351 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:17.477386 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:17.477282 1132389 retry.go:31] will retry after 835.951988ms: waiting for machine to come up
	I0328 01:03:18.315387 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:18.315894 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:18.315925 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:18.315848 1132389 retry.go:31] will retry after 1.405695765s: waiting for machine to come up
	I0328 01:03:19.723053 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:19.723559 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:19.723591 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:19.723473 1132389 retry.go:31] will retry after 1.555358462s: waiting for machine to come up
	I0328 01:03:18.913403 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:21.599662 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:20.568464 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:21.068983 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:21.568470 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:22.068772 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:22.568940 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:23.068907 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:23.568272 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.068055 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.568056 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:25.068006 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:20.685320 1131600 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:03:21.187521 1131600 kubeadm.go:156] found existing configuration files:
	
	I0328 01:03:21.187587 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0328 01:03:21.200463 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:03:21.200533 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:03:21.212763 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0328 01:03:21.224344 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:03:21.224419 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:03:21.235869 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0328 01:03:21.245970 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:03:21.246045 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:03:21.258589 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0328 01:03:21.270651 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:03:21.270724 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:03:21.283074 1131600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:03:21.295811 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:21.668224 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.046357 1131600 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.378083996s)
	I0328 01:03:23.046401 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.271959 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.353976 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.501611 1131600 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:03:23.501734 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.002619 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.502614 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.547383 1131600 api_server.go:72] duration metric: took 1.045771287s to wait for apiserver process to appear ...
	I0328 01:03:24.547419 1131600 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:03:24.547447 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:24.548081 1131600 api_server.go:269] stopped: https://192.168.39.224:8444/healthz: Get "https://192.168.39.224:8444/healthz": dial tcp 192.168.39.224:8444: connect: connection refused
	I0328 01:03:25.047885 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:21.279945 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:21.590947 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:21.590967 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:21.280358 1132389 retry.go:31] will retry after 1.905587467s: waiting for machine to come up
	I0328 01:03:23.187571 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:23.188214 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:23.188248 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:23.188159 1132389 retry.go:31] will retry after 2.68043246s: waiting for machine to come up
	I0328 01:03:25.871414 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:25.871997 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:25.872030 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:25.871956 1132389 retry.go:31] will retry after 2.689404788s: waiting for machine to come up
	I0328 01:03:23.913816 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:26.413616 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:27.352533 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:03:27.352570 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:03:27.352589 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:27.453408 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:27.453448 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:27.547781 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:27.552703 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:27.552738 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:28.048135 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:28.053291 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:28.053322 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:28.548374 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:28.553141 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:28.553178 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:29.047609 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:29.053027 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 200:
	ok
	I0328 01:03:29.060710 1131600 api_server.go:141] control plane version: v1.29.3
	I0328 01:03:29.060747 1131600 api_server.go:131] duration metric: took 4.513320481s to wait for apiserver health ...
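The healthz wait above polls https://192.168.39.224:8444/healthz on a 500ms cadence, tolerating connection-refused, 403 (anonymous requests before RBAC bootstraps) and 500 responses until a plain 200 "ok" comes back. A rough, self-contained sketch of such a loop follows; it is not minikube's api_server.go, which authenticates with the cluster's client certificate, and the URL and timeout are taken from this log.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver answered "ok"
			}
		}
		time.Sleep(500 * time.Millisecond) // same cadence visible in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.224:8444/healthz", 4*time.Minute); err != nil {
		panic(err)
	}
}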
	I0328 01:03:29.060757 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:03:29.060764 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:29.062763 1131600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:03:25.568927 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:26.068371 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:26.568107 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:27.068037 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:27.567985 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:28.068036 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:28.568843 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:29.068483 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:29.568942 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:30.068849 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:29.064492 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:03:29.089164 1131600 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
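The 457-byte /etc/cni/net.d/1-k8s.conflist copied here is not reproduced in the log, so the snippet below only illustrates the general shape of a bridge CNI config and how such a file could be written to that path; the subnet, plugin list, and cniVersion are assumptions, not minikube's actual template.

package main

import "os"

// A generic bridge CNI config, illustrative only.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}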
	I0328 01:03:29.115071 1131600 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:03:29.126819 1131600 system_pods.go:59] 8 kube-system pods found
	I0328 01:03:29.126871 1131600 system_pods.go:61] "coredns-76f75df574-79cdj" [48ffe344-a386-4904-a73e-56e3ce0a8bef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:03:29.126885 1131600 system_pods.go:61] "etcd-default-k8s-diff-port-283961" [1d8fc768-e39c-4c96-bd65-2ae76fc9c6ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 01:03:29.126898 1131600 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-283961" [7c5c9f85-f16f-4248-8d2d-73c1ed2b0128] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 01:03:29.126912 1131600 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-283961" [2e943e7b-5506-4797-9e77-4a33e06056fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 01:03:29.126931 1131600 system_pods.go:61] "kube-proxy-d776v" [c1c86f61-b074-4a51-89e6-17c7b1076748] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:03:29.126944 1131600 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-283961" [8a840579-4145-4b68-ab3f-b1ebd3d63e81] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 01:03:29.126956 1131600 system_pods.go:61] "metrics-server-57f55c9bc5-w4ww4" [6d60f9e6-8ac7-4fad-91dc-61520586666c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:03:29.126968 1131600 system_pods.go:61] "storage-provisioner" [2b5e2e68-7e7c-46ec-bcec-ff9b01cbb8d9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:03:29.126979 1131600 system_pods.go:74] duration metric: took 11.875076ms to wait for pod list to return data ...
	I0328 01:03:29.126992 1131600 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:03:29.130927 1131600 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:03:29.130971 1131600 node_conditions.go:123] node cpu capacity is 2
	I0328 01:03:29.130986 1131600 node_conditions.go:105] duration metric: took 3.984383ms to run NodePressure ...
	I0328 01:03:29.131011 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:29.421513 1131600 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 01:03:29.426043 1131600 kubeadm.go:733] kubelet initialised
	I0328 01:03:29.426104 1131600 kubeadm.go:734] duration metric: took 4.524275ms waiting for restarted kubelet to initialise ...
	I0328 01:03:29.426114 1131600 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:03:29.432378 1131600 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-79cdj" in "kube-system" namespace to be "Ready" ...
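The pod_ready.go wait above keeps fetching each system-critical pod and checking its Ready condition. A hedged client-go sketch of that check (assuming a recent client-go where Get takes a context; the pod name, namespace and kubeconfig path are copied from this log, while the 2s poll interval is an assumption):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // same 4m0s budget as the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-76f75df574-79cdj", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}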
	I0328 01:03:28.563249 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:28.563778 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:28.563808 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:28.563718 1132389 retry.go:31] will retry after 2.919225956s: waiting for machine to come up
	I0328 01:03:28.913653 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:30.914379 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:31.484584 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.485027 1130827 main.go:141] libmachine: (no-preload-248059) Found IP for machine: 192.168.61.107
	I0328 01:03:31.485048 1130827 main.go:141] libmachine: (no-preload-248059) Reserving static IP address...
	I0328 01:03:31.485065 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has current primary IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.485584 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "no-preload-248059", mac: "52:54:00:58:33:e2", ip: "192.168.61.107"} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.485617 1130827 main.go:141] libmachine: (no-preload-248059) Reserved static IP address: 192.168.61.107
	I0328 01:03:31.485638 1130827 main.go:141] libmachine: (no-preload-248059) DBG | skip adding static IP to network mk-no-preload-248059 - found existing host DHCP lease matching {name: "no-preload-248059", mac: "52:54:00:58:33:e2", ip: "192.168.61.107"}
	I0328 01:03:31.485651 1130827 main.go:141] libmachine: (no-preload-248059) Waiting for SSH to be available...
	I0328 01:03:31.485671 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Getting to WaitForSSH function...
	I0328 01:03:31.487909 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.488293 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.488322 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.488469 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Using SSH client type: external
	I0328 01:03:31.488506 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa (-rw-------)
	I0328 01:03:31.488531 1130827 main.go:141] libmachine: (no-preload-248059) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:03:31.488541 1130827 main.go:141] libmachine: (no-preload-248059) DBG | About to run SSH command:
	I0328 01:03:31.488555 1130827 main.go:141] libmachine: (no-preload-248059) DBG | exit 0
	I0328 01:03:31.618358 1130827 main.go:141] libmachine: (no-preload-248059) DBG | SSH cmd err, output: <nil>: 
	I0328 01:03:31.618786 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetConfigRaw
	I0328 01:03:31.619494 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:31.622183 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.622555 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.622584 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.622889 1130827 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/config.json ...
	I0328 01:03:31.623120 1130827 machine.go:94] provisionDockerMachine start ...
	I0328 01:03:31.623147 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:31.623400 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:31.626078 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.626432 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.626458 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.626663 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:31.626864 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.627031 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.627179 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:31.627380 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:31.627595 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:31.627611 1130827 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:03:31.739662 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:03:31.739699 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:03:31.740049 1130827 buildroot.go:166] provisioning hostname "no-preload-248059"
	I0328 01:03:31.740086 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:03:31.740421 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:31.743410 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.743776 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.743811 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.744001 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:31.744212 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.744394 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.744515 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:31.744669 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:31.744846 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:31.744860 1130827 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-248059 && echo "no-preload-248059" | sudo tee /etc/hostname
	I0328 01:03:31.869330 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-248059
	
	I0328 01:03:31.869368 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:31.872451 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.872817 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.872868 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.873159 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:31.873405 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.873632 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.873803 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:31.873982 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:31.874220 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:31.874268 1130827 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-248059' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-248059/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-248059' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:03:31.997509 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:03:31.997543 1130827 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:03:31.997565 1130827 buildroot.go:174] setting up certificates
	I0328 01:03:31.997573 1130827 provision.go:84] configureAuth start
	I0328 01:03:31.997583 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:03:31.997870 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:32.000739 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.001127 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.001162 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.001306 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.003571 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.003958 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.003988 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.004162 1130827 provision.go:143] copyHostCerts
	I0328 01:03:32.004246 1130827 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:03:32.004261 1130827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:03:32.004329 1130827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:03:32.004442 1130827 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:03:32.004454 1130827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:03:32.004486 1130827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:03:32.004562 1130827 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:03:32.004572 1130827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:03:32.004602 1130827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:03:32.004667 1130827 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.no-preload-248059 san=[127.0.0.1 192.168.61.107 localhost minikube no-preload-248059]
	I0328 01:03:32.206585 1130827 provision.go:177] copyRemoteCerts
	I0328 01:03:32.206657 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:03:32.206691 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.210170 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.210636 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.210676 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.210979 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.211187 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.211364 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.211564 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:32.305858 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:03:32.337654 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0328 01:03:32.368942 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0328 01:03:32.401639 1130827 provision.go:87] duration metric: took 404.051415ms to configureAuth
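(Editor's illustration of the copyRemoteCerts step logged above: the certs are streamed to the guest over SSH and written with root privileges. This is a minimal standalone sketch assuming golang.org/x/crypto/ssh; the copyFile helper, host, and key path are placeholders, not minikube's ssh_runner API.)

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"path/filepath"

	"golang.org/x/crypto/ssh"
)

// copyFile pipes a local file through stdin into `sudo tee` on the remote host,
// so the write lands in a root-owned directory such as /etc/docker.
func copyFile(client *ssh.Client, localPath, remotePath string) error {
	data, err := os.ReadFile(localPath)
	if err != nil {
		return err
	}
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	cmd := fmt.Sprintf("sudo mkdir -p %s && sudo tee %s >/dev/null",
		filepath.Dir(remotePath), remotePath)
	return sess.Run(cmd)
}

func main() {
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/no-preload-248059/id_rsa"))
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	client, err := ssh.Dial("tcp", "192.168.61.107:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	if err := copyFile(client, "ca.pem", "/etc/docker/ca.pem"); err != nil {
		log.Fatal(err)
	}
}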
	I0328 01:03:32.401669 1130827 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:03:32.401936 1130827 config.go:182] Loaded profile config "no-preload-248059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0328 01:03:32.402025 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.404890 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.405352 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.405387 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.405588 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.405858 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.406091 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.406303 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.406510 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:32.406731 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:32.406759 1130827 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:03:32.697738 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:03:32.697768 1130827 machine.go:97] duration metric: took 1.074632092s to provisionDockerMachine
	I0328 01:03:32.697781 1130827 start.go:293] postStartSetup for "no-preload-248059" (driver="kvm2")
	I0328 01:03:32.697795 1130827 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:03:32.697812 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.698263 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:03:32.698298 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.701020 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.701421 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.701450 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.701609 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.701837 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.702010 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.702188 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:32.790670 1130827 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:03:32.795098 1130827 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:03:32.795131 1130827 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:03:32.795222 1130827 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:03:32.795297 1130827 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:03:32.795402 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:03:32.806276 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:32.832753 1130827 start.go:296] duration metric: took 134.954685ms for postStartSetup
	I0328 01:03:32.832801 1130827 fix.go:56] duration metric: took 19.16097847s for fixHost
	I0328 01:03:32.832825 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.835830 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.836199 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.836237 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.836472 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.836707 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.836949 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.837104 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.837339 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:32.837551 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:32.837563 1130827 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 01:03:32.947440 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587812.922631180
	
	I0328 01:03:32.947477 1130827 fix.go:216] guest clock: 1711587812.922631180
	I0328 01:03:32.947486 1130827 fix.go:229] Guest: 2024-03-28 01:03:32.92263118 +0000 UTC Remote: 2024-03-28 01:03:32.832804811 +0000 UTC m=+356.715929719 (delta=89.826369ms)
	I0328 01:03:32.947507 1130827 fix.go:200] guest clock delta is within tolerance: 89.826369ms
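(Editor's illustration of the guest-clock check above: the delta is simply guest time minus host time, compared against a tolerance. The numbers below are the ones from this log; the 2s tolerance is an assumption for the sketch.)

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken from the fix.go lines above.
	guest := time.Date(2024, 3, 28, 1, 3, 32, 922631180, time.UTC)
	host := time.Date(2024, 3, 28, 1, 3, 32, 832804811, time.UTC)

	delta := guest.Sub(host) // positive when the guest clock is ahead
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second
	fmt.Printf("clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
	// Prints: clock delta: 89.826369ms (within tolerance: true)
}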
	I0328 01:03:32.947512 1130827 start.go:83] releasing machines lock for "no-preload-248059", held for 19.275724068s
	I0328 01:03:32.947531 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.947805 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:32.950439 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.950814 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.950844 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.950992 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.951517 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.951707 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.951809 1130827 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:03:32.951852 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.951938 1130827 ssh_runner.go:195] Run: cat /version.json
	I0328 01:03:32.951964 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.954721 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955058 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955135 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.955165 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955314 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.955473 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.955512 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.955538 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955622 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.955698 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.955809 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:32.955859 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.956001 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.956134 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:33.079381 1130827 ssh_runner.go:195] Run: systemctl --version
	I0328 01:03:33.086184 1130827 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:03:33.241799 1130827 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:03:33.248779 1130827 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:03:33.248893 1130827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:03:33.267944 1130827 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:03:33.267977 1130827 start.go:494] detecting cgroup driver to use...
	I0328 01:03:33.268082 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:03:33.286132 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:03:33.301676 1130827 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:03:33.301762 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:03:33.317202 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:03:33.333162 1130827 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:03:33.458738 1130827 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:03:33.608509 1130827 docker.go:233] disabling docker service ...
	I0328 01:03:33.608623 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:03:33.626616 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:03:33.641798 1130827 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:03:33.808865 1130827 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:03:33.962636 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:03:33.978138 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:03:34.002323 1130827 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 01:03:34.002404 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.014483 1130827 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:03:34.014589 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.028647 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.041601 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.054993 1130827 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:03:34.066671 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.079389 1130827 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.099660 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.112379 1130827 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:03:34.123050 1130827 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:03:34.123109 1130827 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:03:34.137132 1130827 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:03:34.147092 1130827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:34.282367 1130827 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 01:03:34.436510 1130827 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:03:34.436599 1130827 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:03:34.443019 1130827 start.go:562] Will wait 60s for crictl version
	I0328 01:03:34.443092 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:34.447740 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:03:34.488366 1130827 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
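(Editor's illustration of the two waits logged above: poll for the CRI socket to appear, then query crictl. A local sketch only; the socket path and 60s timeout mirror the log, the polling interval is an assumption.)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForSocket polls until the path exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(string(out))
}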
	I0328 01:03:34.488469 1130827 ssh_runner.go:195] Run: crio --version
	I0328 01:03:34.520940 1130827 ssh_runner.go:195] Run: crio --version
	I0328 01:03:34.557953 1130827 out.go:177] * Preparing Kubernetes v1.30.0-beta.0 on CRI-O 1.29.1 ...
	I0328 01:03:30.568918 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:31.068097 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:31.568306 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:32.068345 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:32.568773 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:33.068072 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:33.568377 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:34.068141 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:34.568574 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:35.067986 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:31.439199 1131600 pod_ready.go:102] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:33.439575 1131600 pod_ready.go:102] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:34.559624 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:34.563089 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:34.563549 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:34.563583 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:34.563943 1130827 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0328 01:03:34.570153 1130827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:34.584566 1130827 kubeadm.go:877] updating cluster {Name:no-preload-248059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0-beta.0 ClusterName:no-preload-248059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:03:34.584723 1130827 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0328 01:03:34.584786 1130827 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:34.620182 1130827 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-beta.0". assuming images are not preloaded.
	I0328 01:03:34.620215 1130827 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-beta.0 registry.k8s.io/kube-controller-manager:v1.30.0-beta.0 registry.k8s.io/kube-scheduler:v1.30.0-beta.0 registry.k8s.io/kube-proxy:v1.30.0-beta.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0328 01:03:34.620297 1130827 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:34.620312 1130827 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:34.620333 1130827 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:34.620301 1130827 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.620374 1130827 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:34.620401 1130827 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0328 01:03:34.620481 1130827 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:34.620319 1130827 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:34.621997 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.622009 1130827 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:34.622009 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:34.622052 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:34.621997 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:34.622115 1130827 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:34.621996 1130827 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:34.622438 1130827 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0328 01:03:34.832761 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.849045 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0328 01:03:34.868049 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:34.883941 1130827 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-beta.0" does not exist at hash "3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8" in container runtime
	I0328 01:03:34.883988 1130827 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.884047 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:34.884972 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:34.887551 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:34.899677 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:34.904772 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:35.045850 1130827 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0328 01:03:35.045906 1130827 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:35.045944 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.045959 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:35.064862 1130827 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" does not exist at hash "c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa" in container runtime
	I0328 01:03:35.064908 1130827 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:35.064959 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.066700 1130827 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" does not exist at hash "f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841" in container runtime
	I0328 01:03:35.066753 1130827 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:35.066820 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.097425 1130827 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" does not exist at hash "746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac" in container runtime
	I0328 01:03:35.097479 1130827 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:35.097546 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.097619 1130827 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0328 01:03:35.097667 1130827 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:35.097715 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.126977 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:35.126980 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.127020 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:35.127084 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:35.127090 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.127082 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:35.127161 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:35.264395 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:35.264499 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0 (exists)
	I0328 01:03:35.264534 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:35.264543 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.264506 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0328 01:03:35.264590 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.264631 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:35.264652 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0328 01:03:35.264516 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:35.264584 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0328 01:03:35.264717 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:35.264728 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:35.264768 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0328 01:03:35.269734 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0328 01:03:35.277344 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0 (exists)
	I0328 01:03:35.277580 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0328 01:03:35.279792 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0 (exists)
	I0328 01:03:35.280423 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0 (exists)
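(Editor's illustration of the cache_images flow above: each image tarball is skipped if it is already present ("copy: skipping ... (exists)") and then loaded into the runtime with `podman load -i`. Plain exec.Command stands in for the SSH plumbing; paths are the ones from this log.)

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadImage checks that the tarball is present and then loads it with podman.
func loadImage(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("tarball missing, would need transfer: %w", err)
	}
	cmd := exec.Command("sudo", "podman", "load", "-i", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	images := []string{
		"/var/lib/minikube/images/kube-proxy_v1.30.0-beta.0",
		"/var/lib/minikube/images/etcd_3.5.12-0",
		"/var/lib/minikube/images/coredns_v1.11.1",
	}
	for _, img := range images {
		if err := loadImage(img); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}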
	I0328 01:03:35.535980 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:33.413451 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:35.414017 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:37.913609 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:35.568345 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:36.068227 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:36.568528 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:37.068834 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:37.568407 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:38.068142 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:38.568732 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:39.068094 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:39.568799 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:40.068973 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:35.940767 1131600 pod_ready.go:102] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:37.440919 1131600 pod_ready.go:92] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:37.440949 1131600 pod_ready.go:81] duration metric: took 8.008542386s for pod "coredns-76f75df574-79cdj" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:37.440963 1131600 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:39.452822 1131600 pod_ready.go:102] pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:40.467937 1131600 pod_ready.go:92] pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.467973 1131600 pod_ready.go:81] duration metric: took 3.027001179s for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.467987 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.491342 1131600 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.491373 1131600 pod_ready.go:81] duration metric: took 23.375914ms for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.491387 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.511379 1131600 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.511414 1131600 pod_ready.go:81] duration metric: took 20.018124ms for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.511430 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d776v" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.526689 1131600 pod_ready.go:92] pod "kube-proxy-d776v" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.526724 1131600 pod_ready.go:81] duration metric: took 15.28424ms for pod "kube-proxy-d776v" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.526738 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
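(Editor's illustration of the pod_ready wait loops above: fetch the pod and check its PodReady condition until a deadline. A hedged sketch using client-go directly; the kubeconfig source, poll interval, and pod name below are placeholders, not the values pod_ready.go uses internally.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.TODO(), "kube-scheduler-default-k8s-diff-port-283961", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}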
	I0328 01:03:37.431690 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0: (2.167073369s)
	I0328 01:03:37.431729 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 from cache
	I0328 01:03:37.431755 1130827 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0328 01:03:37.431764 1130827 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.895749302s)
	I0328 01:03:37.431805 1130827 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0328 01:03:37.431811 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0328 01:03:37.431837 1130827 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:37.431870 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:39.913936 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:42.412656 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:40.568441 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:41.068790 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:41.568919 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:42.068166 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:42.568012 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:43.068027 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:43.568916 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:44.067940 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:44.568074 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:45.068786 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:42.535179 1131600 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:44.034128 1131600 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:44.034164 1131600 pod_ready.go:81] duration metric: took 3.507415677s for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:44.034175 1131600 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:41.523268 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.091420228s)
	I0328 01:03:41.523305 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0328 01:03:41.523330 1130827 ssh_runner.go:235] Completed: which crictl: (4.091431875s)
	I0328 01:03:41.523345 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:41.523412 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:41.523445 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:41.567312 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0328 01:03:41.567455 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0328 01:03:44.336954 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0: (2.813479223s)
	I0328 01:03:44.336991 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 from cache
	I0328 01:03:44.336994 1130827 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.769509386s)
	I0328 01:03:44.337020 1130827 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0328 01:03:44.337035 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0328 01:03:44.337080 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0328 01:03:44.414767 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:46.415110 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:45.568662 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:46.068299 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:46.568793 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:47.068929 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:47.568250 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:48.068910 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:48.568138 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:49.068128 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:49.568153 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:50.068075 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:46.042489 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:48.541049 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:50.547355 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:46.297705 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.960592772s)
	I0328 01:03:46.297744 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0328 01:03:46.297776 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:46.297828 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:47.769522 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0: (1.471661236s)
	I0328 01:03:47.769569 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 from cache
	I0328 01:03:47.769602 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:47.769656 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:50.231843 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0: (2.462162757s)
	I0328 01:03:50.231876 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 from cache
	I0328 01:03:50.231902 1130827 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0328 01:03:50.231956 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0328 01:03:48.913184 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:51.412474 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:50.568929 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:51.068812 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:51.568899 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:52.068890 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:52.568751 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:53.068406 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:53.568466 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.068039 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.568745 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:55.068690 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:53.041197 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:55.041977 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:51.188382 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0328 01:03:51.188441 1130827 cache_images.go:123] Successfully loaded all cached images
	I0328 01:03:51.188448 1130827 cache_images.go:92] duration metric: took 16.568214969s to LoadCachedImages
	I0328 01:03:51.188464 1130827 kubeadm.go:928] updating node { 192.168.61.107 8443 v1.30.0-beta.0 crio true true} ...
	I0328 01:03:51.188628 1130827 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-248059 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-248059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:03:51.188710 1130827 ssh_runner.go:195] Run: crio config
	I0328 01:03:51.237071 1130827 cni.go:84] Creating CNI manager for ""
	I0328 01:03:51.237099 1130827 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:51.237109 1130827 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:03:51.237131 1130827 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.107 APIServerPort:8443 KubernetesVersion:v1.30.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-248059 NodeName:no-preload-248059 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 01:03:51.237263 1130827 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-248059"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 01:03:51.237330 1130827 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-beta.0
	I0328 01:03:51.248044 1130827 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:03:51.248113 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:03:51.257854 1130827 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0328 01:03:51.276307 1130827 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0328 01:03:51.294698 1130827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0328 01:03:51.313297 1130827 ssh_runner.go:195] Run: grep 192.168.61.107	control-plane.minikube.internal$ /etc/hosts
	I0328 01:03:51.317668 1130827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
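(Editor's illustration of the /etc/hosts rewrite above: drop any stale control-plane.minikube.internal mapping, append the fresh one, and replace the file via a temporary copy, mirroring the `grep -v ... > /tmp/h.$$; sudo cp` trick in the log. Written against a local file for clarity; no sudo/SSH plumbing.)

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.61.107\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue // drop the stale mapping, like `grep -v` in the log
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)

	// Write to a temp file first, then swap it into place.
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
	if err := os.Rename(tmp, hostsPath); err != nil {
		panic(err)
	}
	fmt.Println("updated", hostsPath)
}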
	I0328 01:03:51.330478 1130827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:51.457500 1130827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:03:51.484463 1130827 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059 for IP: 192.168.61.107
	I0328 01:03:51.484493 1130827 certs.go:194] generating shared ca certs ...
	I0328 01:03:51.484518 1130827 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:51.484718 1130827 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:03:51.484768 1130827 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:03:51.484781 1130827 certs.go:256] generating profile certs ...
	I0328 01:03:51.484910 1130827 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/client.key
	I0328 01:03:51.484989 1130827 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/apiserver.key.85d037b2
	I0328 01:03:51.485040 1130827 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/proxy-client.key
	I0328 01:03:51.485196 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:03:51.485243 1130827 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:03:51.485257 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:03:51.485292 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:03:51.485327 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:03:51.485357 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:03:51.485416 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:51.486614 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:03:51.537554 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:03:51.587256 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:03:51.620264 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:03:51.652100 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0328 01:03:51.694388 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:03:51.720913 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:03:51.747141 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0328 01:03:51.776370 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:03:51.803168 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:03:51.831138 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:03:51.857272 1130827 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:03:51.876070 1130827 ssh_runner.go:195] Run: openssl version
	I0328 01:03:51.882197 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:03:51.893560 1130827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:03:51.898293 1130827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:03:51.898361 1130827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:03:51.904549 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:03:51.918175 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:03:51.930387 1130827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:03:51.935610 1130827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:03:51.935691 1130827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:03:51.942127 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:03:51.954252 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:03:51.966727 1130827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:51.971742 1130827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:51.971810 1130827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:51.978082 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
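The "openssl x509 -hash -noout" runs above compute the subject-name hash that OpenSSL uses to look up CA certificates in /etc/ssl/certs, and the following "ln -fs" creates the <hash>.0 symlink (here b5213941.0 for minikubeCA.pem) so the cert is trusted system-wide. The same two steps done by hand look roughly like this:

    # Compute the subject hash and install the <hash>.0 trust symlink
    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${H}.0"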
	I0328 01:03:51.992233 1130827 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:03:51.997556 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:03:52.004178 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:03:52.010666 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:03:52.017076 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:03:52.023334 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:03:52.029980 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
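Each "-checkend 86400" run above asks openssl whether the certificate will still be valid 24 hours (86400 seconds) from now: exit status 0 means it will not expire within that window, non-zero means it will, which is how minikube decides whether a cert needs regenerating. A standalone sketch of the same check, using a path taken from the log:

    # 0 = valid for at least 24h more, 1 = expires within 24h
    sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "cert ok for the next 24h" \
      || echo "cert expires within 24h"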
	I0328 01:03:52.036395 1130827 kubeadm.go:391] StartCluster: {Name:no-preload-248059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-248059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:03:52.036483 1130827 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:03:52.036539 1130827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:52.080486 1130827 cri.go:89] found id: ""
	I0328 01:03:52.080580 1130827 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:03:52.094552 1130827 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:03:52.094583 1130827 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:03:52.094599 1130827 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:03:52.094650 1130827 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:03:52.107008 1130827 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:03:52.108200 1130827 kubeconfig.go:125] found "no-preload-248059" server: "https://192.168.61.107:8443"
	I0328 01:03:52.110536 1130827 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:03:52.122998 1130827 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.107
	I0328 01:03:52.123044 1130827 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:03:52.123090 1130827 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:03:52.123170 1130827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:52.165568 1130827 cri.go:89] found id: ""
	I0328 01:03:52.165666 1130827 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:03:52.183930 1130827 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:03:52.195188 1130827 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:03:52.195215 1130827 kubeadm.go:156] found existing configuration files:
	
	I0328 01:03:52.195271 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:03:52.205872 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:03:52.205932 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:03:52.216481 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:03:52.226719 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:03:52.226787 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:03:52.238852 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:03:52.250272 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:03:52.250341 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:03:52.262474 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:03:52.273981 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:03:52.274059 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:03:52.286028 1130827 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:03:52.297016 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:52.406981 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.521529 1130827 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.114505514s)
	I0328 01:03:53.521569 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.735728 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.808590 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
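For the restart path the full "kubeadm init" is not re-run; the individual phases above are replayed against the same config file. Reproducing the sequence by hand on the node would look roughly like this (the same commands as in the log, grouped for readability):

    CFG=/var/tmp/minikube/kubeadm.yaml
    BIN=/var/lib/minikube/binaries/v1.30.0-beta.0
    sudo env PATH="$BIN:$PATH" kubeadm init phase certs all         --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase kubeconfig all    --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase kubelet-start     --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase control-plane all --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase etcd local        --config "$CFG"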
	I0328 01:03:53.931165 1130827 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:03:53.931281 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.432358 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.931653 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.948811 1130827 api_server.go:72] duration metric: took 1.017647613s to wait for apiserver process to appear ...
	I0328 01:03:54.948843 1130827 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:03:54.948871 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:54.949490 1130827 api_server.go:269] stopped: https://192.168.61.107:8443/healthz: Get "https://192.168.61.107:8443/healthz": dial tcp 192.168.61.107:8443: connect: connection refused
	I0328 01:03:55.449050 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:53.413775 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:55.914095 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:57.515811 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:03:57.515852 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:03:57.515872 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:57.564527 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:03:57.564560 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:03:57.949780 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:57.955515 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0328 01:03:57.955565 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0328 01:03:58.449103 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:58.456345 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0328 01:03:58.456384 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0328 01:03:58.949575 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:58.954466 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0328 01:03:58.961213 1130827 api_server.go:141] control plane version: v1.30.0-beta.0
	I0328 01:03:58.961244 1130827 api_server.go:131] duration metric: took 4.012391589s to wait for apiserver health ...
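The 403 -> 500 -> 200 progression above is the expected pattern while the apiserver comes back: the unauthenticated probe is rejected until the RBAC bootstrap roles (which grant system:unauthenticated access to /healthz) are created, then /healthz reports the poststarthooks that are still failing, and finally returns a plain "ok". The same probe can be made by hand; -k mirrors the anonymous, certificate-less requests seen in the log:

    # Unauthenticated health probe against the restarted apiserver
    curl -ks https://192.168.61.107:8443/healthz; echo
    # Verbose variant listing each individual check
    curl -ks "https://192.168.61.107:8443/healthz?verbose"; echo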
	I0328 01:03:58.961256 1130827 cni.go:84] Creating CNI manager for ""
	I0328 01:03:58.961265 1130827 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:58.963147 1130827 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:03:55.568378 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:56.068253 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:56.568989 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:57.068709 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:57.569038 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:58.068236 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:58.568386 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:59.068971 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:59.568858 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:00.067964 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:57.043266 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:59.541626 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:58.964446 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:03:58.979425 1130827 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
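minikube writes a single bridge CNI config to /etc/cni/net.d/1-k8s.conflist (457 bytes here) so that crio can wire pods into the 10.244.0.0/16 podSubnet configured earlier. To see what was actually written (the exact contents are generated by minikube and may differ between versions, but a "bridge" plugin with that range is the expectation):

    # Inspect the CNI config placed by minikube
    sudo ls /etc/cni/net.d/
    sudo cat /etc/cni/net.d/1-k8s.conflist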
	I0328 01:03:59.042826 1130827 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:03:59.060388 1130827 system_pods.go:59] 8 kube-system pods found
	I0328 01:03:59.060429 1130827 system_pods.go:61] "coredns-7db6d8ff4d-86n4s" [71402ca8-dfa7-4caf-a422-6de9f24bf9dc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:03:59.060439 1130827 system_pods.go:61] "etcd-no-preload-248059" [954b6886-b84f-4d94-bbce-7e520142eb4b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 01:03:59.060451 1130827 system_pods.go:61] "kube-apiserver-no-preload-248059" [2d3caabe-27c2-44e7-8f52-76e03f262e2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 01:03:59.060462 1130827 system_pods.go:61] "kube-controller-manager-no-preload-248059" [30b9f4aa-c9a7-4d91-8e4d-35ad32f40425] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 01:03:59.060472 1130827 system_pods.go:61] "kube-proxy-b9qpb" [7ab4cca8-0ba2-4177-84cd-c6ac045930fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:03:59.060481 1130827 system_pods.go:61] "kube-scheduler-no-preload-248059" [4d9e45e3-d990-40d4-a4be-8384c39eb9b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 01:03:59.060493 1130827 system_pods.go:61] "metrics-server-569cc877fc-cvnrj" [063a47ac-9ceb-4521-9dde-aca02ec5e0d8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:03:59.060508 1130827 system_pods.go:61] "storage-provisioner" [0a0eb2d3-a426-4b76-8009-1a0a0e0312bb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:03:59.060518 1130827 system_pods.go:74] duration metric: took 17.666067ms to wait for pod list to return data ...
	I0328 01:03:59.060533 1130827 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:03:59.065018 1130827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:03:59.065054 1130827 node_conditions.go:123] node cpu capacity is 2
	I0328 01:03:59.065071 1130827 node_conditions.go:105] duration metric: took 4.531253ms to run NodePressure ...
	I0328 01:03:59.065097 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:59.454609 1130827 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 01:03:59.459707 1130827 kubeadm.go:733] kubelet initialised
	I0328 01:03:59.459730 1130827 kubeadm.go:734] duration metric: took 5.09757ms waiting for restarted kubelet to initialise ...
	I0328 01:03:59.459739 1130827 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:03:59.465352 1130827 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.471020 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.471054 1130827 pod_ready.go:81] duration metric: took 5.676291ms for pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.471067 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.471075 1130827 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.476393 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "etcd-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.476421 1130827 pod_ready.go:81] duration metric: took 5.333391ms for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.476430 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "etcd-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.476436 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.485889 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "kube-apiserver-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.485924 1130827 pod_ready.go:81] duration metric: took 9.481204ms for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.485937 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "kube-apiserver-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.485957 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.491064 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.491095 1130827 pod_ready.go:81] duration metric: took 5.125981ms for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.491107 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.491116 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-b9qpb" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.858724 1130827 pod_ready.go:92] pod "kube-proxy-b9qpb" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:59.858753 1130827 pod_ready.go:81] duration metric: took 367.628034ms for pod "kube-proxy-b9qpb" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.858764 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
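The pod_ready helper above simply polls pod conditions through the API; while the node itself still reports Ready=False, the static control-plane pods are skipped with the "node ... is currently not Ready" messages. An equivalent manual check with kubectl, assuming the no-preload-248059 context created for this profile, is a sketch like:

    # Watch node and kube-system pod readiness the same way the test helper does
    kubectl --context no-preload-248059 get nodes
    kubectl --context no-preload-248059 -n kube-system get pods -o wide
    # Block until a specific pod reports Ready (4m matches the helper's per-pod timeout)
    kubectl --context no-preload-248059 -n kube-system wait --for=condition=Ready \
      pod/kube-scheduler-no-preload-248059 --timeout=4m0s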
	I0328 01:03:58.413911 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:00.913297 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:02.913414 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:00.568622 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:01.067943 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:01.567964 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:02.068537 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:02.568772 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:03.068458 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:03.568943 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:04.068085 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:04.068176 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:04.112601 1131323 cri.go:89] found id: ""
	I0328 01:04:04.112631 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.112642 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:04.112650 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:04.112726 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:04.151837 1131323 cri.go:89] found id: ""
	I0328 01:04:04.151873 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.151885 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:04.151894 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:04.151965 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:04.193411 1131323 cri.go:89] found id: ""
	I0328 01:04:04.193451 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.193463 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:04.193473 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:04.193545 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:04.239623 1131323 cri.go:89] found id: ""
	I0328 01:04:04.239652 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.239662 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:04.239673 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:04.239732 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:04.279561 1131323 cri.go:89] found id: ""
	I0328 01:04:04.279600 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.279615 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:04.279627 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:04.279708 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:04.318680 1131323 cri.go:89] found id: ""
	I0328 01:04:04.318710 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.318722 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:04.318731 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:04.318797 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:04.356486 1131323 cri.go:89] found id: ""
	I0328 01:04:04.356514 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.356523 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:04.356530 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:04.356586 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:04.394281 1131323 cri.go:89] found id: ""
	I0328 01:04:04.394319 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.394334 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:04.394348 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:04.394364 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:04.458688 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:04.458729 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:04.501399 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:04.501440 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:04.556183 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:04.556225 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:04.571392 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:04.571427 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:04.709967 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
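The describe-nodes failure above is expected at this point in the old-k8s-version run: every crictl listing in this block returns no containers, so there is no apiserver behind localhost:8443 yet and the bundled v1.20.0 kubectl cannot connect. The same state can be confirmed directly on that node with the paths already shown in the log:

    # No control-plane containers yet, hence the connection-refused from kubectl
    sudo crictl ps -a
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl get nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig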
	I0328 01:04:02.041555 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:04.541464 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:01.866183 1130827 pod_ready.go:102] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:03.868706 1130827 pod_ready.go:102] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:04.915667 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:07.412548 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:07.210550 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:07.224274 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:07.224345 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:07.262604 1131323 cri.go:89] found id: ""
	I0328 01:04:07.262640 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.262665 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:07.262674 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:07.262763 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:07.296868 1131323 cri.go:89] found id: ""
	I0328 01:04:07.296907 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.296918 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:07.296926 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:07.296992 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:07.333110 1131323 cri.go:89] found id: ""
	I0328 01:04:07.333149 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.333162 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:07.333171 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:07.333240 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:07.371138 1131323 cri.go:89] found id: ""
	I0328 01:04:07.371168 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.371186 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:07.371195 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:07.371259 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:07.412197 1131323 cri.go:89] found id: ""
	I0328 01:04:07.412230 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.412242 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:07.412251 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:07.412331 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:07.457021 1131323 cri.go:89] found id: ""
	I0328 01:04:07.457052 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.457070 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:07.457080 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:07.457153 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:07.517996 1131323 cri.go:89] found id: ""
	I0328 01:04:07.518026 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.518034 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:07.518040 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:07.518111 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:07.556829 1131323 cri.go:89] found id: ""
	I0328 01:04:07.556856 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.556865 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:07.556875 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:07.556890 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:07.572234 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:07.572270 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:07.648615 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:07.648641 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:07.648658 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:07.719617 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:07.719665 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:07.764053 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:07.764097 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:10.319480 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:06.542160 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:08.550725 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:06.366150 1130827 pod_ready.go:102] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:07.365200 1130827 pod_ready.go:92] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:04:07.365233 1130827 pod_ready.go:81] duration metric: took 7.506461201s for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:04:07.365256 1130827 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace to be "Ready" ...
	I0328 01:04:09.373694 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:09.413378 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:11.913400 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:10.334347 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:10.335893 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:10.375231 1131323 cri.go:89] found id: ""
	I0328 01:04:10.375263 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.375274 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:10.375281 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:10.375353 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:10.413652 1131323 cri.go:89] found id: ""
	I0328 01:04:10.413706 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.413726 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:10.413736 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:10.413805 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:10.449546 1131323 cri.go:89] found id: ""
	I0328 01:04:10.449588 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.449597 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:10.449604 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:10.449658 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:10.487518 1131323 cri.go:89] found id: ""
	I0328 01:04:10.487556 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.487570 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:10.487579 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:10.487663 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:10.525088 1131323 cri.go:89] found id: ""
	I0328 01:04:10.525124 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.525137 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:10.525146 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:10.525213 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:10.567177 1131323 cri.go:89] found id: ""
	I0328 01:04:10.567209 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.567221 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:10.567231 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:10.567302 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:10.609440 1131323 cri.go:89] found id: ""
	I0328 01:04:10.609474 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.609485 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:10.609492 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:10.609549 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:10.652466 1131323 cri.go:89] found id: ""
	I0328 01:04:10.652502 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.652516 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:10.652529 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:10.652546 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:10.737406 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:10.737451 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:10.786955 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:10.786991 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:10.843072 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:10.843114 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:10.857209 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:10.857244 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:10.950885 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:13.451542 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:13.465833 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:13.465924 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:13.503353 1131323 cri.go:89] found id: ""
	I0328 01:04:13.503386 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.503398 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:13.503407 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:13.503474 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:13.543175 1131323 cri.go:89] found id: ""
	I0328 01:04:13.543208 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.543220 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:13.543229 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:13.543287 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:13.580796 1131323 cri.go:89] found id: ""
	I0328 01:04:13.580829 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.580840 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:13.580848 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:13.580900 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:13.619483 1131323 cri.go:89] found id: ""
	I0328 01:04:13.619516 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.619529 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:13.619539 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:13.619596 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:13.654651 1131323 cri.go:89] found id: ""
	I0328 01:04:13.654683 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.654697 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:13.654705 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:13.654774 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:13.691763 1131323 cri.go:89] found id: ""
	I0328 01:04:13.691794 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.691805 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:13.691813 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:13.691881 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:13.730580 1131323 cri.go:89] found id: ""
	I0328 01:04:13.730614 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.730627 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:13.730635 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:13.730694 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:13.767802 1131323 cri.go:89] found id: ""
	I0328 01:04:13.767834 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.767848 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:13.767860 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:13.767876 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:13.815612 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:13.815653 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:13.870945 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:13.870991 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:13.891456 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:13.891506 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:14.022124 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:14.022163 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:14.022187 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:11.041196 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:13.044490 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:15.541942 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:11.873574 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:13.875251 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:14.412081 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:16.412837 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:16.604087 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:16.618872 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:16.618971 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:16.665628 1131323 cri.go:89] found id: ""
	I0328 01:04:16.665661 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.665675 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:16.665683 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:16.665780 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:16.703727 1131323 cri.go:89] found id: ""
	I0328 01:04:16.703758 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.703768 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:16.703775 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:16.703835 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:16.741425 1131323 cri.go:89] found id: ""
	I0328 01:04:16.741455 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.741464 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:16.741470 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:16.741524 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:16.782333 1131323 cri.go:89] found id: ""
	I0328 01:04:16.782373 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.782387 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:16.782398 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:16.782469 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:16.820321 1131323 cri.go:89] found id: ""
	I0328 01:04:16.820355 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.820364 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:16.820372 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:16.820429 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:16.861091 1131323 cri.go:89] found id: ""
	I0328 01:04:16.861130 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.861144 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:16.861154 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:16.861226 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:16.901347 1131323 cri.go:89] found id: ""
	I0328 01:04:16.901394 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.901408 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:16.901418 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:16.901491 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:16.944027 1131323 cri.go:89] found id: ""
	I0328 01:04:16.944067 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.944080 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:16.944093 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:16.944110 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:16.959104 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:16.959151 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:17.035432 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:17.035464 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:17.035480 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:17.116236 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:17.116276 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:17.159321 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:17.159370 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:19.711326 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:19.726016 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:19.726094 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:19.776639 1131323 cri.go:89] found id: ""
	I0328 01:04:19.776676 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.776690 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:19.776700 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:19.776782 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:19.817849 1131323 cri.go:89] found id: ""
	I0328 01:04:19.817887 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.817897 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:19.817904 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:19.817981 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:19.855055 1131323 cri.go:89] found id: ""
	I0328 01:04:19.855089 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.855102 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:19.855110 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:19.855177 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:19.895296 1131323 cri.go:89] found id: ""
	I0328 01:04:19.895332 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.895346 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:19.895354 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:19.895414 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:19.930936 1131323 cri.go:89] found id: ""
	I0328 01:04:19.930968 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.930980 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:19.930989 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:19.931067 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:19.968573 1131323 cri.go:89] found id: ""
	I0328 01:04:19.968610 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.968623 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:19.968632 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:19.968693 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:20.006130 1131323 cri.go:89] found id: ""
	I0328 01:04:20.006180 1131323 logs.go:276] 0 containers: []
	W0328 01:04:20.006195 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:20.006203 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:20.006304 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:20.043646 1131323 cri.go:89] found id: ""
	I0328 01:04:20.043678 1131323 logs.go:276] 0 containers: []
	W0328 01:04:20.043689 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:20.043701 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:20.043717 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:20.058728 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:20.058761 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:20.136392 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:20.136417 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:20.136431 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:20.214971 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:20.215015 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:20.255002 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:20.255047 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:18.041868 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:20.542175 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:16.372600 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:18.373203 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:20.374228 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:18.913596 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:20.913978 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:22.914777 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:22.810078 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:22.824083 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:22.824169 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:22.862037 1131323 cri.go:89] found id: ""
	I0328 01:04:22.862066 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.862074 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:22.862081 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:22.862141 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:22.901625 1131323 cri.go:89] found id: ""
	I0328 01:04:22.901658 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.901670 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:22.901679 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:22.901752 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:22.938858 1131323 cri.go:89] found id: ""
	I0328 01:04:22.938891 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.938903 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:22.938912 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:22.938983 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:22.978781 1131323 cri.go:89] found id: ""
	I0328 01:04:22.978818 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.978829 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:22.978837 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:22.978910 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:23.016844 1131323 cri.go:89] found id: ""
	I0328 01:04:23.016882 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.016895 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:23.016904 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:23.016975 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:23.058456 1131323 cri.go:89] found id: ""
	I0328 01:04:23.058508 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.058522 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:23.058531 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:23.058604 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:23.099368 1131323 cri.go:89] found id: ""
	I0328 01:04:23.099399 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.099408 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:23.099420 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:23.099492 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:23.135593 1131323 cri.go:89] found id: ""
	I0328 01:04:23.135634 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.135653 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:23.135665 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:23.135679 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:23.191215 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:23.191260 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:23.206849 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:23.206884 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:23.289566 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:23.289596 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:23.289618 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:23.365429 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:23.365480 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:23.042312 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.541788 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:22.872233 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.373908 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.413591 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:27.912983 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.914883 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:25.929336 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:25.929415 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:25.969452 1131323 cri.go:89] found id: ""
	I0328 01:04:25.969485 1131323 logs.go:276] 0 containers: []
	W0328 01:04:25.969497 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:25.969506 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:25.969573 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:26.008978 1131323 cri.go:89] found id: ""
	I0328 01:04:26.009006 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.009015 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:26.009022 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:26.009075 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:26.051110 1131323 cri.go:89] found id: ""
	I0328 01:04:26.051138 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.051146 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:26.051153 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:26.051213 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:26.088231 1131323 cri.go:89] found id: ""
	I0328 01:04:26.088262 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.088271 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:26.088277 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:26.088342 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:26.125741 1131323 cri.go:89] found id: ""
	I0328 01:04:26.125782 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.125794 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:26.125800 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:26.125867 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:26.163367 1131323 cri.go:89] found id: ""
	I0328 01:04:26.163406 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.163417 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:26.163426 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:26.163503 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:26.202302 1131323 cri.go:89] found id: ""
	I0328 01:04:26.202340 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.202355 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:26.202364 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:26.202422 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:26.240880 1131323 cri.go:89] found id: ""
	I0328 01:04:26.240911 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.240921 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:26.240931 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:26.240943 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:26.283151 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:26.283180 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:26.341313 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:26.341350 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:26.356762 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:26.356791 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:26.428033 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:26.428054 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:26.428066 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:29.006332 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:29.020634 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:29.020745 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:29.060812 1131323 cri.go:89] found id: ""
	I0328 01:04:29.060843 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.060852 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:29.060859 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:29.060916 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:29.100110 1131323 cri.go:89] found id: ""
	I0328 01:04:29.100139 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.100149 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:29.100155 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:29.100212 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:29.140345 1131323 cri.go:89] found id: ""
	I0328 01:04:29.140384 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.140396 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:29.140404 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:29.140479 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:29.182415 1131323 cri.go:89] found id: ""
	I0328 01:04:29.182449 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.182459 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:29.182465 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:29.182533 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:29.225177 1131323 cri.go:89] found id: ""
	I0328 01:04:29.225214 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.225225 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:29.225233 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:29.225310 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:29.265437 1131323 cri.go:89] found id: ""
	I0328 01:04:29.265471 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.265485 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:29.265493 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:29.265556 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:29.301578 1131323 cri.go:89] found id: ""
	I0328 01:04:29.301617 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.301630 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:29.301639 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:29.301719 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:29.340816 1131323 cri.go:89] found id: ""
	I0328 01:04:29.340847 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.340856 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:29.340867 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:29.340880 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:29.384658 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:29.384687 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:29.439243 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:29.439285 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:29.456179 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:29.456211 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:29.534878 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:29.534906 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:29.534927 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:28.041463 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:30.042506 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:27.872489 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:30.371109 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:29.913856 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:32.415699 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:32.115798 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:32.130464 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:32.130560 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:32.168846 1131323 cri.go:89] found id: ""
	I0328 01:04:32.168877 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.168887 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:32.168894 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:32.168952 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:32.208590 1131323 cri.go:89] found id: ""
	I0328 01:04:32.208622 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.208632 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:32.208638 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:32.208694 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:32.247323 1131323 cri.go:89] found id: ""
	I0328 01:04:32.247362 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.247375 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:32.247384 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:32.247507 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:32.285260 1131323 cri.go:89] found id: ""
	I0328 01:04:32.285293 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.285312 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:32.285319 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:32.285395 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:32.326678 1131323 cri.go:89] found id: ""
	I0328 01:04:32.326712 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.326725 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:32.326740 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:32.326823 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:32.363375 1131323 cri.go:89] found id: ""
	I0328 01:04:32.363403 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.363412 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:32.363419 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:32.363473 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:32.401410 1131323 cri.go:89] found id: ""
	I0328 01:04:32.401449 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.401462 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:32.401470 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:32.401558 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:32.438645 1131323 cri.go:89] found id: ""
	I0328 01:04:32.438680 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.438691 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:32.438703 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:32.438718 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:32.488743 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:32.488786 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:32.503908 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:32.503944 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:32.577307 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:32.577333 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:32.577350 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:32.657787 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:32.657832 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:35.201151 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:35.215313 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:35.215383 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:35.253467 1131323 cri.go:89] found id: ""
	I0328 01:04:35.253504 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.253515 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:35.253522 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:35.253593 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:35.290218 1131323 cri.go:89] found id: ""
	I0328 01:04:35.290280 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.290292 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:35.290300 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:35.290378 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:35.330714 1131323 cri.go:89] found id: ""
	I0328 01:04:35.330749 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.330757 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:35.330764 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:35.330831 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:32.542071 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:34.544163 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:32.372100 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:34.872293 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:34.913212 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:37.411734 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:35.371524 1131323 cri.go:89] found id: ""
	I0328 01:04:35.371553 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.371570 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:35.371577 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:35.371630 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:35.411610 1131323 cri.go:89] found id: ""
	I0328 01:04:35.411638 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.411646 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:35.411652 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:35.411711 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:35.456709 1131323 cri.go:89] found id: ""
	I0328 01:04:35.456745 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.456758 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:35.456766 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:35.456836 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:35.492688 1131323 cri.go:89] found id: ""
	I0328 01:04:35.492719 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.492729 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:35.492755 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:35.492811 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:35.531205 1131323 cri.go:89] found id: ""
	I0328 01:04:35.531234 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.531243 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:35.531254 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:35.531266 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:35.611803 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:35.611845 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:35.653513 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:35.653551 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:35.708030 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:35.708075 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:35.724542 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:35.724576 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:35.798624 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:38.299312 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:38.314128 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:38.314213 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:38.357728 1131323 cri.go:89] found id: ""
	I0328 01:04:38.357761 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.357779 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:38.357786 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:38.357848 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:38.394512 1131323 cri.go:89] found id: ""
	I0328 01:04:38.394541 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.394549 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:38.394558 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:38.394618 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:38.434353 1131323 cri.go:89] found id: ""
	I0328 01:04:38.434380 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.434391 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:38.434399 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:38.434466 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:38.477662 1131323 cri.go:89] found id: ""
	I0328 01:04:38.477693 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.477703 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:38.477710 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:38.477763 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:38.515014 1131323 cri.go:89] found id: ""
	I0328 01:04:38.515044 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.515053 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:38.515060 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:38.515117 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:38.558865 1131323 cri.go:89] found id: ""
	I0328 01:04:38.558899 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.558911 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:38.558920 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:38.558982 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:38.600261 1131323 cri.go:89] found id: ""
	I0328 01:04:38.600290 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.600299 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:38.600306 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:38.600366 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:38.637131 1131323 cri.go:89] found id: ""
	I0328 01:04:38.637167 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.637179 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:38.637194 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:38.637218 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:38.716032 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:38.716058 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:38.716079 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:38.804534 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:38.804578 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:38.851781 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:38.851820 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:38.910091 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:38.910125 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:37.041273 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:39.541843 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:37.372262 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:39.372555 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:39.912953 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:42.412667 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:41.425801 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:41.441072 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:41.441168 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:41.482934 1131323 cri.go:89] found id: ""
	I0328 01:04:41.482962 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.482974 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:41.482983 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:41.483063 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:41.521762 1131323 cri.go:89] found id: ""
	I0328 01:04:41.521796 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.521810 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:41.521819 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:41.521931 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:41.560814 1131323 cri.go:89] found id: ""
	I0328 01:04:41.560847 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.560857 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:41.560864 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:41.560928 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:41.601158 1131323 cri.go:89] found id: ""
	I0328 01:04:41.601189 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.601199 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:41.601206 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:41.601271 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:41.638760 1131323 cri.go:89] found id: ""
	I0328 01:04:41.638789 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.638799 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:41.638806 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:41.638861 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:41.675235 1131323 cri.go:89] found id: ""
	I0328 01:04:41.675268 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.675278 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:41.675285 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:41.675341 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:41.712918 1131323 cri.go:89] found id: ""
	I0328 01:04:41.712957 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.712972 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:41.712983 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:41.713078 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:41.750552 1131323 cri.go:89] found id: ""
	I0328 01:04:41.750582 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.750591 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:41.750601 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:41.750617 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:41.811163 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:41.811204 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:41.826502 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:41.826547 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:41.900727 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:41.900759 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:41.900777 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:41.981731 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:41.981783 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:44.525845 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:44.542301 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:44.542389 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:44.584907 1131323 cri.go:89] found id: ""
	I0328 01:04:44.584936 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.584945 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:44.584952 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:44.585007 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:44.630465 1131323 cri.go:89] found id: ""
	I0328 01:04:44.630499 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.630511 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:44.630520 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:44.630588 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:44.669095 1131323 cri.go:89] found id: ""
	I0328 01:04:44.669131 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.669143 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:44.669152 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:44.669235 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:44.708445 1131323 cri.go:89] found id: ""
	I0328 01:04:44.708484 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.708495 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:44.708502 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:44.708570 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:44.747706 1131323 cri.go:89] found id: ""
	I0328 01:04:44.747744 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.747755 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:44.747762 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:44.747822 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:44.787768 1131323 cri.go:89] found id: ""
	I0328 01:04:44.787807 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.787821 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:44.787830 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:44.787899 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:44.829018 1131323 cri.go:89] found id: ""
	I0328 01:04:44.829049 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.829059 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:44.829066 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:44.829123 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:44.874334 1131323 cri.go:89] found id: ""
	I0328 01:04:44.874374 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.874383 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:44.874393 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:44.874405 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:44.921577 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:44.921619 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:44.976660 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:44.976713 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:44.991365 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:44.991400 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:45.067595 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:45.067630 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:45.067651 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:42.042736 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:44.543288 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:41.372902 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:43.872925 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:45.873163 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:44.913827 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:47.412342 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:47.647634 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:47.663581 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:47.663687 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:47.702889 1131323 cri.go:89] found id: ""
	I0328 01:04:47.702940 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.702954 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:47.702966 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:47.703043 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:47.744995 1131323 cri.go:89] found id: ""
	I0328 01:04:47.745027 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.745037 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:47.745044 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:47.745103 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:47.785518 1131323 cri.go:89] found id: ""
	I0328 01:04:47.785550 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.785562 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:47.785572 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:47.785645 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:47.831739 1131323 cri.go:89] found id: ""
	I0328 01:04:47.831771 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.831786 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:47.831794 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:47.831867 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:47.871864 1131323 cri.go:89] found id: ""
	I0328 01:04:47.871906 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.871918 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:47.871929 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:47.872008 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:47.907899 1131323 cri.go:89] found id: ""
	I0328 01:04:47.907934 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.907946 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:47.907955 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:47.908022 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:47.946073 1131323 cri.go:89] found id: ""
	I0328 01:04:47.946107 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.946118 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:47.946127 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:47.946223 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:47.986122 1131323 cri.go:89] found id: ""
	I0328 01:04:47.986154 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.986168 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:47.986182 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:47.986198 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:48.057234 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:48.057271 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:48.109881 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:48.109926 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:48.125154 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:48.125189 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:48.208295 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:48.208327 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:48.208345 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:47.041447 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:49.542203 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:48.371275 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:50.372057 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:49.413451 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:51.414465 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:50.785126 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:50.800000 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:50.800078 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:50.839883 1131323 cri.go:89] found id: ""
	I0328 01:04:50.839911 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.839920 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:50.839927 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:50.839983 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:50.879627 1131323 cri.go:89] found id: ""
	I0328 01:04:50.879654 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.879661 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:50.879668 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:50.879734 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:50.918392 1131323 cri.go:89] found id: ""
	I0328 01:04:50.918434 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.918446 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:50.918454 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:50.918517 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:50.957198 1131323 cri.go:89] found id: ""
	I0328 01:04:50.957234 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.957248 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:50.957257 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:50.957328 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:50.997389 1131323 cri.go:89] found id: ""
	I0328 01:04:50.997424 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.997438 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:50.997446 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:50.997513 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:51.040259 1131323 cri.go:89] found id: ""
	I0328 01:04:51.040296 1131323 logs.go:276] 0 containers: []
	W0328 01:04:51.040309 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:51.040318 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:51.040389 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:51.081824 1131323 cri.go:89] found id: ""
	I0328 01:04:51.081858 1131323 logs.go:276] 0 containers: []
	W0328 01:04:51.081868 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:51.081875 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:51.081942 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:51.119742 1131323 cri.go:89] found id: ""
	I0328 01:04:51.119783 1131323 logs.go:276] 0 containers: []
	W0328 01:04:51.119796 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:51.119810 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:51.119836 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:51.173486 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:51.173529 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:51.188532 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:51.188568 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:51.269181 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:51.269207 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:51.269226 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:51.349882 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:51.349936 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:53.893562 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:53.910104 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:53.910186 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:53.951333 1131323 cri.go:89] found id: ""
	I0328 01:04:53.951375 1131323 logs.go:276] 0 containers: []
	W0328 01:04:53.951388 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:53.951397 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:53.951472 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:53.992438 1131323 cri.go:89] found id: ""
	I0328 01:04:53.992474 1131323 logs.go:276] 0 containers: []
	W0328 01:04:53.992486 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:53.992493 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:53.992561 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:54.032934 1131323 cri.go:89] found id: ""
	I0328 01:04:54.032969 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.032982 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:54.032992 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:54.033061 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:54.074670 1131323 cri.go:89] found id: ""
	I0328 01:04:54.074707 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.074777 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:54.074801 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:54.074875 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:54.111527 1131323 cri.go:89] found id: ""
	I0328 01:04:54.111555 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.111566 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:54.111573 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:54.111658 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:54.151401 1131323 cri.go:89] found id: ""
	I0328 01:04:54.151428 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.151437 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:54.151443 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:54.151494 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:54.197997 1131323 cri.go:89] found id: ""
	I0328 01:04:54.198036 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.198048 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:54.198058 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:54.198135 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:54.234016 1131323 cri.go:89] found id: ""
	I0328 01:04:54.234049 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.234058 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:54.234068 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:54.234081 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:54.286118 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:54.286161 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:54.300489 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:54.300541 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:54.376949 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:54.376972 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:54.376988 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:54.463857 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:54.463901 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:52.041517 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:54.042088 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:52.875923 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:55.371823 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:53.912140 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:55.912329 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:57.026395 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:57.041270 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:57.041358 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:57.082380 1131323 cri.go:89] found id: ""
	I0328 01:04:57.082416 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.082428 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:57.082436 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:57.082503 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:57.121835 1131323 cri.go:89] found id: ""
	I0328 01:04:57.121870 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.121885 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:57.121894 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:57.121969 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:57.163688 1131323 cri.go:89] found id: ""
	I0328 01:04:57.163725 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.163737 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:57.163745 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:57.163819 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:57.212628 1131323 cri.go:89] found id: ""
	I0328 01:04:57.212666 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.212693 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:57.212703 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:57.212788 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:57.249196 1131323 cri.go:89] found id: ""
	I0328 01:04:57.249231 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.249244 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:57.249253 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:57.249318 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:57.286996 1131323 cri.go:89] found id: ""
	I0328 01:04:57.287031 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.287040 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:57.287047 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:57.287101 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:57.324523 1131323 cri.go:89] found id: ""
	I0328 01:04:57.324551 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.324560 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:57.324566 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:57.324627 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:57.363946 1131323 cri.go:89] found id: ""
	I0328 01:04:57.363984 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.363998 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:57.364012 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:57.364034 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:57.418300 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:57.418337 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:57.433214 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:57.433242 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:57.508623 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:57.508651 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:57.508665 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:57.586336 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:57.586377 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:00.129903 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:00.146829 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:00.146920 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:00.197823 1131323 cri.go:89] found id: ""
	I0328 01:05:00.197856 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.197865 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:00.197872 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:00.197930 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:00.257523 1131323 cri.go:89] found id: ""
	I0328 01:05:00.257561 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.257575 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:00.257584 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:00.257657 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:00.314511 1131323 cri.go:89] found id: ""
	I0328 01:05:00.314539 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.314549 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:00.314558 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:00.314610 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:56.042295 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:58.541684 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:00.543232 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:57.372451 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:59.372577 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:58.412203 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:00.412880 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:02.913222 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:00.351043 1131323 cri.go:89] found id: ""
	I0328 01:05:00.351076 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.351090 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:00.351098 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:00.351167 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:00.391477 1131323 cri.go:89] found id: ""
	I0328 01:05:00.391507 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.391519 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:00.391525 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:00.391595 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:00.436196 1131323 cri.go:89] found id: ""
	I0328 01:05:00.436230 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.436242 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:00.436249 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:00.436316 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:00.473389 1131323 cri.go:89] found id: ""
	I0328 01:05:00.473428 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.473441 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:00.473450 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:00.473523 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:00.508829 1131323 cri.go:89] found id: ""
	I0328 01:05:00.508866 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.508879 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:00.508901 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:00.508931 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:00.553709 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:00.553741 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:00.612679 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:00.612732 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:00.630908 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:00.630948 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:00.706984 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:00.707016 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:00.707034 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:03.287887 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:03.304679 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:03.304779 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:03.343579 1131323 cri.go:89] found id: ""
	I0328 01:05:03.343608 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.343618 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:03.343625 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:03.343677 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:03.387158 1131323 cri.go:89] found id: ""
	I0328 01:05:03.387192 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.387206 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:03.387224 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:03.387308 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:03.426622 1131323 cri.go:89] found id: ""
	I0328 01:05:03.426653 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.426663 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:03.426670 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:03.426724 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:03.468743 1131323 cri.go:89] found id: ""
	I0328 01:05:03.468780 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.468793 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:03.468801 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:03.468870 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:03.508518 1131323 cri.go:89] found id: ""
	I0328 01:05:03.508554 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.508566 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:03.508575 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:03.508653 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:03.548295 1131323 cri.go:89] found id: ""
	I0328 01:05:03.548331 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.548343 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:03.548353 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:03.548444 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:03.591561 1131323 cri.go:89] found id: ""
	I0328 01:05:03.591594 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.591607 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:03.591615 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:03.591670 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:03.635055 1131323 cri.go:89] found id: ""
	I0328 01:05:03.635086 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.635097 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:03.635109 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:03.635127 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:03.715639 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:03.715683 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:03.755888 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:03.755931 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:03.810128 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:03.810170 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:03.825197 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:03.825227 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:03.908589 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:03.043330 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:05.541544 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:01.372692 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:03.373747 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:05.871945 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:05.413583 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:07.912379 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:06.409060 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:06.424034 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:06.424119 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:06.461827 1131323 cri.go:89] found id: ""
	I0328 01:05:06.461888 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.461902 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:06.461911 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:06.461985 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:06.505006 1131323 cri.go:89] found id: ""
	I0328 01:05:06.505061 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.505078 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:06.505085 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:06.505145 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:06.542000 1131323 cri.go:89] found id: ""
	I0328 01:05:06.542033 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.542044 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:06.542052 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:06.542121 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:06.583725 1131323 cri.go:89] found id: ""
	I0328 01:05:06.583778 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.583800 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:06.583810 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:06.583887 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:06.620457 1131323 cri.go:89] found id: ""
	I0328 01:05:06.620501 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.620516 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:06.620524 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:06.620595 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:06.664380 1131323 cri.go:89] found id: ""
	I0328 01:05:06.664412 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.664425 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:06.664432 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:06.664502 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:06.701799 1131323 cri.go:89] found id: ""
	I0328 01:05:06.701850 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.701862 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:06.701870 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:06.701935 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:06.739899 1131323 cri.go:89] found id: ""
	I0328 01:05:06.739936 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.739948 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:06.739958 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:06.739973 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:06.814373 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:06.814404 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:06.814421 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:06.894331 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:06.894371 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:06.952912 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:06.952979 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:07.011851 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:07.011900 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:09.528068 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:09.545082 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:09.545167 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:09.586944 1131323 cri.go:89] found id: ""
	I0328 01:05:09.586983 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.586996 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:09.587004 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:09.587077 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:09.624153 1131323 cri.go:89] found id: ""
	I0328 01:05:09.624184 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.624192 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:09.624198 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:09.624256 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:09.661125 1131323 cri.go:89] found id: ""
	I0328 01:05:09.661160 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.661172 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:09.661182 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:09.661256 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:09.699865 1131323 cri.go:89] found id: ""
	I0328 01:05:09.699903 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.699916 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:09.699925 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:09.699992 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:09.737925 1131323 cri.go:89] found id: ""
	I0328 01:05:09.737958 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.737967 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:09.737973 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:09.738029 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:09.776906 1131323 cri.go:89] found id: ""
	I0328 01:05:09.776941 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.776950 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:09.776957 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:09.777021 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:09.815767 1131323 cri.go:89] found id: ""
	I0328 01:05:09.815794 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.815803 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:09.815809 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:09.815876 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:09.855880 1131323 cri.go:89] found id: ""
	I0328 01:05:09.855915 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.855928 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:09.855941 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:09.855958 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:09.918339 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:09.918376 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:09.932775 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:09.932810 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:10.011566 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:10.011594 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:10.011610 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:10.096057 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:10.096100 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:08.041230 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:10.041991 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:07.873367 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:10.372311 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:09.913349 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:12.412259 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:12.641999 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:12.655761 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:12.655843 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:12.697335 1131323 cri.go:89] found id: ""
	I0328 01:05:12.697369 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.697381 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:12.697390 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:12.697453 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:12.736482 1131323 cri.go:89] found id: ""
	I0328 01:05:12.736520 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.736534 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:12.736544 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:12.736617 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:12.771992 1131323 cri.go:89] found id: ""
	I0328 01:05:12.772034 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.772046 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:12.772055 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:12.772125 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:12.810738 1131323 cri.go:89] found id: ""
	I0328 01:05:12.810770 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.810779 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:12.810786 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:12.810837 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:12.848172 1131323 cri.go:89] found id: ""
	I0328 01:05:12.848209 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.848222 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:12.848230 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:12.848310 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:12.884660 1131323 cri.go:89] found id: ""
	I0328 01:05:12.884698 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.884710 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:12.884719 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:12.884794 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:12.926180 1131323 cri.go:89] found id: ""
	I0328 01:05:12.926209 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.926218 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:12.926244 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:12.926303 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:12.966938 1131323 cri.go:89] found id: ""
	I0328 01:05:12.966969 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.966983 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:12.966996 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:12.967014 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:13.018501 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:13.018541 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:13.033140 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:13.033175 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:13.108806 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:13.108832 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:13.108858 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:13.189198 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:13.189241 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:12.541088 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:15.041830 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:12.372413 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:14.372804 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:14.414059 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:16.912343 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:15.737415 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:15.752534 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:15.752614 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:15.789941 1131323 cri.go:89] found id: ""
	I0328 01:05:15.789974 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.789986 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:15.789994 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:15.790107 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:15.827688 1131323 cri.go:89] found id: ""
	I0328 01:05:15.827731 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.827745 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:15.827766 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:15.827831 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:15.867005 1131323 cri.go:89] found id: ""
	I0328 01:05:15.867041 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.867054 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:15.867064 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:15.867149 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:15.909924 1131323 cri.go:89] found id: ""
	I0328 01:05:15.910035 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.910055 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:15.910066 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:15.910139 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:15.950571 1131323 cri.go:89] found id: ""
	I0328 01:05:15.950606 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.950619 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:15.950632 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:15.950707 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:15.992557 1131323 cri.go:89] found id: ""
	I0328 01:05:15.992594 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.992605 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:15.992615 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:15.992687 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:16.032417 1131323 cri.go:89] found id: ""
	I0328 01:05:16.032458 1131323 logs.go:276] 0 containers: []
	W0328 01:05:16.032473 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:16.032482 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:16.032559 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:16.071399 1131323 cri.go:89] found id: ""
	I0328 01:05:16.071433 1131323 logs.go:276] 0 containers: []
	W0328 01:05:16.071445 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:16.071459 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:16.071481 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:16.147078 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:16.147113 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:16.147131 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:16.223828 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:16.223870 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:16.269377 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:16.269409 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:16.318545 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:16.318584 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:18.836044 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:18.851138 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:18.851231 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:18.887223 1131323 cri.go:89] found id: ""
	I0328 01:05:18.887260 1131323 logs.go:276] 0 containers: []
	W0328 01:05:18.887273 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:18.887283 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:18.887354 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:18.928652 1131323 cri.go:89] found id: ""
	I0328 01:05:18.928682 1131323 logs.go:276] 0 containers: []
	W0328 01:05:18.928692 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:18.928698 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:18.928756 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:18.968519 1131323 cri.go:89] found id: ""
	I0328 01:05:18.968555 1131323 logs.go:276] 0 containers: []
	W0328 01:05:18.968567 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:18.968575 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:18.968646 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:19.010939 1131323 cri.go:89] found id: ""
	I0328 01:05:19.010977 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.010991 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:19.010999 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:19.011070 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:19.048723 1131323 cri.go:89] found id: ""
	I0328 01:05:19.048748 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.048758 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:19.048769 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:19.048820 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:19.091761 1131323 cri.go:89] found id: ""
	I0328 01:05:19.091794 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.091803 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:19.091810 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:19.091863 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:19.134017 1131323 cri.go:89] found id: ""
	I0328 01:05:19.134049 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.134059 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:19.134065 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:19.134119 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:19.176070 1131323 cri.go:89] found id: ""
	I0328 01:05:19.176106 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.176118 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:19.176131 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:19.176155 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:19.261546 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:19.261584 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:19.261605 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:19.340271 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:19.340314 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:19.383625 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:19.383676 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:19.441635 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:19.441679 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:17.541876 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:20.040841 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:16.872723 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:19.372916 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:19.414384 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:21.912881 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:21.958362 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:21.974427 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:21.974528 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:22.013099 1131323 cri.go:89] found id: ""
	I0328 01:05:22.013139 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.013152 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:22.013160 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:22.013229 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:22.055558 1131323 cri.go:89] found id: ""
	I0328 01:05:22.055594 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.055604 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:22.055611 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:22.055668 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:22.106836 1131323 cri.go:89] found id: ""
	I0328 01:05:22.106870 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.106879 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:22.106886 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:22.106961 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:22.145135 1131323 cri.go:89] found id: ""
	I0328 01:05:22.145175 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.145189 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:22.145197 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:22.145266 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:22.183879 1131323 cri.go:89] found id: ""
	I0328 01:05:22.183909 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.183919 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:22.183926 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:22.183981 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:22.223087 1131323 cri.go:89] found id: ""
	I0328 01:05:22.223115 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.223124 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:22.223131 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:22.223209 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:22.263232 1131323 cri.go:89] found id: ""
	I0328 01:05:22.263262 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.263272 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:22.263279 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:22.263331 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:22.302919 1131323 cri.go:89] found id: ""
	I0328 01:05:22.302954 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.302967 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:22.302980 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:22.302998 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:22.358550 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:22.358596 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:22.374688 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:22.374722 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:22.453584 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:22.453609 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:22.453624 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:22.540983 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:22.541048 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:25.091773 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:25.107412 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:25.107484 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:25.143917 1131323 cri.go:89] found id: ""
	I0328 01:05:25.143944 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.143953 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:25.143960 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:25.144013 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:25.183615 1131323 cri.go:89] found id: ""
	I0328 01:05:25.183650 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.183659 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:25.183666 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:25.183729 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:25.221125 1131323 cri.go:89] found id: ""
	I0328 01:05:25.221158 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.221167 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:25.221174 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:25.221229 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:25.262023 1131323 cri.go:89] found id: ""
	I0328 01:05:25.262056 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.262065 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:25.262072 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:25.262134 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:25.297919 1131323 cri.go:89] found id: ""
	I0328 01:05:25.297948 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.297957 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:25.297964 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:25.298035 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:22.041977 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:24.542416 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:21.872312 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:23.872885 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:23.914459 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:25.916730 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:25.336582 1131323 cri.go:89] found id: ""
	I0328 01:05:25.336610 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.336620 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:25.336627 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:25.336690 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:25.375554 1131323 cri.go:89] found id: ""
	I0328 01:05:25.375589 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.375600 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:25.375609 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:25.375683 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:25.415941 1131323 cri.go:89] found id: ""
	I0328 01:05:25.415973 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.415984 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:25.415996 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:25.416013 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:25.430168 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:25.430196 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:25.507782 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:25.507805 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:25.507862 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:25.588780 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:25.588824 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:25.634958 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:25.634997 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:28.190651 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:28.205714 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:28.205794 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:28.242015 1131323 cri.go:89] found id: ""
	I0328 01:05:28.242056 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.242067 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:28.242077 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:28.242169 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:28.289132 1131323 cri.go:89] found id: ""
	I0328 01:05:28.289169 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.289182 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:28.289189 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:28.289256 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:28.327001 1131323 cri.go:89] found id: ""
	I0328 01:05:28.327031 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.327040 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:28.327052 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:28.327105 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:28.365474 1131323 cri.go:89] found id: ""
	I0328 01:05:28.365507 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.365516 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:28.365523 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:28.365587 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:28.405494 1131323 cri.go:89] found id: ""
	I0328 01:05:28.405553 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.405567 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:28.405576 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:28.405652 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:28.448464 1131323 cri.go:89] found id: ""
	I0328 01:05:28.448502 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.448512 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:28.448521 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:28.448594 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:28.488143 1131323 cri.go:89] found id: ""
	I0328 01:05:28.488172 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.488182 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:28.488189 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:28.488258 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:28.545977 1131323 cri.go:89] found id: ""
	I0328 01:05:28.546012 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.546024 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:28.546036 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:28.546050 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:28.629955 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:28.630001 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:28.670504 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:28.670536 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:28.722021 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:28.722069 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:28.737274 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:28.737310 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:28.824025 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:27.041205 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:29.041342 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:26.372037 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:28.373545 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:30.872569 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:28.414921 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:30.912980 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:31.324497 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:31.339715 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:31.339811 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:31.379017 1131323 cri.go:89] found id: ""
	I0328 01:05:31.379050 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.379062 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:31.379072 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:31.379138 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:31.420024 1131323 cri.go:89] found id: ""
	I0328 01:05:31.420055 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.420065 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:31.420071 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:31.420136 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:31.458732 1131323 cri.go:89] found id: ""
	I0328 01:05:31.458764 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.458773 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:31.458779 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:31.458835 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:31.504249 1131323 cri.go:89] found id: ""
	I0328 01:05:31.504280 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.504292 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:31.504300 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:31.504366 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:31.545284 1131323 cri.go:89] found id: ""
	I0328 01:05:31.545316 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.545324 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:31.545331 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:31.545385 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:31.583402 1131323 cri.go:89] found id: ""
	I0328 01:05:31.583434 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.583444 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:31.583453 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:31.583587 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:31.624411 1131323 cri.go:89] found id: ""
	I0328 01:05:31.624449 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.624462 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:31.624471 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:31.624528 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:31.666103 1131323 cri.go:89] found id: ""
	I0328 01:05:31.666144 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.666158 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:31.666173 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:31.666192 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:31.717595 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:31.717636 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:31.731606 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:31.731637 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:31.803302 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:31.803325 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:31.803339 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:31.885552 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:31.885590 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:34.432446 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:34.448002 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:34.448085 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:34.493207 1131323 cri.go:89] found id: ""
	I0328 01:05:34.493246 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.493259 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:34.493268 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:34.493337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:34.541838 1131323 cri.go:89] found id: ""
	I0328 01:05:34.541871 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.541883 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:34.541891 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:34.541956 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:34.582319 1131323 cri.go:89] found id: ""
	I0328 01:05:34.582357 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.582371 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:34.582380 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:34.582458 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:34.618753 1131323 cri.go:89] found id: ""
	I0328 01:05:34.618788 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.618801 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:34.618810 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:34.618882 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:34.656994 1131323 cri.go:89] found id: ""
	I0328 01:05:34.657027 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.657037 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:34.657043 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:34.657114 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:34.695214 1131323 cri.go:89] found id: ""
	I0328 01:05:34.695252 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.695264 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:34.695271 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:34.695337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:34.733688 1131323 cri.go:89] found id: ""
	I0328 01:05:34.733718 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.733731 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:34.733739 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:34.733808 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:34.771697 1131323 cri.go:89] found id: ""
	I0328 01:05:34.771729 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.771744 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:34.771758 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:34.771776 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:34.828190 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:34.828236 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:34.842741 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:34.842776 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:34.918494 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:34.918525 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:34.918541 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:35.012689 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:35.012747 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:31.042633 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:33.541295 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:35.541588 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:33.371991 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:35.872753 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:33.412886 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:35.914065 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:37.574759 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:37.590014 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:37.590128 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:37.626883 1131323 cri.go:89] found id: ""
	I0328 01:05:37.626914 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.626926 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:37.626935 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:37.627005 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:37.665171 1131323 cri.go:89] found id: ""
	I0328 01:05:37.665202 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.665215 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:37.665225 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:37.665294 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:37.702923 1131323 cri.go:89] found id: ""
	I0328 01:05:37.702963 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.702976 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:37.702984 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:37.703064 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:37.741148 1131323 cri.go:89] found id: ""
	I0328 01:05:37.741182 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.741191 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:37.741199 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:37.741269 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:37.782298 1131323 cri.go:89] found id: ""
	I0328 01:05:37.782331 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.782341 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:37.782348 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:37.782407 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:37.819056 1131323 cri.go:89] found id: ""
	I0328 01:05:37.819110 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.819124 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:37.819134 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:37.819215 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:37.862372 1131323 cri.go:89] found id: ""
	I0328 01:05:37.862414 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.862427 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:37.862436 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:37.862507 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:37.899639 1131323 cri.go:89] found id: ""
	I0328 01:05:37.899675 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.899689 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:37.899703 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:37.899721 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:37.978962 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:37.978990 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:37.979007 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:38.058972 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:38.059015 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:38.102975 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:38.103016 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:38.157994 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:38.158035 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:38.041091 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.041892 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:38.371787 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.373131 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:38.412214 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.415412 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:42.912341 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.673425 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:40.690969 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:40.691041 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:40.735552 1131323 cri.go:89] found id: ""
	I0328 01:05:40.735585 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.735594 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:40.735602 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:40.735669 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:40.816611 1131323 cri.go:89] found id: ""
	I0328 01:05:40.816648 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.816661 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:40.816669 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:40.816725 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:40.864093 1131323 cri.go:89] found id: ""
	I0328 01:05:40.864125 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.864138 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:40.864147 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:40.864218 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:40.908781 1131323 cri.go:89] found id: ""
	I0328 01:05:40.908817 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.908829 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:40.908846 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:40.908914 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:40.950330 1131323 cri.go:89] found id: ""
	I0328 01:05:40.950369 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.950382 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:40.950390 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:40.950481 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:40.989983 1131323 cri.go:89] found id: ""
	I0328 01:05:40.990041 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.990054 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:40.990063 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:40.990136 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:41.042428 1131323 cri.go:89] found id: ""
	I0328 01:05:41.042470 1131323 logs.go:276] 0 containers: []
	W0328 01:05:41.042481 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:41.042489 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:41.042560 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:41.089309 1131323 cri.go:89] found id: ""
	I0328 01:05:41.089342 1131323 logs.go:276] 0 containers: []
	W0328 01:05:41.089353 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:41.089363 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:41.089377 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:41.148502 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:41.148547 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:41.163889 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:41.163918 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:41.242825 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:41.242848 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:41.242861 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:41.322658 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:41.322702 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:43.865117 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:43.880642 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:43.880729 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:43.919519 1131323 cri.go:89] found id: ""
	I0328 01:05:43.919550 1131323 logs.go:276] 0 containers: []
	W0328 01:05:43.919559 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:43.919565 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:43.919622 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:43.957906 1131323 cri.go:89] found id: ""
	I0328 01:05:43.957936 1131323 logs.go:276] 0 containers: []
	W0328 01:05:43.957945 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:43.957951 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:43.958008 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:44.001448 1131323 cri.go:89] found id: ""
	I0328 01:05:44.001486 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.001497 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:44.001505 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:44.001573 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:44.039767 1131323 cri.go:89] found id: ""
	I0328 01:05:44.039801 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.039812 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:44.039818 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:44.039871 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:44.079441 1131323 cri.go:89] found id: ""
	I0328 01:05:44.079470 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.079480 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:44.079486 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:44.079541 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:44.116534 1131323 cri.go:89] found id: ""
	I0328 01:05:44.116584 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.116596 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:44.116604 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:44.116670 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:44.163335 1131323 cri.go:89] found id: ""
	I0328 01:05:44.163369 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.163381 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:44.163389 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:44.163457 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:44.201367 1131323 cri.go:89] found id: ""
	I0328 01:05:44.201403 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.201413 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:44.201424 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:44.201442 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:44.257485 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:44.257529 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:44.272489 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:44.272534 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:44.354442 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:44.354477 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:44.354498 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:44.436219 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:44.436262 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:42.044443 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:44.541648 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:42.872072 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:44.873552 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:44.913292 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:47.412489 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:46.982131 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:46.998022 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:46.998100 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:47.037167 1131323 cri.go:89] found id: ""
	I0328 01:05:47.037205 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.037217 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:47.037226 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:47.037295 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:47.076175 1131323 cri.go:89] found id: ""
	I0328 01:05:47.076213 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.076226 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:47.076235 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:47.076306 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:47.115193 1131323 cri.go:89] found id: ""
	I0328 01:05:47.115227 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.115237 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:47.115244 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:47.115297 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:47.154942 1131323 cri.go:89] found id: ""
	I0328 01:05:47.154976 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.154989 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:47.154998 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:47.155069 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:47.196571 1131323 cri.go:89] found id: ""
	I0328 01:05:47.196609 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.196622 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:47.196631 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:47.196707 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:47.237572 1131323 cri.go:89] found id: ""
	I0328 01:05:47.237616 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.237625 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:47.237633 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:47.237691 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:47.275208 1131323 cri.go:89] found id: ""
	I0328 01:05:47.275254 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.275265 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:47.275272 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:47.275329 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:47.313515 1131323 cri.go:89] found id: ""
	I0328 01:05:47.313555 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.313568 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:47.313582 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:47.313598 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:47.368993 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:47.369033 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:47.383063 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:47.383097 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:47.460239 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:47.460278 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:47.460298 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:47.538552 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:47.538594 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
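Every "describe nodes" attempt in this window fails the same way: the bundled kubectl cannot reach the API server on localhost:8443 because, as the crictl listings above show, no kube-apiserver container is running yet. A quick manual check from inside the node would look like the following (a sketch only; it assumes minikube's default API server port of 8443):

	# is anything listening on the apiserver port yet?
	sudo ss -ltnp | grep 8443
	# probe the health endpoint; this is refused until the apiserver container comes up
	curl -k https://localhost:8443/healthz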
	I0328 01:05:50.084960 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:50.101764 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:50.101859 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:50.141457 1131323 cri.go:89] found id: ""
	I0328 01:05:50.141488 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.141497 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:50.141504 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:50.141557 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:50.178184 1131323 cri.go:89] found id: ""
	I0328 01:05:50.178220 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.178254 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:50.178263 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:50.178358 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:50.217908 1131323 cri.go:89] found id: ""
	I0328 01:05:50.217946 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.217959 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:50.217966 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:50.218027 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:50.256029 1131323 cri.go:89] found id: ""
	I0328 01:05:50.256058 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.256067 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:50.256074 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:50.256130 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:50.295054 1131323 cri.go:89] found id: ""
	I0328 01:05:50.295087 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.295100 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:50.295106 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:50.295165 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:47.042338 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:49.542501 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:47.372867 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:49.872948 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:49.913873 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:52.412600 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:50.334695 1131323 cri.go:89] found id: ""
	I0328 01:05:50.336588 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.336605 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:50.336614 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:50.336697 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:50.375968 1131323 cri.go:89] found id: ""
	I0328 01:05:50.376003 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.376013 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:50.376021 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:50.376091 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:50.417146 1131323 cri.go:89] found id: ""
	I0328 01:05:50.417175 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.417184 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:50.417194 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:50.417207 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:50.474090 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:50.474131 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:50.489006 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:50.489040 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:50.566220 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:50.566268 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:50.566286 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:50.645593 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:50.645653 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:53.190872 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:53.205223 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:53.205320 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:53.242396 1131323 cri.go:89] found id: ""
	I0328 01:05:53.242433 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.242445 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:53.242455 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:53.242524 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:53.281237 1131323 cri.go:89] found id: ""
	I0328 01:05:53.281275 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.281288 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:53.281297 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:53.281357 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:53.321239 1131323 cri.go:89] found id: ""
	I0328 01:05:53.321268 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.321287 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:53.321296 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:53.321358 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:53.359240 1131323 cri.go:89] found id: ""
	I0328 01:05:53.359269 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.359278 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:53.359284 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:53.359337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:53.396973 1131323 cri.go:89] found id: ""
	I0328 01:05:53.397008 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.397021 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:53.397030 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:53.397100 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:53.438368 1131323 cri.go:89] found id: ""
	I0328 01:05:53.438400 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.438408 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:53.438415 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:53.438477 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:53.474679 1131323 cri.go:89] found id: ""
	I0328 01:05:53.474708 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.474732 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:53.474742 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:53.474799 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:53.512509 1131323 cri.go:89] found id: ""
	I0328 01:05:53.512547 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.512560 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:53.512579 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:53.512599 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:53.569536 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:53.569580 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:53.584977 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:53.585016 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:53.657865 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:53.657895 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:53.657908 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:53.733158 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:53.733203 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:52.041508 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:54.541663 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:52.373317 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:54.872090 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:54.913464 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:57.413256 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
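The interleaved pod_ready.go lines come from the other clusters being exercised in parallel (process ids 1131600, 1130827 and 1130949 in the log), each polling the Ready condition of its metrics-server pod, which stays False throughout this window. An equivalent manual check would be something like the line below (a sketch; the namespace and pod name are taken from the log, and the jsonpath expression is just one way to read the condition):

	kubectl -n kube-system get pod metrics-server-57f55c9bc5-w4ww4 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'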
	I0328 01:05:56.278693 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:56.291870 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:56.291949 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:56.332909 1131323 cri.go:89] found id: ""
	I0328 01:05:56.332943 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.332957 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:56.332965 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:56.333038 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:56.370608 1131323 cri.go:89] found id: ""
	I0328 01:05:56.370638 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.370649 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:56.370657 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:56.370721 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:56.408031 1131323 cri.go:89] found id: ""
	I0328 01:05:56.408068 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.408081 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:56.408100 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:56.408170 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:56.445057 1131323 cri.go:89] found id: ""
	I0328 01:05:56.445092 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.445105 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:56.445113 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:56.445177 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:56.486868 1131323 cri.go:89] found id: ""
	I0328 01:05:56.486898 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.486908 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:56.486914 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:56.486969 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:56.533594 1131323 cri.go:89] found id: ""
	I0328 01:05:56.533622 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.533632 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:56.533638 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:56.533702 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:56.569200 1131323 cri.go:89] found id: ""
	I0328 01:05:56.569237 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.569250 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:56.569258 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:56.569335 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:56.604919 1131323 cri.go:89] found id: ""
	I0328 01:05:56.604955 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.604968 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:56.604982 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:56.605011 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:56.654473 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:56.654513 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:56.671309 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:56.671339 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:56.739516 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:56.739543 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:56.739559 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:56.817445 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:56.817495 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:59.361711 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:59.375672 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:59.375750 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:59.414329 1131323 cri.go:89] found id: ""
	I0328 01:05:59.414360 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.414371 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:59.414379 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:59.414443 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:59.454813 1131323 cri.go:89] found id: ""
	I0328 01:05:59.454846 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.454855 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:59.454862 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:59.454917 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:59.492890 1131323 cri.go:89] found id: ""
	I0328 01:05:59.492924 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.492936 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:59.492946 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:59.493043 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:59.529412 1131323 cri.go:89] found id: ""
	I0328 01:05:59.529443 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.529454 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:59.529464 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:59.529521 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:59.568620 1131323 cri.go:89] found id: ""
	I0328 01:05:59.568655 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.568664 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:59.568671 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:59.568731 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:59.605826 1131323 cri.go:89] found id: ""
	I0328 01:05:59.605861 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.605874 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:59.605883 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:59.605955 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:59.645799 1131323 cri.go:89] found id: ""
	I0328 01:05:59.645833 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.645847 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:59.645856 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:59.645931 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:59.683866 1131323 cri.go:89] found id: ""
	I0328 01:05:59.683903 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.683916 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:59.683929 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:59.683953 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:59.726678 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:59.726711 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:59.779910 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:59.779954 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:59.795743 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:59.795774 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:59.875137 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:59.875162 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:59.875174 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:56.542345 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:58.542599 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:00.543094 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:57.372258 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:59.872483 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:59.912150 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:01.913694 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:02.455212 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:02.468850 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:02.468945 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:02.506347 1131323 cri.go:89] found id: ""
	I0328 01:06:02.506385 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.506397 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:02.506406 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:02.506484 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:02.546056 1131323 cri.go:89] found id: ""
	I0328 01:06:02.546085 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.546096 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:02.546103 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:02.546173 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:02.585343 1131323 cri.go:89] found id: ""
	I0328 01:06:02.585385 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.585398 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:02.585407 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:02.585563 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:02.625380 1131323 cri.go:89] found id: ""
	I0328 01:06:02.625414 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.625423 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:02.625429 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:02.625486 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:02.664653 1131323 cri.go:89] found id: ""
	I0328 01:06:02.664687 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.664701 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:02.664708 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:02.664764 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:02.704468 1131323 cri.go:89] found id: ""
	I0328 01:06:02.704498 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.704511 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:02.704519 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:02.704595 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:02.740969 1131323 cri.go:89] found id: ""
	I0328 01:06:02.740997 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.741007 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:02.741014 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:02.741102 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:02.782113 1131323 cri.go:89] found id: ""
	I0328 01:06:02.782150 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.782163 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:02.782185 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:02.782203 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:02.836804 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:02.836848 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:02.852266 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:02.852299 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:02.929441 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:02.929467 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:02.929484 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:03.008114 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:03.008156 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:03.041919 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:05.542209 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:02.372332 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:04.871689 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:04.413251 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:06.912348 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:05.554291 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:05.570208 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:05.570304 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:05.610887 1131323 cri.go:89] found id: ""
	I0328 01:06:05.610916 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.610926 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:05.610932 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:05.610991 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:05.651561 1131323 cri.go:89] found id: ""
	I0328 01:06:05.651600 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.651610 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:05.651616 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:05.651681 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:05.690801 1131323 cri.go:89] found id: ""
	I0328 01:06:05.690830 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.690843 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:05.690851 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:05.690920 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:05.729098 1131323 cri.go:89] found id: ""
	I0328 01:06:05.729136 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.729146 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:05.729153 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:05.729225 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:05.774461 1131323 cri.go:89] found id: ""
	I0328 01:06:05.774499 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.774520 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:05.774530 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:05.774602 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:05.812135 1131323 cri.go:89] found id: ""
	I0328 01:06:05.812166 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.812180 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:05.812188 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:05.812255 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:05.847744 1131323 cri.go:89] found id: ""
	I0328 01:06:05.847775 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.847786 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:05.847796 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:05.847863 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:05.885600 1131323 cri.go:89] found id: ""
	I0328 01:06:05.885641 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.885656 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:05.885669 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:05.885684 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:05.963837 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:05.963879 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:06.007342 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:06.007381 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:06.062798 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:06.062843 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:06.077547 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:06.077599 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:06.148373 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:08.648791 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:08.664082 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:08.664154 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:08.701746 1131323 cri.go:89] found id: ""
	I0328 01:06:08.701776 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.701789 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:08.701797 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:08.701855 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:08.739035 1131323 cri.go:89] found id: ""
	I0328 01:06:08.739066 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.739076 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:08.739083 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:08.739136 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:08.776128 1131323 cri.go:89] found id: ""
	I0328 01:06:08.776166 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.776180 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:08.776189 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:08.776255 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:08.816136 1131323 cri.go:89] found id: ""
	I0328 01:06:08.816172 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.816187 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:08.816196 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:08.816271 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:08.855675 1131323 cri.go:89] found id: ""
	I0328 01:06:08.855709 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.855722 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:08.855730 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:08.855802 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:08.893161 1131323 cri.go:89] found id: ""
	I0328 01:06:08.893198 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.893212 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:08.893221 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:08.893297 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:08.935498 1131323 cri.go:89] found id: ""
	I0328 01:06:08.935527 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.935540 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:08.935548 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:08.935622 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:08.971622 1131323 cri.go:89] found id: ""
	I0328 01:06:08.971657 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.971668 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:08.971679 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:08.971696 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:09.039975 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:09.040036 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:09.057877 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:09.057920 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:09.130093 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:09.130119 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:09.130135 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:09.217177 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:09.217228 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:08.040921 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:10.042895 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:06.872367 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:08.873187 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:08.914313 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:11.412330 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:11.762393 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:11.776356 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:11.776424 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:11.811982 1131323 cri.go:89] found id: ""
	I0328 01:06:11.812017 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.812030 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:11.812038 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:11.812103 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:11.849789 1131323 cri.go:89] found id: ""
	I0328 01:06:11.849817 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.849826 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:11.849833 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:11.849884 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:11.890455 1131323 cri.go:89] found id: ""
	I0328 01:06:11.890488 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.890497 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:11.890503 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:11.890559 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:11.929047 1131323 cri.go:89] found id: ""
	I0328 01:06:11.929093 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.929102 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:11.929108 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:11.929164 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:11.969536 1131323 cri.go:89] found id: ""
	I0328 01:06:11.969566 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.969576 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:11.969583 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:11.969641 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:12.008779 1131323 cri.go:89] found id: ""
	I0328 01:06:12.008811 1131323 logs.go:276] 0 containers: []
	W0328 01:06:12.008821 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:12.008828 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:12.008890 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:12.044061 1131323 cri.go:89] found id: ""
	I0328 01:06:12.044091 1131323 logs.go:276] 0 containers: []
	W0328 01:06:12.044104 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:12.044112 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:12.044176 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:12.082307 1131323 cri.go:89] found id: ""
	I0328 01:06:12.082336 1131323 logs.go:276] 0 containers: []
	W0328 01:06:12.082346 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:12.082357 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:12.082369 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:12.133044 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:12.133091 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:12.148584 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:12.148624 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:12.218799 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:12.218834 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:12.218852 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:12.295580 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:12.295623 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:14.842815 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:14.856385 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:14.856456 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:14.895351 1131323 cri.go:89] found id: ""
	I0328 01:06:14.895409 1131323 logs.go:276] 0 containers: []
	W0328 01:06:14.895418 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:14.895424 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:14.895476 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:14.930333 1131323 cri.go:89] found id: ""
	I0328 01:06:14.930366 1131323 logs.go:276] 0 containers: []
	W0328 01:06:14.930380 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:14.930389 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:14.930461 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:14.968701 1131323 cri.go:89] found id: ""
	I0328 01:06:14.968742 1131323 logs.go:276] 0 containers: []
	W0328 01:06:14.968754 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:14.968767 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:14.968867 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:15.004580 1131323 cri.go:89] found id: ""
	I0328 01:06:15.004613 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.004626 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:15.004634 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:15.004700 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:15.046702 1131323 cri.go:89] found id: ""
	I0328 01:06:15.046726 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.046736 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:15.046742 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:15.046795 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:15.088693 1131323 cri.go:89] found id: ""
	I0328 01:06:15.088725 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.088734 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:15.088741 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:15.088797 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:15.130293 1131323 cri.go:89] found id: ""
	I0328 01:06:15.130324 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.130333 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:15.130339 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:15.130394 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:15.172381 1131323 cri.go:89] found id: ""
	I0328 01:06:15.172408 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.172417 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:15.172427 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:15.172440 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:15.225631 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:15.225674 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:15.241251 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:15.241294 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:15.319701 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:15.319731 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:15.319747 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:12.540755 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:14.541618 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:11.371580 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:13.371640 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:15.373147 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:13.911792 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:15.912479 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:17.913926 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:15.406813 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:15.406853 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:17.993893 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:18.007755 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:18.007843 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:18.047750 1131323 cri.go:89] found id: ""
	I0328 01:06:18.047777 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.047786 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:18.047797 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:18.047855 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:18.088264 1131323 cri.go:89] found id: ""
	I0328 01:06:18.088291 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.088303 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:18.088311 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:18.088369 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:18.127485 1131323 cri.go:89] found id: ""
	I0328 01:06:18.127514 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.127523 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:18.127530 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:18.127581 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:18.167462 1131323 cri.go:89] found id: ""
	I0328 01:06:18.167496 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.167510 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:18.167516 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:18.167571 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:18.209536 1131323 cri.go:89] found id: ""
	I0328 01:06:18.209571 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.209583 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:18.209591 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:18.209662 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:18.247565 1131323 cri.go:89] found id: ""
	I0328 01:06:18.247601 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.247614 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:18.247623 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:18.247701 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:18.288123 1131323 cri.go:89] found id: ""
	I0328 01:06:18.288162 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.288172 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:18.288179 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:18.288242 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:18.328132 1131323 cri.go:89] found id: ""
	I0328 01:06:18.328161 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.328170 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:18.328181 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:18.328193 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:18.403245 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:18.403287 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:18.403305 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:18.483446 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:18.483500 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:18.527357 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:18.527392 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:18.588402 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:18.588463 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:16.542137 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:18.542554 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:20.546396 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:17.872147 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:20.373000 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:20.412369 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:22.412661 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:21.103566 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:21.117538 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:21.117616 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:21.174215 1131323 cri.go:89] found id: ""
	I0328 01:06:21.174270 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.174284 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:21.174293 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:21.174364 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:21.238666 1131323 cri.go:89] found id: ""
	I0328 01:06:21.238707 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.238722 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:21.238730 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:21.238803 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:21.303510 1131323 cri.go:89] found id: ""
	I0328 01:06:21.303543 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.303553 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:21.303559 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:21.303614 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:21.345823 1131323 cri.go:89] found id: ""
	I0328 01:06:21.345853 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.345862 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:21.345870 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:21.345940 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:21.386205 1131323 cri.go:89] found id: ""
	I0328 01:06:21.386248 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.386261 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:21.386269 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:21.386335 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:21.427424 1131323 cri.go:89] found id: ""
	I0328 01:06:21.427457 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.427470 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:21.427478 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:21.427546 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:21.465054 1131323 cri.go:89] found id: ""
	I0328 01:06:21.465087 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.465099 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:21.465107 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:21.465177 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:21.507197 1131323 cri.go:89] found id: ""
	I0328 01:06:21.507229 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.507238 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:21.507248 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:21.507263 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:21.586657 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:21.586709 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:21.633702 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:21.633739 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:21.688960 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:21.688999 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:21.704675 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:21.704714 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:21.781612 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
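The repeating block above is minikube's log-gathering loop for this profile: it probes CRI-O for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), finds none, and falls back to collecting kubelet, dmesg, CRI-O, and container-status output; the repeated "connection to the server localhost:8443 was refused" confirms the API server is not running. A sketch of the same checks run by hand on the node (for example via minikube ssh), mirroring only commands that appear verbatim in the log:

	sudo crictl ps -a --quiet --name=kube-apiserver    # empty output here means no apiserver container exists
	sudo journalctl -u kubelet -n 400                  # kubelet logs, as gathered above
	sudo journalctl -u crio -n 400                     # CRI-O logs, as gathered above
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig        # fails: connection to localhost:8443 refused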
	I0328 01:06:24.282521 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:24.297096 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:24.297185 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:24.338745 1131323 cri.go:89] found id: ""
	I0328 01:06:24.338780 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.338793 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:24.338802 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:24.338872 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:24.375499 1131323 cri.go:89] found id: ""
	I0328 01:06:24.375528 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.375540 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:24.375548 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:24.375616 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:24.410939 1131323 cri.go:89] found id: ""
	I0328 01:06:24.410966 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.410978 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:24.410986 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:24.411042 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:24.455316 1131323 cri.go:89] found id: ""
	I0328 01:06:24.455345 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.455354 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:24.455360 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:24.455427 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:24.493177 1131323 cri.go:89] found id: ""
	I0328 01:06:24.493206 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.493219 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:24.493228 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:24.493300 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:24.533612 1131323 cri.go:89] found id: ""
	I0328 01:06:24.533648 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.533659 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:24.533668 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:24.533743 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:24.573960 1131323 cri.go:89] found id: ""
	I0328 01:06:24.573998 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.574014 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:24.574020 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:24.574074 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:24.617282 1131323 cri.go:89] found id: ""
	I0328 01:06:24.617319 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.617333 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:24.617346 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:24.617364 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:24.691660 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:24.691688 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:24.691707 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:24.773138 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:24.773180 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:24.820408 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:24.820440 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:24.875901 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:24.875940 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:23.041030 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:25.041064 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:22.874513 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:25.378939 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:24.413732 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:26.912433 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:27.392663 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:27.407958 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:27.408046 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:27.446750 1131323 cri.go:89] found id: ""
	I0328 01:06:27.446782 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.446792 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:27.446799 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:27.446872 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:27.489199 1131323 cri.go:89] found id: ""
	I0328 01:06:27.489236 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.489249 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:27.489258 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:27.489316 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:27.525754 1131323 cri.go:89] found id: ""
	I0328 01:06:27.525787 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.525796 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:27.525803 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:27.525861 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:27.560817 1131323 cri.go:89] found id: ""
	I0328 01:06:27.560849 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.560858 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:27.560866 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:27.560930 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:27.597706 1131323 cri.go:89] found id: ""
	I0328 01:06:27.597736 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.597744 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:27.597750 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:27.597821 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:27.635170 1131323 cri.go:89] found id: ""
	I0328 01:06:27.635211 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.635223 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:27.635232 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:27.635299 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:27.672043 1131323 cri.go:89] found id: ""
	I0328 01:06:27.672079 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.672091 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:27.672099 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:27.672166 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:27.711401 1131323 cri.go:89] found id: ""
	I0328 01:06:27.711435 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.711448 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:27.711468 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:27.711488 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:27.755172 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:27.755211 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:27.807588 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:27.807632 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:27.823557 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:27.823589 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:27.905292 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:27.905316 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:27.905329 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:27.041105 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:29.041205 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:27.873797 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:30.374214 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:29.412378 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:31.413211 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:30.491565 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:30.505601 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:30.505667 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:30.541894 1131323 cri.go:89] found id: ""
	I0328 01:06:30.541929 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.541940 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:30.541949 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:30.542029 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:30.581484 1131323 cri.go:89] found id: ""
	I0328 01:06:30.581514 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.581532 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:30.581538 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:30.581613 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:30.624788 1131323 cri.go:89] found id: ""
	I0328 01:06:30.624830 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.624842 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:30.624850 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:30.624922 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:30.664373 1131323 cri.go:89] found id: ""
	I0328 01:06:30.664403 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.664413 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:30.664420 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:30.664489 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:30.702885 1131323 cri.go:89] found id: ""
	I0328 01:06:30.702917 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.702928 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:30.702934 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:30.703006 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:30.748170 1131323 cri.go:89] found id: ""
	I0328 01:06:30.748205 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.748217 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:30.748226 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:30.748316 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:30.785218 1131323 cri.go:89] found id: ""
	I0328 01:06:30.785255 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.785268 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:30.785276 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:30.785343 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:30.825529 1131323 cri.go:89] found id: ""
	I0328 01:06:30.825555 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.825565 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:30.825575 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:30.825589 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:30.881353 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:30.881391 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:30.896682 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:30.896718 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:30.973356 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:30.973386 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:30.973402 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:31.049014 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:31.049047 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:33.594365 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:33.609372 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:33.609460 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:33.648699 1131323 cri.go:89] found id: ""
	I0328 01:06:33.648728 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.648749 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:33.648757 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:33.648829 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:33.686707 1131323 cri.go:89] found id: ""
	I0328 01:06:33.686744 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.686758 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:33.686767 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:33.686832 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:33.723091 1131323 cri.go:89] found id: ""
	I0328 01:06:33.723121 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.723130 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:33.723136 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:33.723187 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:33.763439 1131323 cri.go:89] found id: ""
	I0328 01:06:33.763471 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.763481 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:33.763488 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:33.763544 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:33.812236 1131323 cri.go:89] found id: ""
	I0328 01:06:33.812271 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.812285 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:33.812294 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:33.812365 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:33.849421 1131323 cri.go:89] found id: ""
	I0328 01:06:33.849454 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.849465 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:33.849473 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:33.849528 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:33.888020 1131323 cri.go:89] found id: ""
	I0328 01:06:33.888051 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.888065 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:33.888078 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:33.888145 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:33.925952 1131323 cri.go:89] found id: ""
	I0328 01:06:33.925990 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.926003 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:33.926016 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:33.926034 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:33.976695 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:33.976734 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:33.991708 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:33.991752 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:34.068244 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:34.068276 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:34.068293 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:34.155843 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:34.155885 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:31.041375 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:33.041526 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:35.541169 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:32.872009 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:34.873043 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:33.913191 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:36.413213 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:36.697480 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:36.712322 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:36.712420 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:36.749541 1131323 cri.go:89] found id: ""
	I0328 01:06:36.749570 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.749579 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:36.749587 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:36.749655 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:36.788226 1131323 cri.go:89] found id: ""
	I0328 01:06:36.788255 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.788264 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:36.788270 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:36.788323 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:36.823824 1131323 cri.go:89] found id: ""
	I0328 01:06:36.823856 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.823866 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:36.823872 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:36.823927 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:36.869331 1131323 cri.go:89] found id: ""
	I0328 01:06:36.869362 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.869371 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:36.869378 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:36.869473 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:36.907918 1131323 cri.go:89] found id: ""
	I0328 01:06:36.907950 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.907960 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:36.907966 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:36.908028 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:36.947708 1131323 cri.go:89] found id: ""
	I0328 01:06:36.947738 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.947749 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:36.947757 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:36.947824 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:36.986200 1131323 cri.go:89] found id: ""
	I0328 01:06:36.986251 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.986266 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:36.986275 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:36.986350 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:37.026670 1131323 cri.go:89] found id: ""
	I0328 01:06:37.026698 1131323 logs.go:276] 0 containers: []
	W0328 01:06:37.026708 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:37.026718 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:37.026732 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:37.079891 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:37.079933 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:37.094347 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:37.094378 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:37.168653 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:37.168681 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:37.168695 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:37.247909 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:37.247949 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:39.791285 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:39.807921 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:39.808000 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:39.851460 1131323 cri.go:89] found id: ""
	I0328 01:06:39.851499 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.851512 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:39.851520 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:39.851593 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:39.889506 1131323 cri.go:89] found id: ""
	I0328 01:06:39.889541 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.889554 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:39.889564 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:39.889632 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:39.930291 1131323 cri.go:89] found id: ""
	I0328 01:06:39.930321 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.930331 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:39.930337 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:39.930400 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:39.965121 1131323 cri.go:89] found id: ""
	I0328 01:06:39.965160 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.965174 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:39.965183 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:39.965252 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:40.003217 1131323 cri.go:89] found id: ""
	I0328 01:06:40.003248 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.003258 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:40.003264 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:40.003319 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:40.042702 1131323 cri.go:89] found id: ""
	I0328 01:06:40.042737 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.042749 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:40.042759 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:40.042826 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:40.079733 1131323 cri.go:89] found id: ""
	I0328 01:06:40.079769 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.079780 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:40.079788 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:40.079852 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:40.117066 1131323 cri.go:89] found id: ""
	I0328 01:06:40.117098 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.117107 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:40.117117 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:40.117130 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:40.158589 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:40.158623 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:40.210997 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:40.211049 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:40.225419 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:40.225453 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:40.305356 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:40.305385 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:40.305401 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:37.541534 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:39.541905 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:36.874220 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:39.373763 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:38.413719 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:40.912939 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:42.913528 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:42.896394 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:42.912285 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:42.912355 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:42.949381 1131323 cri.go:89] found id: ""
	I0328 01:06:42.949411 1131323 logs.go:276] 0 containers: []
	W0328 01:06:42.949420 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:42.949427 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:42.949496 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:42.985325 1131323 cri.go:89] found id: ""
	I0328 01:06:42.985358 1131323 logs.go:276] 0 containers: []
	W0328 01:06:42.985371 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:42.985388 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:42.985456 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:43.023570 1131323 cri.go:89] found id: ""
	I0328 01:06:43.023616 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.023630 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:43.023638 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:43.023714 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:43.062995 1131323 cri.go:89] found id: ""
	I0328 01:06:43.063025 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.063036 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:43.063042 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:43.063111 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:43.101666 1131323 cri.go:89] found id: ""
	I0328 01:06:43.101704 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.101713 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:43.101720 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:43.101789 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:43.150713 1131323 cri.go:89] found id: ""
	I0328 01:06:43.150745 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.150757 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:43.150765 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:43.150830 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:43.193449 1131323 cri.go:89] found id: ""
	I0328 01:06:43.193479 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.193487 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:43.193495 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:43.193559 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:43.237641 1131323 cri.go:89] found id: ""
	I0328 01:06:43.237673 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.237682 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:43.237698 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:43.237714 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:43.287282 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:43.287320 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:43.303307 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:43.303343 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:43.383597 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:43.383619 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:43.383632 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:43.467874 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:43.467914 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:42.041406 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:44.540550 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:41.874286 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:44.372393 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:45.410973 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:47.412852 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:46.011081 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:46.025731 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:46.025824 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:46.064336 1131323 cri.go:89] found id: ""
	I0328 01:06:46.064371 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.064385 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:46.064394 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:46.064451 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:46.104493 1131323 cri.go:89] found id: ""
	I0328 01:06:46.104530 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.104550 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:46.104559 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:46.104636 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:46.147546 1131323 cri.go:89] found id: ""
	I0328 01:06:46.147582 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.147594 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:46.147602 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:46.147656 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:46.186162 1131323 cri.go:89] found id: ""
	I0328 01:06:46.186197 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.186207 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:46.186213 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:46.186296 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:46.230412 1131323 cri.go:89] found id: ""
	I0328 01:06:46.230450 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.230464 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:46.230473 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:46.230552 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:46.266000 1131323 cri.go:89] found id: ""
	I0328 01:06:46.266037 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.266050 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:46.266059 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:46.266126 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:46.301031 1131323 cri.go:89] found id: ""
	I0328 01:06:46.301065 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.301077 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:46.301084 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:46.301155 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:46.339222 1131323 cri.go:89] found id: ""
	I0328 01:06:46.339248 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.339258 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:46.339271 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:46.339290 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:46.352558 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:46.352595 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:46.427283 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:46.427308 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:46.427325 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:46.512134 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:46.512178 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:46.558276 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:46.558307 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:49.113455 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:49.127554 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:49.127645 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:49.169380 1131323 cri.go:89] found id: ""
	I0328 01:06:49.169421 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.169435 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:49.169444 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:49.169511 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:49.204540 1131323 cri.go:89] found id: ""
	I0328 01:06:49.204568 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.204579 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:49.204596 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:49.204664 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:49.243074 1131323 cri.go:89] found id: ""
	I0328 01:06:49.243102 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.243112 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:49.243119 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:49.243170 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:49.281264 1131323 cri.go:89] found id: ""
	I0328 01:06:49.281301 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.281314 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:49.281322 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:49.281391 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:49.320473 1131323 cri.go:89] found id: ""
	I0328 01:06:49.320505 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.320514 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:49.320521 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:49.320592 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:49.357715 1131323 cri.go:89] found id: ""
	I0328 01:06:49.357749 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.357759 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:49.357766 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:49.357823 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:49.398427 1131323 cri.go:89] found id: ""
	I0328 01:06:49.398464 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.398477 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:49.398498 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:49.398576 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:49.439921 1131323 cri.go:89] found id: ""
	I0328 01:06:49.439956 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.439969 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:49.439982 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:49.440003 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:49.557260 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:49.557289 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:49.557312 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:49.640105 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:49.640169 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:49.683153 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:49.683185 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:49.737420 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:49.737463 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:46.541377 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:49.041761 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:46.374869 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:48.875897 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:49.912535 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:51.912893 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:52.253208 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:52.268572 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:52.268649 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:52.305136 1131323 cri.go:89] found id: ""
	I0328 01:06:52.305180 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.305193 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:52.305202 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:52.305273 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:52.344774 1131323 cri.go:89] found id: ""
	I0328 01:06:52.344806 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.344816 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:52.344823 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:52.344885 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:52.382127 1131323 cri.go:89] found id: ""
	I0328 01:06:52.382174 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.382185 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:52.382200 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:52.382280 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:52.421340 1131323 cri.go:89] found id: ""
	I0328 01:06:52.421368 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.421377 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:52.421383 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:52.421433 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:52.460046 1131323 cri.go:89] found id: ""
	I0328 01:06:52.460084 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.460100 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:52.460107 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:52.460164 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:52.500067 1131323 cri.go:89] found id: ""
	I0328 01:06:52.500094 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.500102 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:52.500109 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:52.500171 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:52.537614 1131323 cri.go:89] found id: ""
	I0328 01:06:52.537646 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.537671 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:52.537680 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:52.537745 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:52.577362 1131323 cri.go:89] found id: ""
	I0328 01:06:52.577392 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.577402 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:52.577417 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:52.577434 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:52.633638 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:52.633689 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:52.650762 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:52.650796 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:52.729436 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:52.729470 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:52.729484 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:52.818193 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:52.818248 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:51.540541 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:53.541340 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:55.542165 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:51.376916 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:53.872313 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:55.873335 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:54.411986 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:56.412892 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:55.362950 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:55.378461 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:55.378577 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:55.419968 1131323 cri.go:89] found id: ""
	I0328 01:06:55.419995 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.420005 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:55.420010 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:55.420072 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:55.464308 1131323 cri.go:89] found id: ""
	I0328 01:06:55.464341 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.464350 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:55.464357 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:55.464421 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:55.523059 1131323 cri.go:89] found id: ""
	I0328 01:06:55.523092 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.523106 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:55.523114 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:55.523186 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:55.570957 1131323 cri.go:89] found id: ""
	I0328 01:06:55.570990 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.571004 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:55.571013 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:55.571077 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:55.606712 1131323 cri.go:89] found id: ""
	I0328 01:06:55.606739 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.606749 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:55.606755 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:55.606817 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:55.646445 1131323 cri.go:89] found id: ""
	I0328 01:06:55.646477 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.646486 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:55.646493 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:55.646548 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:55.685176 1131323 cri.go:89] found id: ""
	I0328 01:06:55.685208 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.685217 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:55.685225 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:55.685289 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:55.722948 1131323 cri.go:89] found id: ""
	I0328 01:06:55.722984 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.722995 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:55.723006 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:55.723022 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:55.797332 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:55.797368 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:55.797385 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:55.877648 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:55.877688 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:55.918966 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:55.918997 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:55.971226 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:55.971272 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:58.488464 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:58.504999 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:58.505088 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:58.549290 1131323 cri.go:89] found id: ""
	I0328 01:06:58.549325 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.549338 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:58.549347 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:58.549414 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:58.589222 1131323 cri.go:89] found id: ""
	I0328 01:06:58.589252 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.589261 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:58.589271 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:58.589337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:58.626470 1131323 cri.go:89] found id: ""
	I0328 01:06:58.626499 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.626508 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:58.626514 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:58.626578 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:58.671634 1131323 cri.go:89] found id: ""
	I0328 01:06:58.671663 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.671674 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:58.671683 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:58.671744 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:58.707335 1131323 cri.go:89] found id: ""
	I0328 01:06:58.707370 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.707381 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:58.707390 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:58.707459 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:58.745635 1131323 cri.go:89] found id: ""
	I0328 01:06:58.745666 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.745679 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:58.745687 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:58.745752 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:58.792172 1131323 cri.go:89] found id: ""
	I0328 01:06:58.792205 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.792216 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:58.792225 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:58.792287 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:58.840027 1131323 cri.go:89] found id: ""
	I0328 01:06:58.840063 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.840075 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:58.840089 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:58.840108 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:58.921964 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:58.921988 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:58.922003 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:59.016935 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:59.016980 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:59.065747 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:59.065788 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:59.119189 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:59.119231 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:58.042362 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:00.544351 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:57.875649 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:00.371953 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:58.406154 1130949 pod_ready.go:81] duration metric: took 4m0.000981669s for pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace to be "Ready" ...
	E0328 01:06:58.406192 1130949 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0328 01:06:58.406218 1130949 pod_ready.go:38] duration metric: took 4m11.713667334s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:06:58.406275 1130949 kubeadm.go:591] duration metric: took 4m19.018883002s to restartPrimaryControlPlane
	W0328 01:06:58.406372 1130949 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:06:58.406432 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:07:01.637081 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:07:01.652557 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:07:01.652634 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:07:01.691795 1131323 cri.go:89] found id: ""
	I0328 01:07:01.691832 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.691846 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:07:01.691854 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:07:01.691927 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:07:01.732815 1131323 cri.go:89] found id: ""
	I0328 01:07:01.732850 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.732861 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:07:01.732868 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:07:01.732938 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:07:01.776370 1131323 cri.go:89] found id: ""
	I0328 01:07:01.776408 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.776422 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:07:01.776431 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:07:01.776501 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:07:01.821260 1131323 cri.go:89] found id: ""
	I0328 01:07:01.821290 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.821301 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:07:01.821308 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:07:01.821377 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:07:01.860666 1131323 cri.go:89] found id: ""
	I0328 01:07:01.860696 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.860708 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:07:01.860719 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:07:01.860787 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:07:01.898255 1131323 cri.go:89] found id: ""
	I0328 01:07:01.898291 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.898304 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:07:01.898314 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:07:01.898383 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:07:01.937770 1131323 cri.go:89] found id: ""
	I0328 01:07:01.937809 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.937822 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:07:01.937830 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:07:01.937901 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:07:01.976946 1131323 cri.go:89] found id: ""
	I0328 01:07:01.976981 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.976994 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:07:01.977008 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:07:01.977027 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:07:02.062804 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:07:02.062845 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:07:02.110750 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:07:02.110783 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:07:02.179633 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:07:02.179677 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:07:02.203131 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:07:02.203181 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:07:02.303281 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:07:04.804238 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:07:04.819654 1131323 kubeadm.go:591] duration metric: took 4m2.527630194s to restartPrimaryControlPlane
	W0328 01:07:04.819747 1131323 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:07:04.819787 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
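	At this point the v1.20.0 cluster has been probed for just over four minutes (the "took 4m2.5…s to restartPrimaryControlPlane" line above) without an apiserver ever coming up, so minikube stops trying to restart the existing control plane and falls back to a full "kubeadm reset". The probe it repeats throughout this log can be reproduced by hand; every command below is copied from the ssh_runner lines above and is shown only as an illustrative sketch, not as part of the test suite:

	    # run on the minikube node; each command appears verbatim in the log above
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'        # any apiserver process at all?
	    sudo crictl ps -a --quiet --name=kube-apiserver     # any apiserver container known to CRI-O?
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig         # fails with "connection ... refused" while the apiserver is down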
	I0328 01:07:03.041692 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:05.540478 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:02.372472 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:04.376413 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:07.322821 1131323 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.50300166s)
	I0328 01:07:07.322918 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:07:07.338692 1131323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:07:07.349812 1131323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:07:07.361566 1131323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:07:07.361597 1131323 kubeadm.go:156] found existing configuration files:
	
	I0328 01:07:07.361667 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:07:07.372926 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:07:07.373008 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:07:07.383770 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:07:07.394260 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:07:07.394332 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:07:07.405874 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:07:07.417177 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:07:07.417254 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:07:07.428589 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:07:07.438788 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:07:07.438845 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
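	Before re-running kubeadm init, minikube checks each kubeconfig left under /etc/kubernetes and removes any that does not reference the expected control-plane endpoint; in this run the files are already gone, so every grep exits with status 2 and the rm is a no-op. Condensed as a loop (grep pattern and file names taken from the log above; the loop form itself is just an illustrative condensation of the four check/remove pairs):

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done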
	I0328 01:07:07.449649 1131323 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:07:07.533886 1131323 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0328 01:07:07.533989 1131323 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:07:07.693599 1131323 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:07:07.693736 1131323 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:07:07.693852 1131323 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:07:07.910557 1131323 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:07:07.912634 1131323 out.go:204]   - Generating certificates and keys ...
	I0328 01:07:07.912743 1131323 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:07:07.912855 1131323 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:07:07.912984 1131323 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:07:07.913098 1131323 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:07:07.913212 1131323 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:07:07.913298 1131323 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:07:07.913384 1131323 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:07:07.913569 1131323 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:07:07.913947 1131323 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:07:07.914429 1131323 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:07:07.914649 1131323 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:07:07.914728 1131323 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:07:08.225778 1131323 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:07:08.353927 1131323 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:07:08.631240 1131323 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:07:08.824445 1131323 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:07:08.840240 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:07:08.841200 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:07:08.841315 1131323 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:07:08.997129 1131323 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:07:08.999073 1131323 out.go:204]   - Booting up control plane ...
	I0328 01:07:08.999224 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:07:09.014811 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:07:09.015898 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:07:09.016727 1131323 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:07:09.019426 1131323 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:07:07.541363 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:10.041094 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:06.874606 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:09.372537 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:12.540137 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:14.541608 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:11.372643 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:13.873029 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:16.541814 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:19.047225 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:16.372556 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:18.871954 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:20.872047 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:21.542880 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:24.041786 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:22.872845 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:24.873747 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:26.042186 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:28.541303 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:30.540610 1130949 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.134147754s)
	I0328 01:07:30.540688 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:07:30.558971 1130949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:07:30.570331 1130949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:07:30.581192 1130949 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:07:30.581246 1130949 kubeadm.go:156] found existing configuration files:
	
	I0328 01:07:30.581306 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:07:30.592337 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:07:30.592410 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:07:30.603288 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:07:30.613714 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:07:30.613776 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:07:30.624281 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:07:30.634569 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:07:30.634644 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:07:30.647279 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:07:30.658554 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:07:30.658646 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:07:30.670364 1130949 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:07:30.730349 1130949 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0328 01:07:30.730414 1130949 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:07:30.887056 1130949 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:07:30.887234 1130949 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:07:30.887385 1130949 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:07:31.104288 1130949 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:07:27.373135 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:29.373436 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:31.106496 1130949 out.go:204]   - Generating certificates and keys ...
	I0328 01:07:31.106628 1130949 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:07:31.106697 1130949 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:07:31.106765 1130949 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:07:31.106826 1130949 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:07:31.106892 1130949 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:07:31.107528 1130949 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:07:31.108302 1130949 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:07:31.112246 1130949 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:07:31.112762 1130949 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:07:31.113711 1130949 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:07:31.115230 1130949 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:07:31.115284 1130949 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:07:31.297632 1130949 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:07:32.446275 1130949 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:07:32.565869 1130949 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:07:32.641288 1130949 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:07:32.817229 1130949 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:07:32.817814 1130949 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:07:32.820366 1130949 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:07:32.822328 1130949 out.go:204]   - Booting up control plane ...
	I0328 01:07:32.822467 1130949 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:07:32.822550 1130949 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:07:32.822990 1130949 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:07:32.846800 1130949 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:07:32.847829 1130949 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:07:32.847902 1130949 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:07:31.044103 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:33.542106 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:35.542875 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:31.873591 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:33.875737 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:35.881819 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:32.992001 1130949 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:07:38.997010 1130949 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003888 seconds
	I0328 01:07:39.012971 1130949 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 01:07:39.036328 1130949 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 01:07:39.569806 1130949 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 01:07:39.570135 1130949 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-808809 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 01:07:40.085165 1130949 kubeadm.go:309] [bootstrap-token] Using token: 4zk5zi.uttj4zihedk5oj6k
	I0328 01:07:40.086719 1130949 out.go:204]   - Configuring RBAC rules ...
	I0328 01:07:40.086873 1130949 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 01:07:40.096373 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 01:07:40.106484 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 01:07:40.110525 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 01:07:40.120015 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 01:07:40.129060 1130949 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 01:07:40.141167 1130949 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 01:07:40.415429 1130949 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 01:07:40.507275 1130949 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 01:07:40.507333 1130949 kubeadm.go:309] 
	I0328 01:07:40.507551 1130949 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 01:07:40.507617 1130949 kubeadm.go:309] 
	I0328 01:07:40.507860 1130949 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 01:07:40.507891 1130949 kubeadm.go:309] 
	I0328 01:07:40.507947 1130949 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 01:07:40.508057 1130949 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 01:07:40.508140 1130949 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 01:07:40.508157 1130949 kubeadm.go:309] 
	I0328 01:07:40.508250 1130949 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 01:07:40.508264 1130949 kubeadm.go:309] 
	I0328 01:07:40.508329 1130949 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 01:07:40.508344 1130949 kubeadm.go:309] 
	I0328 01:07:40.508421 1130949 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 01:07:40.508539 1130949 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 01:07:40.508626 1130949 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 01:07:40.508632 1130949 kubeadm.go:309] 
	I0328 01:07:40.508804 1130949 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 01:07:40.508970 1130949 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 01:07:40.508990 1130949 kubeadm.go:309] 
	I0328 01:07:40.509155 1130949 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4zk5zi.uttj4zihedk5oj6k \
	I0328 01:07:40.509474 1130949 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 \
	I0328 01:07:40.509514 1130949 kubeadm.go:309] 	--control-plane 
	I0328 01:07:40.509524 1130949 kubeadm.go:309] 
	I0328 01:07:40.509641 1130949 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 01:07:40.509655 1130949 kubeadm.go:309] 
	I0328 01:07:40.509767 1130949 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4zk5zi.uttj4zihedk5oj6k \
	I0328 01:07:40.509932 1130949 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 
	I0328 01:07:40.510139 1130949 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:07:40.510157 1130949 cni.go:84] Creating CNI manager for ""
	I0328 01:07:40.510166 1130949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:07:40.512099 1130949 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:07:38.041290 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:40.041569 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:38.373789 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:40.374369 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:40.513314 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:07:40.563257 1130949 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
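	The 457-byte conflist pushed to /etc/cni/net.d above follows the standard CNI "conflist" format for the bridge plugin. The sketch below only shows the general shape of such a file; the field values are generic illustrations of the bridge/host-local/portmap plugins, not the exact file minikube generates, and it is wrapped in a heredoc purely to show how such a file would be written:

	    # generic bridge-plugin conflist of the kind written above; values are
	    # illustrative assumptions, NOT the exact 457-byte file minikube generates
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
	          "ipMasq": true, "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF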
	I0328 01:07:40.627024 1130949 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 01:07:40.627097 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:40.627137 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-808809 minikube.k8s.io/updated_at=2024_03_28T01_07_40_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=embed-certs-808809 minikube.k8s.io/primary=true
	I0328 01:07:40.928916 1130949 ops.go:34] apiserver oom_adj: -16
	I0328 01:07:40.929138 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:41.429797 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:41.930103 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:42.429366 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:42.540932 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:44.035055 1131600 pod_ready.go:81] duration metric: took 4m0.000860608s for pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace to be "Ready" ...
	E0328 01:07:44.035094 1131600 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0328 01:07:44.035124 1131600 pod_ready.go:38] duration metric: took 4m14.608998431s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:07:44.035180 1131600 kubeadm.go:591] duration metric: took 4m23.470228903s to restartPrimaryControlPlane
	W0328 01:07:44.035292 1131600 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:07:44.035344 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
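	The 4m0s "Ready" timeout above is expected in this test flavour: the metrics-server deployment is pointed at an unreachable registry (note the "Using image fake.domain/registry.k8s.io/echoserver:1.4" line further down in this log), so the pod can never pull its image and never reports Ready. When reproducing locally, the pending reason can be inspected with plain kubectl reads (a sketch, assuming the usual k8s-app=metrics-server label):
	  kubectl -n kube-system get pods -l k8s-app=metrics-server
	  kubectl -n kube-system describe pod -l k8s-app=metrics-server   # look for image pull events in the output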
	I0328 01:07:42.375179 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:44.876120 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:42.929464 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:43.429369 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:43.929241 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:44.429904 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:44.930251 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:45.429816 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:45.930177 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:46.429416 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:46.929152 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:47.429708 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:49.021732 1131323 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0328 01:07:49.021890 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:07:49.022195 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
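	These kubelet-check messages come from a kubeadm init that is still waiting for the kubelet to answer on its health port (10248). On a live node the usual next diagnostic steps are the ones kubeadm itself suggests (standard systemd commands, not part of this log):
	  systemctl status kubelet
	  journalctl -u kubelet --no-pager | tail -n 50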
	I0328 01:07:47.373358 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:49.872482 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:47.929139 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:48.429732 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:48.930207 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:49.429230 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:49.929298 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:50.429919 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:50.929364 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:51.429403 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:51.929356 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:52.429410 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:52.929894 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:53.043365 1130949 kubeadm.go:1107] duration metric: took 12.416334145s to wait for elevateKubeSystemPrivileges
	W0328 01:07:53.043410 1130949 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 01:07:53.043419 1130949 kubeadm.go:393] duration metric: took 5m13.709259014s to StartCluster
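	The burst of repeated "kubectl get sa default" runs above is minikube polling until the default service account exists, the step reported in the elevateKubeSystemPrivileges duration line; it behaves roughly like the following wait loop (a sketch under that assumption, not the exact code path):
	  until sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done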
	I0328 01:07:53.043445 1130949 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:07:53.043560 1130949 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:07:53.045798 1130949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:07:53.046158 1130949 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 01:07:53.047867 1130949 out.go:177] * Verifying Kubernetes components...
	I0328 01:07:53.046201 1130949 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 01:07:53.046412 1130949 config.go:182] Loaded profile config "embed-certs-808809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:07:53.049163 1130949 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-808809"
	I0328 01:07:53.049175 1130949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:07:53.049195 1130949 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-808809"
	W0328 01:07:53.049204 1130949 addons.go:243] addon storage-provisioner should already be in state true
	I0328 01:07:53.049230 1130949 host.go:66] Checking if "embed-certs-808809" exists ...
	I0328 01:07:53.049205 1130949 addons.go:69] Setting default-storageclass=true in profile "embed-certs-808809"
	I0328 01:07:53.049250 1130949 addons.go:69] Setting metrics-server=true in profile "embed-certs-808809"
	I0328 01:07:53.049271 1130949 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-808809"
	I0328 01:07:53.049309 1130949 addons.go:234] Setting addon metrics-server=true in "embed-certs-808809"
	W0328 01:07:53.049327 1130949 addons.go:243] addon metrics-server should already be in state true
	I0328 01:07:53.049371 1130949 host.go:66] Checking if "embed-certs-808809" exists ...
	I0328 01:07:53.049530 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.049569 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.049696 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.049729 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.049795 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.049838 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.067042 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36143
	I0328 01:07:53.067078 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33653
	I0328 01:07:53.067536 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.067599 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.068156 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.068184 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.068289 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.068315 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.068583 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.068669 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.069095 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.069121 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.069245 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.069276 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.069991 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43737
	I0328 01:07:53.070509 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.071078 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.071103 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.071480 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.071705 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.075617 1130949 addons.go:234] Setting addon default-storageclass=true in "embed-certs-808809"
	W0328 01:07:53.075659 1130949 addons.go:243] addon default-storageclass should already be in state true
	I0328 01:07:53.075703 1130949 host.go:66] Checking if "embed-certs-808809" exists ...
	I0328 01:07:53.075982 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.076011 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.085991 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I0328 01:07:53.086508 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.086724 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36033
	I0328 01:07:53.087105 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.087122 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.087158 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.087646 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.087667 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.087706 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.087922 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.088031 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.088225 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.089941 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:07:53.090168 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:07:53.091945 1130949 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:07:53.093023 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41613
	I0328 01:07:53.093537 1130949 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:07:53.093553 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 01:07:53.093563 1130949 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 01:07:53.095147 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 01:07:53.095165 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 01:07:53.093574 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:07:53.095185 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:07:53.093939 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.096301 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.096322 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.096662 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.097251 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.097306 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.098907 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.099014 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.099513 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:07:53.099546 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.099996 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:07:53.100126 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:07:53.100177 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:07:53.100187 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.100287 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:07:53.100392 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:07:53.100470 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:07:53.100576 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:07:53.100709 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:07:53.100796 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:07:53.114056 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41121
	I0328 01:07:53.114680 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.115279 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.115313 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.115721 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.116061 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.118022 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:07:53.118348 1130949 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 01:07:53.118370 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 01:07:53.118391 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:07:53.121337 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.121699 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:07:53.121728 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.121906 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:07:53.122084 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:07:53.122266 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:07:53.122414 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:07:53.242121 1130949 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:07:53.267118 1130949 node_ready.go:35] waiting up to 6m0s for node "embed-certs-808809" to be "Ready" ...
	I0328 01:07:53.276640 1130949 node_ready.go:49] node "embed-certs-808809" has status "Ready":"True"
	I0328 01:07:53.276670 1130949 node_ready.go:38] duration metric: took 9.513599ms for node "embed-certs-808809" to be "Ready" ...
	I0328 01:07:53.276683 1130949 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:07:53.283091 1130949 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-2rn6k" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:53.325201 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 01:07:53.325234 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 01:07:53.341335 1130949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 01:07:53.361084 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 01:07:53.361109 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 01:07:53.393089 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:07:53.393116 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 01:07:53.419245 1130949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:07:53.445663 1130949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:07:53.515515 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:53.515555 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:53.515871 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:53.515891 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:53.515901 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:53.515910 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:53.516173 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:53.516253 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:53.516212 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:53.527854 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:53.527882 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:53.528152 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:53.528173 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:53.528220 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.159164 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159192 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159264 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159292 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159523 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.159597 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.159619 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.159637 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.159648 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159658 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159660 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.159667 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.159688 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159696 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159981 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.160037 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.160056 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.160062 1130949 addons.go:470] Verifying addon metrics-server=true in "embed-certs-808809"
	I0328 01:07:54.160088 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.160090 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.160106 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.162879 1130949 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0328 01:07:54.022449 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:07:54.022704 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:07:52.372314 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:54.372913 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:54.164263 1130949 addons.go:505] duration metric: took 1.11806212s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
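	With the three addons reported enabled, their objects can be checked from the host once kubectl points at this profile (plain kubectl reads, shown as a sketch; the pod and deployment names match the ones listed later in this log):
	  kubectl -n kube-system get deploy metrics-server
	  kubectl -n kube-system get pod storage-provisioner
	  kubectl get storageclass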
	I0328 01:07:55.294728 1130949 pod_ready.go:102] pod "coredns-76f75df574-2rn6k" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:55.790690 1130949 pod_ready.go:92] pod "coredns-76f75df574-2rn6k" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.790717 1130949 pod_ready.go:81] duration metric: took 2.50759161s for pod "coredns-76f75df574-2rn6k" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.790726 1130949 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-pgcdh" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.796249 1130949 pod_ready.go:92] pod "coredns-76f75df574-pgcdh" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.796279 1130949 pod_ready.go:81] duration metric: took 5.54233ms for pod "coredns-76f75df574-pgcdh" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.796291 1130949 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.801226 1130949 pod_ready.go:92] pod "etcd-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.801254 1130949 pod_ready.go:81] duration metric: took 4.956106ms for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.801263 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.814571 1130949 pod_ready.go:92] pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.814599 1130949 pod_ready.go:81] duration metric: took 13.328662ms for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.814613 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.825995 1130949 pod_ready.go:92] pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.826022 1130949 pod_ready.go:81] duration metric: took 11.401096ms for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.826035 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tjbhs" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.188116 1130949 pod_ready.go:92] pod "kube-proxy-tjbhs" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:56.188147 1130949 pod_ready.go:81] duration metric: took 362.103962ms for pod "kube-proxy-tjbhs" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.188161 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.588294 1130949 pod_ready.go:92] pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:56.588334 1130949 pod_ready.go:81] duration metric: took 400.16517ms for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.588347 1130949 pod_ready.go:38] duration metric: took 3.311651338s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:07:56.588369 1130949 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:07:56.588445 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:07:56.606404 1130949 api_server.go:72] duration metric: took 3.560197315s to wait for apiserver process to appear ...
	I0328 01:07:56.606435 1130949 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:07:56.606460 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:07:56.612218 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 200:
	ok
	I0328 01:07:56.613459 1130949 api_server.go:141] control plane version: v1.29.3
	I0328 01:07:56.613481 1130949 api_server.go:131] duration metric: took 7.039378ms to wait for apiserver health ...
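	The healthz probe logged here is a plain HTTPS GET against the apiserver and can be reproduced from the host with curl (a sketch; -k skips verification of the cluster CA):
	  curl -k https://192.168.72.210:8443/healthz
	  # expected output: ok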
	I0328 01:07:56.613490 1130949 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:07:56.793192 1130949 system_pods.go:59] 9 kube-system pods found
	I0328 01:07:56.793227 1130949 system_pods.go:61] "coredns-76f75df574-2rn6k" [2a77c778-dd83-4e2e-b45a-ca16e3922b45] Running
	I0328 01:07:56.793232 1130949 system_pods.go:61] "coredns-76f75df574-pgcdh" [52452b24-490e-4999-b700-198c6f9b2fa1] Running
	I0328 01:07:56.793236 1130949 system_pods.go:61] "etcd-embed-certs-808809" [cba526ab-c8ca-4b90-abb8-533d1bf6cb4f] Running
	I0328 01:07:56.793239 1130949 system_pods.go:61] "kube-apiserver-embed-certs-808809" [02934ac6-1e07-4cbe-ba2f-4149e59d6044] Running
	I0328 01:07:56.793243 1130949 system_pods.go:61] "kube-controller-manager-embed-certs-808809" [b3350ff9-7abb-407e-939c-fcde61186c17] Running
	I0328 01:07:56.793246 1130949 system_pods.go:61] "kube-proxy-tjbhs" [cdb30ca1-5165-4e24-888a-df79af7987d0] Running
	I0328 01:07:56.793249 1130949 system_pods.go:61] "kube-scheduler-embed-certs-808809" [5b52a22d-6ffb-468b-b7b9-8f89b0dee3b8] Running
	I0328 01:07:56.793255 1130949 system_pods.go:61] "metrics-server-57f55c9bc5-bqbfl" [8434fd7d-838b-4cf2-96a3-e4d613633871] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:07:56.793260 1130949 system_pods.go:61] "storage-provisioner" [20c1951e-7da8-4025-bbcf-2da60f87f3ab] Running
	I0328 01:07:56.793268 1130949 system_pods.go:74] duration metric: took 179.77213ms to wait for pod list to return data ...
	I0328 01:07:56.793275 1130949 default_sa.go:34] waiting for default service account to be created ...
	I0328 01:07:56.988234 1130949 default_sa.go:45] found service account: "default"
	I0328 01:07:56.988274 1130949 default_sa.go:55] duration metric: took 194.984089ms for default service account to be created ...
	I0328 01:07:56.988288 1130949 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 01:07:57.192153 1130949 system_pods.go:86] 9 kube-system pods found
	I0328 01:07:57.192188 1130949 system_pods.go:89] "coredns-76f75df574-2rn6k" [2a77c778-dd83-4e2e-b45a-ca16e3922b45] Running
	I0328 01:07:57.192194 1130949 system_pods.go:89] "coredns-76f75df574-pgcdh" [52452b24-490e-4999-b700-198c6f9b2fa1] Running
	I0328 01:07:57.192200 1130949 system_pods.go:89] "etcd-embed-certs-808809" [cba526ab-c8ca-4b90-abb8-533d1bf6cb4f] Running
	I0328 01:07:57.192205 1130949 system_pods.go:89] "kube-apiserver-embed-certs-808809" [02934ac6-1e07-4cbe-ba2f-4149e59d6044] Running
	I0328 01:07:57.192210 1130949 system_pods.go:89] "kube-controller-manager-embed-certs-808809" [b3350ff9-7abb-407e-939c-fcde61186c17] Running
	I0328 01:07:57.192214 1130949 system_pods.go:89] "kube-proxy-tjbhs" [cdb30ca1-5165-4e24-888a-df79af7987d0] Running
	I0328 01:07:57.192218 1130949 system_pods.go:89] "kube-scheduler-embed-certs-808809" [5b52a22d-6ffb-468b-b7b9-8f89b0dee3b8] Running
	I0328 01:07:57.192225 1130949 system_pods.go:89] "metrics-server-57f55c9bc5-bqbfl" [8434fd7d-838b-4cf2-96a3-e4d613633871] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:07:57.192230 1130949 system_pods.go:89] "storage-provisioner" [20c1951e-7da8-4025-bbcf-2da60f87f3ab] Running
	I0328 01:07:57.192239 1130949 system_pods.go:126] duration metric: took 203.942878ms to wait for k8s-apps to be running ...
	I0328 01:07:57.192249 1130949 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:07:57.192301 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:07:57.209840 1130949 system_svc.go:56] duration metric: took 17.576605ms WaitForService to wait for kubelet
	I0328 01:07:57.209883 1130949 kubeadm.go:576] duration metric: took 4.163683877s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:07:57.209918 1130949 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:07:57.388321 1130949 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:07:57.388347 1130949 node_conditions.go:123] node cpu capacity is 2
	I0328 01:07:57.388357 1130949 node_conditions.go:105] duration metric: took 178.433633ms to run NodePressure ...
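	The capacity figures above (17734596Ki ephemeral storage, 2 CPUs) are read from the node object and can be inspected the same way with kubectl (standard commands, not part of the test run):
	  kubectl get node embed-certs-808809 -o jsonpath='{.status.capacity}{"\n"}'
	  kubectl describe node embed-certs-808809 | grep -A8 'Conditions:'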
	I0328 01:07:57.388370 1130949 start.go:240] waiting for startup goroutines ...
	I0328 01:07:57.388377 1130949 start.go:245] waiting for cluster config update ...
	I0328 01:07:57.388387 1130949 start.go:254] writing updated cluster config ...
	I0328 01:07:57.388784 1130949 ssh_runner.go:195] Run: rm -f paused
	I0328 01:07:57.446699 1130949 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0328 01:07:57.448951 1130949 out.go:177] * Done! kubectl is now configured to use "embed-certs-808809" cluster and "default" namespace by default
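	At this point the host kubeconfig has been switched to the new profile, which can be confirmed with (a sketch):
	  kubectl config current-context   # should print embed-certs-808809
	  kubectl get nodes -o wide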
	I0328 01:07:56.373123 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:58.872454 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:04.023273 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:08:04.023535 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:08:01.372711 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:03.877734 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:06.374031 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:07.366164 1130827 pod_ready.go:81] duration metric: took 4m0.000887668s for pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace to be "Ready" ...
	E0328 01:08:07.366245 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0328 01:08:07.366271 1130827 pod_ready.go:38] duration metric: took 4m7.906522585s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:08:07.366301 1130827 kubeadm.go:591] duration metric: took 4m15.27169704s to restartPrimaryControlPlane
	W0328 01:08:07.366368 1130827 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:08:07.366406 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:08:16.281280 1131600 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.245904746s)
	I0328 01:08:16.281365 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:08:16.298463 1131600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:08:16.310406 1131600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:08:16.321387 1131600 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:08:16.321415 1131600 kubeadm.go:156] found existing configuration files:
	
	I0328 01:08:16.321475 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0328 01:08:16.331965 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:08:16.332033 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:08:16.343030 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0328 01:08:16.353193 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:08:16.353254 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:08:16.363865 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0328 01:08:16.374276 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:08:16.374346 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:08:16.385300 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0328 01:08:16.396118 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:08:16.396181 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:08:16.406896 1131600 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:08:16.626615 1131600 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:08:24.024091 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:08:24.024388 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:08:25.420974 1131600 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0328 01:08:25.421059 1131600 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:08:25.421154 1131600 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:08:25.421300 1131600 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:08:25.421547 1131600 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:08:25.421649 1131600 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:08:25.423435 1131600 out.go:204]   - Generating certificates and keys ...
	I0328 01:08:25.423549 1131600 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:08:25.423630 1131600 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:08:25.423749 1131600 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:08:25.423844 1131600 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:08:25.423956 1131600 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:08:25.424058 1131600 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:08:25.424166 1131600 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:08:25.424260 1131600 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:08:25.424375 1131600 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:08:25.424489 1131600 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:08:25.424552 1131600 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:08:25.424642 1131600 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:08:25.424700 1131600 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:08:25.424765 1131600 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:08:25.424832 1131600 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:08:25.424920 1131600 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:08:25.424982 1131600 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:08:25.425106 1131600 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:08:25.425207 1131600 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:08:25.426863 1131600 out.go:204]   - Booting up control plane ...
	I0328 01:08:25.427001 1131600 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:08:25.427108 1131600 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:08:25.427205 1131600 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:08:25.427327 1131600 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:08:25.427431 1131600 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:08:25.427491 1131600 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:08:25.427686 1131600 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:08:25.427784 1131600 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003000 seconds
	I0328 01:08:25.427897 1131600 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 01:08:25.428032 1131600 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 01:08:25.428109 1131600 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 01:08:25.428325 1131600 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-283961 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 01:08:25.428408 1131600 kubeadm.go:309] [bootstrap-token] Using token: g6jusr.8nbqw788gjbu8fwz
	I0328 01:08:25.430595 1131600 out.go:204]   - Configuring RBAC rules ...
	I0328 01:08:25.430734 1131600 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 01:08:25.430837 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 01:08:25.430981 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 01:08:25.431163 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 01:08:25.431357 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 01:08:25.431481 1131600 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 01:08:25.431670 1131600 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 01:08:25.431726 1131600 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 01:08:25.431767 1131600 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 01:08:25.431774 1131600 kubeadm.go:309] 
	I0328 01:08:25.431819 1131600 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 01:08:25.431829 1131600 kubeadm.go:309] 
	I0328 01:08:25.431893 1131600 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 01:08:25.431900 1131600 kubeadm.go:309] 
	I0328 01:08:25.431934 1131600 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 01:08:25.432028 1131600 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 01:08:25.432089 1131600 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 01:08:25.432114 1131600 kubeadm.go:309] 
	I0328 01:08:25.432178 1131600 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 01:08:25.432186 1131600 kubeadm.go:309] 
	I0328 01:08:25.432245 1131600 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 01:08:25.432255 1131600 kubeadm.go:309] 
	I0328 01:08:25.432342 1131600 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 01:08:25.432454 1131600 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 01:08:25.432566 1131600 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 01:08:25.432576 1131600 kubeadm.go:309] 
	I0328 01:08:25.432719 1131600 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 01:08:25.432812 1131600 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 01:08:25.432825 1131600 kubeadm.go:309] 
	I0328 01:08:25.432914 1131600 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token g6jusr.8nbqw788gjbu8fwz \
	I0328 01:08:25.433018 1131600 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 \
	I0328 01:08:25.433052 1131600 kubeadm.go:309] 	--control-plane 
	I0328 01:08:25.433058 1131600 kubeadm.go:309] 
	I0328 01:08:25.433135 1131600 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 01:08:25.433143 1131600 kubeadm.go:309] 
	I0328 01:08:25.433222 1131600 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token g6jusr.8nbqw788gjbu8fwz \
	I0328 01:08:25.433318 1131600 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 
	I0328 01:08:25.433337 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:08:25.433346 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:08:25.434943 1131600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:08:25.436103 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:08:25.483149 1131600 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:08:25.508422 1131600 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 01:08:25.508514 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:25.508518 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-283961 minikube.k8s.io/updated_at=2024_03_28T01_08_25_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=default-k8s-diff-port-283961 minikube.k8s.io/primary=true
	I0328 01:08:25.537955 1131600 ops.go:34] apiserver oom_adj: -16
	I0328 01:08:25.738462 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:26.239473 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:26.739478 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:27.238883 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:27.738830 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:28.239281 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:28.738643 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:29.238703 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:29.739025 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:30.239127 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:30.739473 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:31.239461 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:31.739480 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:32.239525 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:32.738543 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:33.239468 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:33.739475 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:34.238558 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:34.739550 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:35.239400 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:35.738766 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:36.239384 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:36.738797 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:37.238736 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:37.739543 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:37.850963 1131600 kubeadm.go:1107] duration metric: took 12.342521507s to wait for elevateKubeSystemPrivileges
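	The long run of repeated "kubectl get sa default" calls above is a poll: minikube retries until kubeadm has created the "default" ServiceAccount in the cluster, then logs the total wait as the elevateKubeSystemPrivileges duration. Below is a minimal sketch of such a retry loop, assuming the half-second spacing and the binary/kubeconfig paths seen in the log; it is illustrative only and not minikube's actual implementation.

	// Illustrative only: wait for the "default" ServiceAccount to exist,
	// similar in spirit to the repeated "kubectl get sa default" calls above.
	// The 500ms interval and paths are assumptions taken from the log.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil // service account exists; safe to continue bring-up
			}
			time.Sleep(500 * time.Millisecond) // roughly matches the spacing of the log lines
		}
		return fmt.Errorf("timed out waiting for the default service account")
	}

	func main() {
		err := waitForDefaultServiceAccount(
			"/var/lib/minikube/binaries/v1.29.3/kubectl",
			"/var/lib/minikube/kubeconfig",
			30*time.Second,
		)
		fmt.Println("wait result:", err)
	}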
	W0328 01:08:37.851011 1131600 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 01:08:37.851024 1131600 kubeadm.go:393] duration metric: took 5m17.339661641s to StartCluster
	I0328 01:08:37.851048 1131600 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:08:37.851164 1131600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:08:37.853862 1131600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:08:37.854264 1131600 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 01:08:37.856170 1131600 out.go:177] * Verifying Kubernetes components...
	I0328 01:08:37.854341 1131600 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 01:08:37.854447 1131600 config.go:182] Loaded profile config "default-k8s-diff-port-283961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:08:37.857860 1131600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:08:37.857864 1131600 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-283961"
	I0328 01:08:37.857878 1131600 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-283961"
	I0328 01:08:37.857885 1131600 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-283961"
	I0328 01:08:37.857909 1131600 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-283961"
	I0328 01:08:37.857912 1131600 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-283961"
	W0328 01:08:37.857923 1131600 addons.go:243] addon storage-provisioner should already be in state true
	I0328 01:08:37.857928 1131600 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-283961"
	W0328 01:08:37.857941 1131600 addons.go:243] addon metrics-server should already be in state true
	I0328 01:08:37.857970 1131600 host.go:66] Checking if "default-k8s-diff-port-283961" exists ...
	I0328 01:08:37.857983 1131600 host.go:66] Checking if "default-k8s-diff-port-283961" exists ...
	I0328 01:08:37.858330 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.858363 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.858403 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.858436 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.858335 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.858509 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.881197 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32799
	I0328 01:08:37.881230 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35101
	I0328 01:08:37.881244 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0328 01:08:37.881857 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.881882 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.882021 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.882460 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.882482 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.882523 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.882540 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.882585 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.882601 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.882934 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.882992 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.883007 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.883239 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.883592 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.883620 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.883625 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.883644 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.887335 1131600 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-283961"
	W0328 01:08:37.887359 1131600 addons.go:243] addon default-storageclass should already be in state true
	I0328 01:08:37.887390 1131600 host.go:66] Checking if "default-k8s-diff-port-283961" exists ...
	I0328 01:08:37.887745 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.887779 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.901416 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37383
	I0328 01:08:37.901909 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.902530 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.902559 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.902967 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.903211 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.904529 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36633
	I0328 01:08:37.905034 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.905268 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:08:37.907486 1131600 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:08:37.905802 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.909062 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.909180 1131600 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:08:37.909196 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 01:08:37.909218 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:08:37.909555 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.909794 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.911251 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33539
	I0328 01:08:37.911845 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.911995 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:08:37.913838 1131600 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 01:08:37.912457 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.913039 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.913804 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:08:37.915256 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.915268 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:08:37.915288 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 01:08:37.915297 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.915303 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 01:08:37.915321 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:08:37.915492 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:08:37.915674 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:08:37.915894 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:08:37.916689 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.917364 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.917410 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.918302 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.918651 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:08:37.918678 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.918944 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:08:37.919117 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:08:37.919267 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:08:37.919386 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:08:37.935233 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45659
	I0328 01:08:37.935750 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.936283 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.936301 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.936691 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.936872 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.938736 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:08:37.939016 1131600 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 01:08:37.939042 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 01:08:37.939065 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:08:37.941653 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.941967 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:08:37.941991 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.942199 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:08:37.942405 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:08:37.942575 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:08:37.942761 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:08:38.109817 1131600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:08:38.134996 1131600 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-283961" to be "Ready" ...
	I0328 01:08:38.158252 1131600 node_ready.go:49] node "default-k8s-diff-port-283961" has status "Ready":"True"
	I0328 01:08:38.158286 1131600 node_ready.go:38] duration metric: took 23.249221ms for node "default-k8s-diff-port-283961" to be "Ready" ...
	I0328 01:08:38.158305 1131600 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:08:38.170391 1131600 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-gdv5x" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:38.277223 1131600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:08:38.299923 1131600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 01:08:38.300686 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 01:08:38.300707 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 01:08:38.355800 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 01:08:38.355837 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 01:08:38.464742 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:08:38.464769 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 01:08:38.542696 1131600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:08:39.644116 1131600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.344141889s)
	I0328 01:08:39.644184 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644189 1131600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.366934481s)
	I0328 01:08:39.644197 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644210 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644219 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644620 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.644644 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.644654 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644664 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644846 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.644865 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.644890 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644905 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644987 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.645004 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.645154 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.645171 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.708104 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.708143 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.708543 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.708567 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.739487 1131600 pod_ready.go:92] pod "coredns-76f75df574-gdv5x" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.739515 1131600 pod_ready.go:81] duration metric: took 1.569088177s for pod "coredns-76f75df574-gdv5x" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.739526 1131600 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-qzcfp" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.797314 1131600 pod_ready.go:92] pod "coredns-76f75df574-qzcfp" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.797347 1131600 pod_ready.go:81] duration metric: took 57.813218ms for pod "coredns-76f75df574-qzcfp" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.797366 1131600 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.830784 1131600 pod_ready.go:92] pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.830865 1131600 pod_ready.go:81] duration metric: took 33.488753ms for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.830886 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.852459 1131600 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.852489 1131600 pod_ready.go:81] duration metric: took 21.594748ms for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.852501 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.862630 1131600 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.862658 1131600 pod_ready.go:81] duration metric: took 10.149867ms for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.862674 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-js7j2" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.893124 1131600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.350363727s)
	I0328 01:08:39.893191 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.893216 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.893559 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Closing plugin on server side
	I0328 01:08:39.893568 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.893617 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.893634 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.893646 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.893985 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Closing plugin on server side
	I0328 01:08:39.893985 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.894013 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.894031 1131600 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-283961"
	I0328 01:08:39.896978 1131600 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0328 01:08:39.898636 1131600 addons.go:505] duration metric: took 2.044292782s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0328 01:08:40.138962 1131600 pod_ready.go:92] pod "kube-proxy-js7j2" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:40.138994 1131600 pod_ready.go:81] duration metric: took 276.313147ms for pod "kube-proxy-js7j2" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:40.139006 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:40.538892 1131600 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:40.538917 1131600 pod_ready.go:81] duration metric: took 399.903327ms for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:40.538925 1131600 pod_ready.go:38] duration metric: took 2.380606168s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:08:40.538943 1131600 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:08:40.539009 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:08:40.561639 1131600 api_server.go:72] duration metric: took 2.707321816s to wait for apiserver process to appear ...
	I0328 01:08:40.561681 1131600 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:08:40.561709 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:08:40.568521 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 200:
	ok
	I0328 01:08:40.570016 1131600 api_server.go:141] control plane version: v1.29.3
	I0328 01:08:40.570060 1131600 api_server.go:131] duration metric: took 8.369036ms to wait for apiserver health ...
	I0328 01:08:40.570071 1131600 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:08:39.696094 1130827 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.32965227s)
	I0328 01:08:39.696193 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:08:39.717556 1130827 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:08:39.730434 1130827 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:08:39.746521 1130827 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:08:39.746567 1130827 kubeadm.go:156] found existing configuration files:
	
	I0328 01:08:39.746644 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:08:39.758252 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:08:39.758352 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:08:39.771929 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:08:39.785312 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:08:39.785400 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:08:39.800685 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:08:39.814982 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:08:39.815073 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:08:39.828804 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:08:39.841984 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:08:39.842074 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:08:39.854502 1130827 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:08:40.089742 1130827 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:08:40.742900 1131600 system_pods.go:59] 9 kube-system pods found
	I0328 01:08:40.742938 1131600 system_pods.go:61] "coredns-76f75df574-gdv5x" [5b4b835c-ae9d-4eff-ab37-6ccb7e36a748] Running
	I0328 01:08:40.742945 1131600 system_pods.go:61] "coredns-76f75df574-qzcfp" [8e7bfa94-f249-4f7a-be7b-9a615810c956] Running
	I0328 01:08:40.742951 1131600 system_pods.go:61] "etcd-default-k8s-diff-port-283961" [eb26a2e2-882b-4bc0-9ca1-7fe88b1c6c7e] Running
	I0328 01:08:40.742958 1131600 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-283961" [1c31e7a3-507e-43ea-96cf-1d182a1e0875] Running
	I0328 01:08:40.742964 1131600 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-283961" [5bbc6650-b37c-4b10-bc6e-cceb214f8881] Running
	I0328 01:08:40.742968 1131600 system_pods.go:61] "kube-proxy-js7j2" [1f31a6ee-9417-4f4a-ba0b-fdbab6a9169d] Running
	I0328 01:08:40.742972 1131600 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-283961" [3a5691f7-d6d4-45e7-aea7-94fe1cfa541a] Running
	I0328 01:08:40.742980 1131600 system_pods.go:61] "metrics-server-57f55c9bc5-gkv67" [7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:08:40.742986 1131600 system_pods.go:61] "storage-provisioner" [cb80efe2-521f-45d5-84e7-f6dc216b4c6d] Running
	I0328 01:08:40.742998 1131600 system_pods.go:74] duration metric: took 172.918886ms to wait for pod list to return data ...
	I0328 01:08:40.743010 1131600 default_sa.go:34] waiting for default service account to be created ...
	I0328 01:08:40.939208 1131600 default_sa.go:45] found service account: "default"
	I0328 01:08:40.939255 1131600 default_sa.go:55] duration metric: took 196.220048ms for default service account to be created ...
	I0328 01:08:40.939266 1131600 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 01:08:41.144986 1131600 system_pods.go:86] 9 kube-system pods found
	I0328 01:08:41.145023 1131600 system_pods.go:89] "coredns-76f75df574-gdv5x" [5b4b835c-ae9d-4eff-ab37-6ccb7e36a748] Running
	I0328 01:08:41.145030 1131600 system_pods.go:89] "coredns-76f75df574-qzcfp" [8e7bfa94-f249-4f7a-be7b-9a615810c956] Running
	I0328 01:08:41.145034 1131600 system_pods.go:89] "etcd-default-k8s-diff-port-283961" [eb26a2e2-882b-4bc0-9ca1-7fe88b1c6c7e] Running
	I0328 01:08:41.145039 1131600 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-283961" [1c31e7a3-507e-43ea-96cf-1d182a1e0875] Running
	I0328 01:08:41.145043 1131600 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-283961" [5bbc6650-b37c-4b10-bc6e-cceb214f8881] Running
	I0328 01:08:41.145047 1131600 system_pods.go:89] "kube-proxy-js7j2" [1f31a6ee-9417-4f4a-ba0b-fdbab6a9169d] Running
	I0328 01:08:41.145051 1131600 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-283961" [3a5691f7-d6d4-45e7-aea7-94fe1cfa541a] Running
	I0328 01:08:41.145058 1131600 system_pods.go:89] "metrics-server-57f55c9bc5-gkv67" [7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:08:41.145062 1131600 system_pods.go:89] "storage-provisioner" [cb80efe2-521f-45d5-84e7-f6dc216b4c6d] Running
	I0328 01:08:41.145072 1131600 system_pods.go:126] duration metric: took 205.800485ms to wait for k8s-apps to be running ...
	I0328 01:08:41.145083 1131600 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:08:41.145131 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:08:41.163220 1131600 system_svc.go:56] duration metric: took 18.120266ms WaitForService to wait for kubelet
	I0328 01:08:41.163255 1131600 kubeadm.go:576] duration metric: took 3.308947131s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:08:41.163280 1131600 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:08:41.339219 1131600 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:08:41.339247 1131600 node_conditions.go:123] node cpu capacity is 2
	I0328 01:08:41.339292 1131600 node_conditions.go:105] duration metric: took 176.004328ms to run NodePressure ...
	I0328 01:08:41.339306 1131600 start.go:240] waiting for startup goroutines ...
	I0328 01:08:41.339317 1131600 start.go:245] waiting for cluster config update ...
	I0328 01:08:41.339334 1131600 start.go:254] writing updated cluster config ...
	I0328 01:08:41.339656 1131600 ssh_runner.go:195] Run: rm -f paused
	I0328 01:08:41.399111 1131600 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0328 01:08:41.401360 1131600 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-283961" cluster and "default" namespace by default
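	Earlier in this startup the apiserver health was confirmed by an HTTPS GET to https://192.168.39.224:8444/healthz that returned 200 with body "ok". The sketch below shows that kind of probe; TLS verification is skipped purely for brevity (an assumption, since a real client would trust the cluster CA from the kubeconfig instead).

	// Illustrative healthz probe against the apiserver endpoint from the log.
	// InsecureSkipVerify is a simplification for the sketch only.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		return nil
	}

	func main() {
		if err := checkHealthz("https://192.168.39.224:8444/healthz"); err != nil {
			fmt.Println("apiserver not healthy:", err)
		}
	}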
	I0328 01:08:49.653091 1130827 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-beta.0
	I0328 01:08:49.653205 1130827 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:08:49.653327 1130827 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:08:49.653468 1130827 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:08:49.653576 1130827 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:08:49.653666 1130827 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:08:49.656419 1130827 out.go:204]   - Generating certificates and keys ...
	I0328 01:08:49.656503 1130827 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:08:49.656583 1130827 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:08:49.656669 1130827 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:08:49.656775 1130827 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:08:49.656903 1130827 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:08:49.656973 1130827 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:08:49.657057 1130827 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:08:49.657138 1130827 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:08:49.657246 1130827 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:08:49.657362 1130827 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:08:49.657415 1130827 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:08:49.657510 1130827 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:08:49.657601 1130827 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:08:49.657713 1130827 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:08:49.657811 1130827 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:08:49.657900 1130827 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:08:49.657980 1130827 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:08:49.658074 1130827 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:08:49.658160 1130827 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:08:49.659588 1130827 out.go:204]   - Booting up control plane ...
	I0328 01:08:49.659669 1130827 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:08:49.659771 1130827 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:08:49.659855 1130827 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:08:49.659962 1130827 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:08:49.660075 1130827 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:08:49.660139 1130827 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:08:49.660309 1130827 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0328 01:08:49.660426 1130827 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0328 01:08:49.660518 1130827 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.594495ms
	I0328 01:08:49.660610 1130827 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0328 01:08:49.660691 1130827 kubeadm.go:309] [api-check] The API server is healthy after 5.502996727s
	I0328 01:08:49.660830 1130827 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 01:08:49.660975 1130827 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 01:08:49.661028 1130827 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 01:08:49.661198 1130827 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-248059 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 01:08:49.661283 1130827 kubeadm.go:309] [bootstrap-token] Using token: 4jnfa0.q3dre6ogqbxtw8j0
	I0328 01:08:49.662907 1130827 out.go:204]   - Configuring RBAC rules ...
	I0328 01:08:49.663014 1130827 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 01:08:49.663090 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 01:08:49.663239 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 01:08:49.663379 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 01:08:49.663484 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 01:08:49.663576 1130827 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 01:08:49.663688 1130827 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 01:08:49.663750 1130827 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 01:08:49.663811 1130827 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 01:08:49.663820 1130827 kubeadm.go:309] 
	I0328 01:08:49.663871 1130827 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 01:08:49.663877 1130827 kubeadm.go:309] 
	I0328 01:08:49.663976 1130827 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 01:08:49.663984 1130827 kubeadm.go:309] 
	I0328 01:08:49.664004 1130827 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 01:08:49.664080 1130827 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 01:08:49.664144 1130827 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 01:08:49.664151 1130827 kubeadm.go:309] 
	I0328 01:08:49.664202 1130827 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 01:08:49.664209 1130827 kubeadm.go:309] 
	I0328 01:08:49.664246 1130827 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 01:08:49.664252 1130827 kubeadm.go:309] 
	I0328 01:08:49.664301 1130827 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 01:08:49.664370 1130827 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 01:08:49.664436 1130827 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 01:08:49.664444 1130827 kubeadm.go:309] 
	I0328 01:08:49.664515 1130827 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 01:08:49.664600 1130827 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 01:08:49.664607 1130827 kubeadm.go:309] 
	I0328 01:08:49.664678 1130827 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4jnfa0.q3dre6ogqbxtw8j0 \
	I0328 01:08:49.664764 1130827 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 \
	I0328 01:08:49.664783 1130827 kubeadm.go:309] 	--control-plane 
	I0328 01:08:49.664789 1130827 kubeadm.go:309] 
	I0328 01:08:49.664856 1130827 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 01:08:49.664863 1130827 kubeadm.go:309] 
	I0328 01:08:49.664938 1130827 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4jnfa0.q3dre6ogqbxtw8j0 \
	I0328 01:08:49.665073 1130827 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 
	I0328 01:08:49.665117 1130827 cni.go:84] Creating CNI manager for ""
	I0328 01:08:49.665130 1130827 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:08:49.667556 1130827 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:08:49.668776 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:08:49.680262 1130827 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
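	The 457-byte /etc/cni/net.d/1-k8s.conflist written here is not reproduced in the log. As a rough illustration of the bridge CNI conflist format only (placeholder subnet and bridge name; not the file minikube actually writes), the sketch below emits a generic bridge + portmap configuration.

	// Illustrative only: a generic bridge + portmap CNI conflist. The JSON
	// content is an assumption about the format, not minikube's real file,
	// and it is written to /tmp to keep the example harmless.
	package main

	import "os"

	const exampleConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`

	func main() {
		_ = os.WriteFile("/tmp/1-k8s.conflist.example", []byte(exampleConflist), 0o644)
	}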
	I0328 01:08:49.701490 1130827 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 01:08:49.701557 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:49.701606 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-248059 minikube.k8s.io/updated_at=2024_03_28T01_08_49_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=no-preload-248059 minikube.k8s.io/primary=true
	I0328 01:08:49.734009 1130827 ops.go:34] apiserver oom_adj: -16
	I0328 01:08:49.901866 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:50.402635 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:50.902480 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:51.402417 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:51.902253 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:52.402411 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:52.901926 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:53.402394 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:53.902738 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:54.401878 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:54.901920 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:55.401878 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:55.902140 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:56.402863 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:56.901970 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:57.402088 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:57.901869 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:58.402056 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:58.902333 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:59.402753 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:59.902930 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:00.402623 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:00.901863 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:01.402264 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:01.902054 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:02.402212 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:02.503310 1130827 kubeadm.go:1107] duration metric: took 12.80181586s to wait for elevateKubeSystemPrivileges
	W0328 01:09:02.503352 1130827 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 01:09:02.503362 1130827 kubeadm.go:393] duration metric: took 5m10.46697508s to StartCluster
	I0328 01:09:02.503380 1130827 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:09:02.503482 1130827 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:09:02.505909 1130827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:09:02.506302 1130827 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 01:09:02.508103 1130827 out.go:177] * Verifying Kubernetes components...
	I0328 01:09:02.506385 1130827 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 01:09:02.506502 1130827 config.go:182] Loaded profile config "no-preload-248059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0328 01:09:02.509509 1130827 addons.go:69] Setting default-storageclass=true in profile "no-preload-248059"
	I0328 01:09:02.509519 1130827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:09:02.509517 1130827 addons.go:69] Setting metrics-server=true in profile "no-preload-248059"
	I0328 01:09:02.509542 1130827 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-248059"
	I0328 01:09:02.509559 1130827 addons.go:234] Setting addon metrics-server=true in "no-preload-248059"
	W0328 01:09:02.509580 1130827 addons.go:243] addon metrics-server should already be in state true
	I0328 01:09:02.509509 1130827 addons.go:69] Setting storage-provisioner=true in profile "no-preload-248059"
	I0328 01:09:02.509623 1130827 host.go:66] Checking if "no-preload-248059" exists ...
	I0328 01:09:02.509636 1130827 addons.go:234] Setting addon storage-provisioner=true in "no-preload-248059"
	W0328 01:09:02.509690 1130827 addons.go:243] addon storage-provisioner should already be in state true
	I0328 01:09:02.509729 1130827 host.go:66] Checking if "no-preload-248059" exists ...
	I0328 01:09:02.510005 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.510009 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.510049 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.510050 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.510053 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.510085 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.528082 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42381
	I0328 01:09:02.528124 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43929
	I0328 01:09:02.528714 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.528738 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.529081 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34573
	I0328 01:09:02.529378 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.529397 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.529444 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.529464 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.529465 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.529791 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.529849 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.529948 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.529965 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.529950 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.530389 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.530437 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.530472 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.531004 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.531058 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.534108 1130827 addons.go:234] Setting addon default-storageclass=true in "no-preload-248059"
	W0328 01:09:02.534134 1130827 addons.go:243] addon default-storageclass should already be in state true
	I0328 01:09:02.534173 1130827 host.go:66] Checking if "no-preload-248059" exists ...
	I0328 01:09:02.534563 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.534592 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.546812 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0328 01:09:02.547478 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.547999 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.548031 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.548370 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.548616 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.549185 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37367
	I0328 01:09:02.549663 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.550365 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.550390 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.550772 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.550787 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:09:02.550977 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.553075 1130827 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 01:09:02.554750 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 01:09:02.554769 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 01:09:02.552577 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:09:02.554788 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:09:02.553550 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45153
	I0328 01:09:02.556534 1130827 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:09:02.555339 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.558480 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.563734 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:09:02.563773 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.563823 1130827 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:09:02.563846 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 01:09:02.563876 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:09:02.564584 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.564604 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.564633 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:09:02.564933 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:09:02.565025 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.565458 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:09:02.565593 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.565617 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.565745 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:09:02.569766 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.570083 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:09:02.570104 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.570413 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:09:02.570778 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:09:02.570975 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:09:02.571142 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:09:02.589503 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41085
	I0328 01:09:02.590061 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.590641 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.590661 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.591065 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.591310 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.593270 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:09:02.593665 1130827 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 01:09:02.593696 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 01:09:02.593717 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:09:02.596796 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.597270 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:09:02.597298 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.597460 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:09:02.597637 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:09:02.597807 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:09:02.597937 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:09:02.705837 1130827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:09:02.727955 1130827 node_ready.go:35] waiting up to 6m0s for node "no-preload-248059" to be "Ready" ...
	I0328 01:09:02.737291 1130827 node_ready.go:49] node "no-preload-248059" has status "Ready":"True"
	I0328 01:09:02.737325 1130827 node_ready.go:38] duration metric: took 9.337953ms for node "no-preload-248059" to be "Ready" ...
	I0328 01:09:02.737338 1130827 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:09:02.741939 1130827 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.749157 1130827 pod_ready.go:92] pod "etcd-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.749192 1130827 pod_ready.go:81] duration metric: took 7.224004ms for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.749205 1130827 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.755106 1130827 pod_ready.go:92] pod "kube-apiserver-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.755132 1130827 pod_ready.go:81] duration metric: took 5.919446ms for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.755144 1130827 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.761123 1130827 pod_ready.go:92] pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.761171 1130827 pod_ready.go:81] duration metric: took 6.017877ms for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.761187 1130827 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.773958 1130827 pod_ready.go:92] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.773983 1130827 pod_ready.go:81] duration metric: took 12.787671ms for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.773991 1130827 pod_ready.go:38] duration metric: took 36.637128ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:09:02.774008 1130827 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:09:02.774068 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:09:02.794342 1130827 api_server.go:72] duration metric: took 287.989042ms to wait for apiserver process to appear ...
	I0328 01:09:02.794376 1130827 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:09:02.794408 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:09:02.826957 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0328 01:09:02.830377 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 01:09:02.830399 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 01:09:02.837250 1130827 api_server.go:141] control plane version: v1.30.0-beta.0
	I0328 01:09:02.837284 1130827 api_server.go:131] duration metric: took 42.898933ms to wait for apiserver health ...
	I0328 01:09:02.837295 1130827 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:09:02.838515 1130827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:09:02.865482 1130827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 01:09:02.880510 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 01:09:02.880544 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 01:09:02.933895 1130827 system_pods.go:59] 4 kube-system pods found
	I0328 01:09:02.933958 1130827 system_pods.go:61] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:02.933967 1130827 system_pods.go:61] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:02.933973 1130827 system_pods.go:61] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:02.933977 1130827 system_pods.go:61] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:02.933984 1130827 system_pods.go:74] duration metric: took 96.68223ms to wait for pod list to return data ...
	I0328 01:09:02.933994 1130827 default_sa.go:34] waiting for default service account to be created ...
	I0328 01:09:02.939507 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:09:02.939538 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 01:09:02.994042 1130827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:09:03.160934 1130827 default_sa.go:45] found service account: "default"
	I0328 01:09:03.160971 1130827 default_sa.go:55] duration metric: took 226.968222ms for default service account to be created ...
	I0328 01:09:03.160982 1130827 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 01:09:03.396511 1130827 system_pods.go:86] 7 kube-system pods found
	I0328 01:09:03.396549 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending
	I0328 01:09:03.396554 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending
	I0328 01:09:03.396558 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:03.396562 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:03.396567 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:03.396575 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:09:03.396580 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:03.396601 1130827 retry.go:31] will retry after 288.008379ms: missing components: kube-dns, kube-proxy
	I0328 01:09:03.697645 1130827 system_pods.go:86] 7 kube-system pods found
	I0328 01:09:03.697688 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:03.697697 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:03.697704 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:03.697710 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:03.697720 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:03.697726 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:09:03.697730 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:03.697750 1130827 retry.go:31] will retry after 356.016468ms: missing components: kube-dns, kube-proxy
	I0328 01:09:03.962535 1130827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.097008499s)
	I0328 01:09:03.962614 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.962633 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.963093 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.963119 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.963129 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.963139 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.963406 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.963424 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.964335 1130827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.125788348s)
	I0328 01:09:03.964375 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.964384 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.964712 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:03.964740 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.964763 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.964776 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.964785 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.965054 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.965125 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.965142 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:04.002303 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:04.002340 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:04.002744 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:04.002766 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:04.062017 1130827 system_pods.go:86] 8 kube-system pods found
	I0328 01:09:04.062096 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.062111 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.062121 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:04.062132 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:04.062158 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:04.062172 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:09:04.062180 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:04.062192 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:09:04.062220 1130827 retry.go:31] will retry after 477.684804ms: missing components: kube-dns, kube-proxy
	I0328 01:09:04.574661 1130827 system_pods.go:86] 9 kube-system pods found
	I0328 01:09:04.574716 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.574728 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.574740 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:04.574748 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:04.574754 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:04.574761 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Running
	I0328 01:09:04.574768 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:04.574778 1130827 system_pods.go:89] "metrics-server-569cc877fc-frc5k" [d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:09:04.574799 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:09:04.574821 1130827 retry.go:31] will retry after 460.13955ms: missing components: kube-dns
	I0328 01:09:04.692708 1130827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.69861394s)
	I0328 01:09:04.692782 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:04.692798 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:04.693323 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:04.693366 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:04.693376 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:04.693384 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:04.693320 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:04.693818 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:04.693865 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:04.693879 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:04.693895 1130827 addons.go:470] Verifying addon metrics-server=true in "no-preload-248059"
	I0328 01:09:04.696310 1130827 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0328 01:09:04.025791 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:09:04.026055 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:09:04.026065 1131323 kubeadm.go:309] 
	I0328 01:09:04.026124 1131323 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0328 01:09:04.026172 1131323 kubeadm.go:309] 		timed out waiting for the condition
	I0328 01:09:04.026181 1131323 kubeadm.go:309] 
	I0328 01:09:04.026221 1131323 kubeadm.go:309] 	This error is likely caused by:
	I0328 01:09:04.026279 1131323 kubeadm.go:309] 		- The kubelet is not running
	I0328 01:09:04.026401 1131323 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0328 01:09:04.026411 1131323 kubeadm.go:309] 
	I0328 01:09:04.026529 1131323 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0328 01:09:04.026586 1131323 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0328 01:09:04.026632 1131323 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0328 01:09:04.026640 1131323 kubeadm.go:309] 
	I0328 01:09:04.026758 1131323 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0328 01:09:04.026884 1131323 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0328 01:09:04.026902 1131323 kubeadm.go:309] 
	I0328 01:09:04.027061 1131323 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0328 01:09:04.027222 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0328 01:09:04.027335 1131323 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0328 01:09:04.027429 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0328 01:09:04.027537 1131323 kubeadm.go:309] 
	I0328 01:09:04.029027 1131323 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:09:04.029164 1131323 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0328 01:09:04.029284 1131323 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0328 01:09:04.029477 1131323 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
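The wait-control-plane failure above comes down to the kubelet's local healthz endpoint (127.0.0.1:10248) refusing connections on the affected VM. A minimal manual re-check, assuming SSH access to that machine and using only the probes and the cri-o socket path already named in the kubeadm output above (not part of the recorded test output), would be:

		curl -sSL http://localhost:10248/healthz
		systemctl status kubelet --no-pager
		journalctl -xeu kubelet --no-pager | tail -n 100
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

If the healthz probe still refuses connections while the unit reports active, the journalctl output is usually where the misconfiguration (cgroup driver, swap, missing static pod manifests) shows up first.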
	
	I0328 01:09:04.029545 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:09:04.543275 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:09:04.562572 1131323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:09:04.577013 1131323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:09:04.577040 1131323 kubeadm.go:156] found existing configuration files:
	
	I0328 01:09:04.577102 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:09:04.590795 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:09:04.590885 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:09:04.604227 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:09:04.616720 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:09:04.616818 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:09:04.630095 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:09:04.643166 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:09:04.643259 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:09:04.658084 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:09:04.671786 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:09:04.671874 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:09:04.685852 1131323 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:09:04.779013 1131323 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0328 01:09:04.779113 1131323 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:09:04.964178 1131323 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:09:04.964317 1131323 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:09:04.964463 1131323 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:09:05.181712 1131323 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:09:05.183644 1131323 out.go:204]   - Generating certificates and keys ...
	I0328 01:09:05.183759 1131323 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:09:05.183851 1131323 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:09:05.183962 1131323 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:09:05.184042 1131323 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:09:05.184156 1131323 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:09:05.184244 1131323 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:09:05.184337 1131323 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:09:05.184424 1131323 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:09:05.184535 1131323 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:09:05.184633 1131323 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:09:05.184683 1131323 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:09:05.184758 1131323 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:09:04.698039 1130827 addons.go:505] duration metric: took 2.191652421s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0328 01:09:05.044303 1130827 system_pods.go:86] 9 kube-system pods found
	I0328 01:09:05.044340 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:05.044348 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:05.044354 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:05.044360 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:05.044366 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:05.044369 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Running
	I0328 01:09:05.044373 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:05.044378 1130827 system_pods.go:89] "metrics-server-569cc877fc-frc5k" [d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:09:05.044387 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:09:05.044406 1130827 retry.go:31] will retry after 486.01075ms: missing components: kube-dns
	I0328 01:09:05.539158 1130827 system_pods.go:86] 9 kube-system pods found
	I0328 01:09:05.539204 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Running
	I0328 01:09:05.539213 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Running
	I0328 01:09:05.539219 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:05.539226 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:05.539232 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:05.539238 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Running
	I0328 01:09:05.539244 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:05.539255 1130827 system_pods.go:89] "metrics-server-569cc877fc-frc5k" [d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:09:05.539260 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Running
	I0328 01:09:05.539274 1130827 system_pods.go:126] duration metric: took 2.37828469s to wait for k8s-apps to be running ...
	I0328 01:09:05.539292 1130827 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:09:05.539362 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:09:05.560593 1130827 system_svc.go:56] duration metric: took 21.288819ms WaitForService to wait for kubelet
	I0328 01:09:05.560628 1130827 kubeadm.go:576] duration metric: took 3.054281955s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:09:05.560657 1130827 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:09:05.564453 1130827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:09:05.564489 1130827 node_conditions.go:123] node cpu capacity is 2
	I0328 01:09:05.564502 1130827 node_conditions.go:105] duration metric: took 3.837449ms to run NodePressure ...
	I0328 01:09:05.564517 1130827 start.go:240] waiting for startup goroutines ...
	I0328 01:09:05.564527 1130827 start.go:245] waiting for cluster config update ...
	I0328 01:09:05.564542 1130827 start.go:254] writing updated cluster config ...
	I0328 01:09:05.564843 1130827 ssh_runner.go:195] Run: rm -f paused
	I0328 01:09:05.623218 1130827 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-beta.0 (minor skew: 1)
	I0328 01:09:05.625408 1130827 out.go:177] * Done! kubectl is now configured to use "no-preload-248059" cluster and "default" namespace by default
	I0328 01:09:05.587190 1131323 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:09:05.923219 1131323 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:09:06.087945 1131323 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:09:06.245638 1131323 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:09:06.266195 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:09:06.267461 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:09:06.267551 1131323 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:09:06.434155 1131323 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:09:06.436300 1131323 out.go:204]   - Booting up control plane ...
	I0328 01:09:06.436447 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:09:06.446573 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:09:06.447461 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:09:06.448313 1131323 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:09:06.450917 1131323 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:09:46.453199 1131323 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0328 01:09:46.453386 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:09:46.453643 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:09:51.454402 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:09:51.454665 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:10:01.455189 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:10:01.455417 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:10:21.456491 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:10:21.456726 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:11:01.456972 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:11:01.457256 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:11:01.457269 1131323 kubeadm.go:309] 
	I0328 01:11:01.457310 1131323 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0328 01:11:01.457404 1131323 kubeadm.go:309] 		timed out waiting for the condition
	I0328 01:11:01.457441 1131323 kubeadm.go:309] 
	I0328 01:11:01.457492 1131323 kubeadm.go:309] 	This error is likely caused by:
	I0328 01:11:01.457550 1131323 kubeadm.go:309] 		- The kubelet is not running
	I0328 01:11:01.457696 1131323 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0328 01:11:01.457708 1131323 kubeadm.go:309] 
	I0328 01:11:01.457856 1131323 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0328 01:11:01.457906 1131323 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0328 01:11:01.457935 1131323 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0328 01:11:01.457943 1131323 kubeadm.go:309] 
	I0328 01:11:01.458033 1131323 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0328 01:11:01.458139 1131323 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0328 01:11:01.458155 1131323 kubeadm.go:309] 
	I0328 01:11:01.458331 1131323 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0328 01:11:01.458483 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0328 01:11:01.458594 1131323 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0328 01:11:01.458707 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0328 01:11:01.458718 1131323 kubeadm.go:309] 
	I0328 01:11:01.459597 1131323 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:11:01.459737 1131323 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0328 01:11:01.459822 1131323 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0328 01:11:01.459962 1131323 kubeadm.go:393] duration metric: took 7m59.227261729s to StartCluster
	I0328 01:11:01.460023 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:11:01.460167 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:11:01.522644 1131323 cri.go:89] found id: ""
	I0328 01:11:01.522687 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.522700 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:11:01.522710 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:11:01.522782 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:11:01.567898 1131323 cri.go:89] found id: ""
	I0328 01:11:01.567928 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.567937 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:11:01.567945 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:11:01.568005 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:11:01.604782 1131323 cri.go:89] found id: ""
	I0328 01:11:01.604810 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.604819 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:11:01.604825 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:11:01.604935 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:11:01.642875 1131323 cri.go:89] found id: ""
	I0328 01:11:01.642908 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.642920 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:11:01.642929 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:11:01.642993 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:11:01.682186 1131323 cri.go:89] found id: ""
	I0328 01:11:01.682216 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.682223 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:11:01.682241 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:11:01.682312 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:11:01.720654 1131323 cri.go:89] found id: ""
	I0328 01:11:01.720689 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.720697 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:11:01.720704 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:11:01.720759 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:11:01.757340 1131323 cri.go:89] found id: ""
	I0328 01:11:01.757372 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.757383 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:11:01.757392 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:11:01.757462 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:11:01.797426 1131323 cri.go:89] found id: ""
	I0328 01:11:01.797462 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.797473 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:11:01.797488 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:11:01.797506 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:11:01.859582 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:11:01.859623 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:11:01.876027 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:11:01.876073 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:11:01.966513 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:11:01.966539 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:11:01.966557 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:11:02.084853 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:11:02.084894 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0328 01:11:02.127221 1131323 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0328 01:11:02.127288 1131323 out.go:239] * 
	W0328 01:11:02.127417 1131323 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0328 01:11:02.127456 1131323 out.go:239] * 
	W0328 01:11:02.128313 1131323 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 01:11:02.131916 1131323 out.go:177] 
	W0328 01:11:02.133288 1131323 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0328 01:11:02.133351 1131323 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0328 01:11:02.133381 1131323 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0328 01:11:02.134991 1131323 out.go:177] 
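	Note: the failure above is minikube's K8S_KUBELET_NOT_RUNNING exit path — kubeadm's wait-control-plane phase timed out because the kubelet health endpoint on 127.0.0.1:10248 never answered. The log's own suggestions boil down to the following triage on the affected guest (a minimal sketch only; the profile name and any other start flags are not shown in this excerpt and would need to be filled in):
	
		# on the node (e.g. via `minikube ssh`): check whether the kubelet ever came up
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet
		# list control-plane containers under CRI-O, as the kubeadm output recommends
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		# retry with the cgroup-driver hint from the suggestion line above
		minikube start --extra-config=kubelet.cgroup-driver=systemd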
	
	
	==> CRI-O <==
	Mar 28 01:18:07 no-preload-248059 crio[709]: time="2024-03-28 01:18:07.774932745Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711588687774908623,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97399,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34890ef8-5afa-4401-9e87-4fd2cb44c567 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:18:07 no-preload-248059 crio[709]: time="2024-03-28 01:18:07.775409206Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c3813612-9d55-43f8-b625-2675f725da2f name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:18:07 no-preload-248059 crio[709]: time="2024-03-28 01:18:07.775483796Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c3813612-9d55-43f8-b625-2675f725da2f name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:18:07 no-preload-248059 crio[709]: time="2024-03-28 01:18:07.775785651Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2b9a3a8eb8ca9f67fd3d07a31c8119852de5aa2dc7c5dfa2c9dc35a2cc0f49fb,PodSandboxId:0920cf2e87dde2e665fd7b735c88708c4e59d37201f8a0a28e813da8b143468b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711588144939175336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcee5b1-4531-4068-bce7-081d51602015,},Annotations:map[string]string{io.kubernetes.container.hash: 3272bc23,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e25e21af79e013a4972f47527934ce39ffb915fc2de57e34d6c67f4bcbeb3c47,PodSandboxId:53c8b47dbdbb932b1ee62a0c91f702f7689b513c8fb781e5278e402f007c54aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588144460731334,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8zzf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91f329ea-6d6d-45dc-ac77-40a2739249b4,},Annotations:map[string]string{io.kubernetes.container.hash: 3eee10e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f72c2ab4e509668aad306c31e730305e50bd950a3215e2d1aca869727d99b2f,PodSandboxId:59c32c596fb1b4aa2d2ca503f7fb700ff702eb4ca5c25156c4f65be3b7bb5a9f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588144393868121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qtgp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa
c5a4d0-acf3-426c-a81e-d129f94d58f3,},Annotations:map[string]string{io.kubernetes.container.hash: f233c0b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4233f922b7075b65b10e11b3e4e6100d66f2a5d7c2bac926615979defb1956c0,PodSandboxId:3772dd558dadbcaac1079e4aaaa39ba86d97b6da9f327f3d435314a6106066a2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,State:CONTAINER_RUNNING,CreatedAt:
1711588143678898175,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g5f6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9c30bc3-42b1-446f-838b-979489cf661d,},Annotations:map[string]string{io.kubernetes.container.hash: eaa40bd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc243955de3ae2bec25a51813f8f427b5858033de734b70866e943e518b6bd7,PodSandboxId:8ad3533817cba753b0aa138721b2785cc9c424618b40c07f487cd32cb6cd9c42,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711588123345258983,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e69d64566c92e3525f54abb99300da39,},Annotations:map[string]string{io.kubernetes.container.hash: 99e7a510,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985e4e157e023253fd3baf217e792648385a081fdff62f924e938ffe7eb2b80d,PodSandboxId:8314db254c90fe303494976ded25ab371bc516f0527f30576dbe6e5580f09ac6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_RUNNING,CreatedAt:1711588123285421818,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd8f772112ebebea502645fbe658d615,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20179eaa0c7f1d12ca086349dfdb854d43ee9515f46f5fb49de086de286cbc3c,PodSandboxId:bd5cbe84cfa9bc60450e4fd2635c4ff9bcac69d23ae1fb8e9040a9b99fc5f7ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_RUNNING,CreatedAt:1711588123253501175,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a38351dbe7f1abafd21396e32b13b05,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c238f08ea2841d188d12c6e86c187d972d4d38c3032bf9dffe8d5d0a2482debc,PodSandboxId:45bd5b0d85da6648fc0e145c9826d36915fc738285c0af6dfe315f954bbee165,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_RUNNING,CreatedAt:1711588123178228047,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f055c15fe98f52895520db52ff8bcf3b,},Annotations:map[string]string{io.kubernetes.container.hash: 54931dd1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c3813612-9d55-43f8-b625-2675f725da2f name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:18:07 no-preload-248059 crio[709]: time="2024-03-28 01:18:07.820720837Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=34e2a068-36da-408e-860a-040909a10924 name=/runtime.v1.RuntimeService/Version
	Mar 28 01:18:07 no-preload-248059 crio[709]: time="2024-03-28 01:18:07.820796001Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=34e2a068-36da-408e-860a-040909a10924 name=/runtime.v1.RuntimeService/Version
	Mar 28 01:18:07 no-preload-248059 crio[709]: time="2024-03-28 01:18:07.824070859Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0ab0ccd7-c0c4-4298-a10c-e9017ce7a753 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:18:07 no-preload-248059 crio[709]: time="2024-03-28 01:18:07.824445408Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711588687824421265,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97399,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ab0ccd7-c0c4-4298-a10c-e9017ce7a753 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:18:07 no-preload-248059 crio[709]: time="2024-03-28 01:18:07.825151831Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=713cf883-4fcf-478e-84d9-53f3238e9043 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:18:07 no-preload-248059 crio[709]: time="2024-03-28 01:18:07.825227991Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=713cf883-4fcf-478e-84d9-53f3238e9043 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:18:07 no-preload-248059 crio[709]: time="2024-03-28 01:18:07.825468380Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2b9a3a8eb8ca9f67fd3d07a31c8119852de5aa2dc7c5dfa2c9dc35a2cc0f49fb,PodSandboxId:0920cf2e87dde2e665fd7b735c88708c4e59d37201f8a0a28e813da8b143468b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711588144939175336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcee5b1-4531-4068-bce7-081d51602015,},Annotations:map[string]string{io.kubernetes.container.hash: 3272bc23,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e25e21af79e013a4972f47527934ce39ffb915fc2de57e34d6c67f4bcbeb3c47,PodSandboxId:53c8b47dbdbb932b1ee62a0c91f702f7689b513c8fb781e5278e402f007c54aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588144460731334,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8zzf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91f329ea-6d6d-45dc-ac77-40a2739249b4,},Annotations:map[string]string{io.kubernetes.container.hash: 3eee10e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f72c2ab4e509668aad306c31e730305e50bd950a3215e2d1aca869727d99b2f,PodSandboxId:59c32c596fb1b4aa2d2ca503f7fb700ff702eb4ca5c25156c4f65be3b7bb5a9f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588144393868121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qtgp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa
c5a4d0-acf3-426c-a81e-d129f94d58f3,},Annotations:map[string]string{io.kubernetes.container.hash: f233c0b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4233f922b7075b65b10e11b3e4e6100d66f2a5d7c2bac926615979defb1956c0,PodSandboxId:3772dd558dadbcaac1079e4aaaa39ba86d97b6da9f327f3d435314a6106066a2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,State:CONTAINER_RUNNING,CreatedAt:
1711588143678898175,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g5f6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9c30bc3-42b1-446f-838b-979489cf661d,},Annotations:map[string]string{io.kubernetes.container.hash: eaa40bd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc243955de3ae2bec25a51813f8f427b5858033de734b70866e943e518b6bd7,PodSandboxId:8ad3533817cba753b0aa138721b2785cc9c424618b40c07f487cd32cb6cd9c42,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711588123345258983,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e69d64566c92e3525f54abb99300da39,},Annotations:map[string]string{io.kubernetes.container.hash: 99e7a510,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985e4e157e023253fd3baf217e792648385a081fdff62f924e938ffe7eb2b80d,PodSandboxId:8314db254c90fe303494976ded25ab371bc516f0527f30576dbe6e5580f09ac6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_RUNNING,CreatedAt:1711588123285421818,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd8f772112ebebea502645fbe658d615,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20179eaa0c7f1d12ca086349dfdb854d43ee9515f46f5fb49de086de286cbc3c,PodSandboxId:bd5cbe84cfa9bc60450e4fd2635c4ff9bcac69d23ae1fb8e9040a9b99fc5f7ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_RUNNING,CreatedAt:1711588123253501175,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a38351dbe7f1abafd21396e32b13b05,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c238f08ea2841d188d12c6e86c187d972d4d38c3032bf9dffe8d5d0a2482debc,PodSandboxId:45bd5b0d85da6648fc0e145c9826d36915fc738285c0af6dfe315f954bbee165,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_RUNNING,CreatedAt:1711588123178228047,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f055c15fe98f52895520db52ff8bcf3b,},Annotations:map[string]string{io.kubernetes.container.hash: 54931dd1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=713cf883-4fcf-478e-84d9-53f3238e9043 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:18:07 no-preload-248059 crio[709]: time="2024-03-28 01:18:07.870873282Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5a5252b5-f369-41ff-8508-5f1a12d2a93f name=/runtime.v1.RuntimeService/Version
	Mar 28 01:18:07 no-preload-248059 crio[709]: time="2024-03-28 01:18:07.870947865Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5a5252b5-f369-41ff-8508-5f1a12d2a93f name=/runtime.v1.RuntimeService/Version
	Mar 28 01:18:07 no-preload-248059 crio[709]: time="2024-03-28 01:18:07.872266674Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=74dd9e6d-c995-4182-9cef-439ec9107496 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:18:07 no-preload-248059 crio[709]: time="2024-03-28 01:18:07.872766592Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711588687872742693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97399,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74dd9e6d-c995-4182-9cef-439ec9107496 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:18:07 no-preload-248059 crio[709]: time="2024-03-28 01:18:07.873350602Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ec48513-f43b-480e-a452-0edc45d700eb name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:18:07 no-preload-248059 crio[709]: time="2024-03-28 01:18:07.873408616Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ec48513-f43b-480e-a452-0edc45d700eb name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:18:07 no-preload-248059 crio[709]: time="2024-03-28 01:18:07.873742657Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2b9a3a8eb8ca9f67fd3d07a31c8119852de5aa2dc7c5dfa2c9dc35a2cc0f49fb,PodSandboxId:0920cf2e87dde2e665fd7b735c88708c4e59d37201f8a0a28e813da8b143468b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711588144939175336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcee5b1-4531-4068-bce7-081d51602015,},Annotations:map[string]string{io.kubernetes.container.hash: 3272bc23,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e25e21af79e013a4972f47527934ce39ffb915fc2de57e34d6c67f4bcbeb3c47,PodSandboxId:53c8b47dbdbb932b1ee62a0c91f702f7689b513c8fb781e5278e402f007c54aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588144460731334,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8zzf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91f329ea-6d6d-45dc-ac77-40a2739249b4,},Annotations:map[string]string{io.kubernetes.container.hash: 3eee10e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f72c2ab4e509668aad306c31e730305e50bd950a3215e2d1aca869727d99b2f,PodSandboxId:59c32c596fb1b4aa2d2ca503f7fb700ff702eb4ca5c25156c4f65be3b7bb5a9f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588144393868121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qtgp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa
c5a4d0-acf3-426c-a81e-d129f94d58f3,},Annotations:map[string]string{io.kubernetes.container.hash: f233c0b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4233f922b7075b65b10e11b3e4e6100d66f2a5d7c2bac926615979defb1956c0,PodSandboxId:3772dd558dadbcaac1079e4aaaa39ba86d97b6da9f327f3d435314a6106066a2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,State:CONTAINER_RUNNING,CreatedAt:
1711588143678898175,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g5f6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9c30bc3-42b1-446f-838b-979489cf661d,},Annotations:map[string]string{io.kubernetes.container.hash: eaa40bd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc243955de3ae2bec25a51813f8f427b5858033de734b70866e943e518b6bd7,PodSandboxId:8ad3533817cba753b0aa138721b2785cc9c424618b40c07f487cd32cb6cd9c42,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711588123345258983,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e69d64566c92e3525f54abb99300da39,},Annotations:map[string]string{io.kubernetes.container.hash: 99e7a510,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985e4e157e023253fd3baf217e792648385a081fdff62f924e938ffe7eb2b80d,PodSandboxId:8314db254c90fe303494976ded25ab371bc516f0527f30576dbe6e5580f09ac6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_RUNNING,CreatedAt:1711588123285421818,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd8f772112ebebea502645fbe658d615,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20179eaa0c7f1d12ca086349dfdb854d43ee9515f46f5fb49de086de286cbc3c,PodSandboxId:bd5cbe84cfa9bc60450e4fd2635c4ff9bcac69d23ae1fb8e9040a9b99fc5f7ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_RUNNING,CreatedAt:1711588123253501175,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a38351dbe7f1abafd21396e32b13b05,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c238f08ea2841d188d12c6e86c187d972d4d38c3032bf9dffe8d5d0a2482debc,PodSandboxId:45bd5b0d85da6648fc0e145c9826d36915fc738285c0af6dfe315f954bbee165,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_RUNNING,CreatedAt:1711588123178228047,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f055c15fe98f52895520db52ff8bcf3b,},Annotations:map[string]string{io.kubernetes.container.hash: 54931dd1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0ec48513-f43b-480e-a452-0edc45d700eb name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:18:07 no-preload-248059 crio[709]: time="2024-03-28 01:18:07.910681534Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d84f95c9-1779-4498-b2a0-09cb1b823b5a name=/runtime.v1.RuntimeService/Version
	Mar 28 01:18:07 no-preload-248059 crio[709]: time="2024-03-28 01:18:07.910757985Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d84f95c9-1779-4498-b2a0-09cb1b823b5a name=/runtime.v1.RuntimeService/Version
	Mar 28 01:18:07 no-preload-248059 crio[709]: time="2024-03-28 01:18:07.912970104Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b1a1f8b4-48e3-409c-b74d-deadcde8df28 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:18:07 no-preload-248059 crio[709]: time="2024-03-28 01:18:07.913380960Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711588687913356520,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97399,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b1a1f8b4-48e3-409c-b74d-deadcde8df28 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:18:07 no-preload-248059 crio[709]: time="2024-03-28 01:18:07.914286087Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26f427ea-b186-43c4-be9d-997ebf8fef8d name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:18:07 no-preload-248059 crio[709]: time="2024-03-28 01:18:07.914357375Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26f427ea-b186-43c4-be9d-997ebf8fef8d name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:18:07 no-preload-248059 crio[709]: time="2024-03-28 01:18:07.914645529Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2b9a3a8eb8ca9f67fd3d07a31c8119852de5aa2dc7c5dfa2c9dc35a2cc0f49fb,PodSandboxId:0920cf2e87dde2e665fd7b735c88708c4e59d37201f8a0a28e813da8b143468b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711588144939175336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcee5b1-4531-4068-bce7-081d51602015,},Annotations:map[string]string{io.kubernetes.container.hash: 3272bc23,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e25e21af79e013a4972f47527934ce39ffb915fc2de57e34d6c67f4bcbeb3c47,PodSandboxId:53c8b47dbdbb932b1ee62a0c91f702f7689b513c8fb781e5278e402f007c54aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588144460731334,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8zzf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91f329ea-6d6d-45dc-ac77-40a2739249b4,},Annotations:map[string]string{io.kubernetes.container.hash: 3eee10e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f72c2ab4e509668aad306c31e730305e50bd950a3215e2d1aca869727d99b2f,PodSandboxId:59c32c596fb1b4aa2d2ca503f7fb700ff702eb4ca5c25156c4f65be3b7bb5a9f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588144393868121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qtgp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa
c5a4d0-acf3-426c-a81e-d129f94d58f3,},Annotations:map[string]string{io.kubernetes.container.hash: f233c0b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4233f922b7075b65b10e11b3e4e6100d66f2a5d7c2bac926615979defb1956c0,PodSandboxId:3772dd558dadbcaac1079e4aaaa39ba86d97b6da9f327f3d435314a6106066a2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,State:CONTAINER_RUNNING,CreatedAt:
1711588143678898175,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g5f6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9c30bc3-42b1-446f-838b-979489cf661d,},Annotations:map[string]string{io.kubernetes.container.hash: eaa40bd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc243955de3ae2bec25a51813f8f427b5858033de734b70866e943e518b6bd7,PodSandboxId:8ad3533817cba753b0aa138721b2785cc9c424618b40c07f487cd32cb6cd9c42,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711588123345258983,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e69d64566c92e3525f54abb99300da39,},Annotations:map[string]string{io.kubernetes.container.hash: 99e7a510,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985e4e157e023253fd3baf217e792648385a081fdff62f924e938ffe7eb2b80d,PodSandboxId:8314db254c90fe303494976ded25ab371bc516f0527f30576dbe6e5580f09ac6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_RUNNING,CreatedAt:1711588123285421818,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd8f772112ebebea502645fbe658d615,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20179eaa0c7f1d12ca086349dfdb854d43ee9515f46f5fb49de086de286cbc3c,PodSandboxId:bd5cbe84cfa9bc60450e4fd2635c4ff9bcac69d23ae1fb8e9040a9b99fc5f7ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_RUNNING,CreatedAt:1711588123253501175,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a38351dbe7f1abafd21396e32b13b05,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c238f08ea2841d188d12c6e86c187d972d4d38c3032bf9dffe8d5d0a2482debc,PodSandboxId:45bd5b0d85da6648fc0e145c9826d36915fc738285c0af6dfe315f954bbee165,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_RUNNING,CreatedAt:1711588123178228047,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f055c15fe98f52895520db52ff8bcf3b,},Annotations:map[string]string{io.kubernetes.container.hash: 54931dd1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=26f427ea-b186-43c4-be9d-997ebf8fef8d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2b9a3a8eb8ca9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   0920cf2e87dde       storage-provisioner
	e25e21af79e01       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   53c8b47dbdbb9       coredns-7db6d8ff4d-8zzf5
	9f72c2ab4e509       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   59c32c596fb1b       coredns-7db6d8ff4d-qtgp9
	4233f922b7075       3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8   9 minutes ago       Running             kube-proxy                0                   3772dd558dadb       kube-proxy-g5f6g
	9bc243955de3a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   8ad3533817cba       etcd-no-preload-248059
	985e4e157e023       746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac   9 minutes ago       Running             kube-scheduler            2                   8314db254c90f       kube-scheduler-no-preload-248059
	20179eaa0c7f1       f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841   9 minutes ago       Running             kube-controller-manager   2                   bd5cbe84cfa9b       kube-controller-manager-no-preload-248059
	c238f08ea2841       c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa   9 minutes ago       Running             kube-apiserver            2                   45bd5b0d85da6       kube-apiserver-no-preload-248059
	
	
	==> coredns [9f72c2ab4e509668aad306c31e730305e50bd950a3215e2d1aca869727d99b2f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [e25e21af79e013a4972f47527934ce39ffb915fc2de57e34d6c67f4bcbeb3c47] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-248059
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-248059
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=no-preload-248059
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_28T01_08_49_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 01:08:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-248059
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 01:18:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 01:14:15 +0000   Thu, 28 Mar 2024 01:08:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 01:14:15 +0000   Thu, 28 Mar 2024 01:08:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 01:14:15 +0000   Thu, 28 Mar 2024 01:08:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 01:14:15 +0000   Thu, 28 Mar 2024 01:08:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.107
	  Hostname:    no-preload-248059
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 89bcb29939fb40d7ac8ffbe51d037041
	  System UUID:                89bcb299-39fb-40d7-ac8f-fbe51d037041
	  Boot ID:                    0ed144c6-e0e9-469d-b22e-b6114c7629e1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-beta.0
	  Kube-Proxy Version:         v1.30.0-beta.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-8zzf5                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 coredns-7db6d8ff4d-qtgp9                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 etcd-no-preload-248059                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-no-preload-248059             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-no-preload-248059    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-g5f6g                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 kube-scheduler-no-preload-248059             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-569cc877fc-frc5k              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m4s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m3s   kube-proxy       
	  Normal  Starting                 9m20s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m19s  kubelet          Node no-preload-248059 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s  kubelet          Node no-preload-248059 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s  kubelet          Node no-preload-248059 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m6s   node-controller  Node no-preload-248059 event: Registered Node no-preload-248059 in Controller
	
	
	==> dmesg <==
	[  +0.041276] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.795924] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.836951] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.681573] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.226730] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.063221] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071498] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.177482] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.182987] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.332151] systemd-fstab-generator[692]: Ignoring "noauto" option for root device
	[ +17.175029] systemd-fstab-generator[1204]: Ignoring "noauto" option for root device
	[  +0.062262] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.204612] systemd-fstab-generator[1329]: Ignoring "noauto" option for root device
	[  +2.954320] kauditd_printk_skb: 97 callbacks suppressed
	[Mar28 01:04] kauditd_printk_skb: 52 callbacks suppressed
	[  +9.133446] kauditd_printk_skb: 20 callbacks suppressed
	[Mar28 01:08] kauditd_printk_skb: 4 callbacks suppressed
	[  +2.764349] systemd-fstab-generator[3840]: Ignoring "noauto" option for root device
	[  +6.602970] systemd-fstab-generator[4160]: Ignoring "noauto" option for root device
	[  +0.088395] kauditd_printk_skb: 57 callbacks suppressed
	[Mar28 01:09] systemd-fstab-generator[4368]: Ignoring "noauto" option for root device
	[  +0.091370] kauditd_printk_skb: 12 callbacks suppressed
	[ +57.456959] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [9bc243955de3ae2bec25a51813f8f427b5858033de734b70866e943e518b6bd7] <==
	{"level":"info","ts":"2024-03-28T01:08:43.878875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a1421f129b0f3c4 switched to configuration voters=(8796693291831718852)"}
	{"level":"info","ts":"2024-03-28T01:08:43.879062Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"740117290cb61fd6","local-member-id":"7a1421f129b0f3c4","added-peer-id":"7a1421f129b0f3c4","added-peer-peer-urls":["https://192.168.61.107:2380"]}
	{"level":"info","ts":"2024-03-28T01:08:43.896707Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-28T01:08:43.896881Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.107:2380"}
	{"level":"info","ts":"2024-03-28T01:08:43.897031Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.107:2380"}
	{"level":"info","ts":"2024-03-28T01:08:43.897022Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7a1421f129b0f3c4","initial-advertise-peer-urls":["https://192.168.61.107:2380"],"listen-peer-urls":["https://192.168.61.107:2380"],"advertise-client-urls":["https://192.168.61.107:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.107:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-28T01:08:43.897087Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-28T01:08:44.114647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a1421f129b0f3c4 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-28T01:08:44.114748Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a1421f129b0f3c4 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-28T01:08:44.114792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a1421f129b0f3c4 received MsgPreVoteResp from 7a1421f129b0f3c4 at term 1"}
	{"level":"info","ts":"2024-03-28T01:08:44.114811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a1421f129b0f3c4 became candidate at term 2"}
	{"level":"info","ts":"2024-03-28T01:08:44.114817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a1421f129b0f3c4 received MsgVoteResp from 7a1421f129b0f3c4 at term 2"}
	{"level":"info","ts":"2024-03-28T01:08:44.114824Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a1421f129b0f3c4 became leader at term 2"}
	{"level":"info","ts":"2024-03-28T01:08:44.114834Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7a1421f129b0f3c4 elected leader 7a1421f129b0f3c4 at term 2"}
	{"level":"info","ts":"2024-03-28T01:08:44.118853Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7a1421f129b0f3c4","local-member-attributes":"{Name:no-preload-248059 ClientURLs:[https://192.168.61.107:2379]}","request-path":"/0/members/7a1421f129b0f3c4/attributes","cluster-id":"740117290cb61fd6","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-28T01:08:44.119039Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T01:08:44.119222Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T01:08:44.122645Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T01:08:44.124772Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-28T01:08:44.124896Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-28T01:08:44.129098Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-28T01:08:44.133509Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.107:2379"}
	{"level":"info","ts":"2024-03-28T01:08:44.171921Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"740117290cb61fd6","local-member-id":"7a1421f129b0f3c4","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T01:08:44.172062Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T01:08:44.172118Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 01:18:08 up 14 min,  0 users,  load average: 0.38, 0.35, 0.22
	Linux no-preload-248059 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c238f08ea2841d188d12c6e86c187d972d4d38c3032bf9dffe8d5d0a2482debc] <==
	I0328 01:12:05.221844       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:13:46.013490       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:13:46.013705       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0328 01:13:47.014002       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:13:47.014073       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0328 01:13:47.014083       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:13:47.014126       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:13:47.014171       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0328 01:13:47.015361       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:14:47.015270       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:14:47.015495       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0328 01:14:47.015530       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:14:47.015737       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:14:47.015823       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0328 01:14:47.017005       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:16:47.016744       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:16:47.017231       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	W0328 01:16:47.017297       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:16:47.017399       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0328 01:16:47.017430       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0328 01:16:47.018624       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [20179eaa0c7f1d12ca086349dfdb854d43ee9515f46f5fb49de086de286cbc3c] <==
	I0328 01:12:32.815889       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:13:02.369983       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:13:02.826251       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:13:32.377469       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:13:32.837052       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:14:02.383918       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:14:02.845986       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:14:32.390516       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:14:32.859012       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0328 01:14:48.041804       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="285.782µs"
	I0328 01:15:02.041958       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="75.484µs"
	E0328 01:15:02.396063       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:15:02.870822       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:15:32.402298       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:15:32.881297       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:16:02.409459       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:16:02.894307       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:16:32.415360       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:16:32.902749       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:17:02.422386       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:17:02.911758       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:17:32.428116       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:17:32.921213       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:18:02.435464       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:18:02.930367       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [4233f922b7075b65b10e11b3e4e6100d66f2a5d7c2bac926615979defb1956c0] <==
	I0328 01:09:04.971527       1 server_linux.go:69] "Using iptables proxy"
	I0328 01:09:05.005404       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.107"]
	I0328 01:09:05.107825       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0328 01:09:05.107929       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 01:09:05.107960       1 server_linux.go:165] "Using iptables Proxier"
	I0328 01:09:05.112423       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 01:09:05.112848       1 server.go:872] "Version info" version="v1.30.0-beta.0"
	I0328 01:09:05.113114       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:09:05.114770       1 config.go:192] "Starting service config controller"
	I0328 01:09:05.114843       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0328 01:09:05.114898       1 config.go:101] "Starting endpoint slice config controller"
	I0328 01:09:05.114915       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0328 01:09:05.115708       1 config.go:319] "Starting node config controller"
	I0328 01:09:05.115761       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0328 01:09:05.215634       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0328 01:09:05.215864       1 shared_informer.go:320] Caches are synced for service config
	I0328 01:09:05.216216       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [985e4e157e023253fd3baf217e792648385a081fdff62f924e938ffe7eb2b80d] <==
	W0328 01:08:46.078456       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0328 01:08:46.078484       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0328 01:08:46.078534       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0328 01:08:46.078634       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0328 01:08:46.078662       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0328 01:08:46.078687       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0328 01:08:46.079007       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0328 01:08:46.079095       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0328 01:08:46.894692       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0328 01:08:46.894748       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0328 01:08:47.021016       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0328 01:08:47.021167       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0328 01:08:47.077631       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0328 01:08:47.077689       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0328 01:08:47.312889       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0328 01:08:47.312951       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0328 01:08:47.338273       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0328 01:08:47.338330       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 01:08:47.348239       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0328 01:08:47.348350       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0328 01:08:47.363004       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0328 01:08:47.363069       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0328 01:08:47.397932       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0328 01:08:47.397991       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:08:49.464402       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 28 01:17:48 no-preload-248059 kubelet[4167]: E0328 01:17:48.021661    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:17:48 no-preload-248059 kubelet[4167]: E0328 01:17:48.022106    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:17:48 no-preload-248059 kubelet[4167]: E0328 01:17:48.022157    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:17:48 no-preload-248059 kubelet[4167]: E0328 01:17:48.024116    4167 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-frc5k" podUID="d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd"
	Mar 28 01:17:49 no-preload-248059 kubelet[4167]: E0328 01:17:49.065658    4167 iptables.go:577] "Could not set up iptables canary" err=<
	Mar 28 01:17:49 no-preload-248059 kubelet[4167]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 01:17:49 no-preload-248059 kubelet[4167]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 01:17:49 no-preload-248059 kubelet[4167]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 01:17:49 no-preload-248059 kubelet[4167]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 01:17:50 no-preload-248059 kubelet[4167]: E0328 01:17:50.021197    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:17:50 no-preload-248059 kubelet[4167]: E0328 01:17:50.021242    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:17:50 no-preload-248059 kubelet[4167]: E0328 01:17:50.021247    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:17:51 no-preload-248059 kubelet[4167]: E0328 01:17:51.021215    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:17:51 no-preload-248059 kubelet[4167]: E0328 01:17:51.021701    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:17:51 no-preload-248059 kubelet[4167]: E0328 01:17:51.021922    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:17:52 no-preload-248059 kubelet[4167]: E0328 01:17:52.021377    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:17:52 no-preload-248059 kubelet[4167]: E0328 01:17:52.021441    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:17:52 no-preload-248059 kubelet[4167]: E0328 01:17:52.021448    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:18:02 no-preload-248059 kubelet[4167]: E0328 01:18:02.021332    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:18:02 no-preload-248059 kubelet[4167]: E0328 01:18:02.021449    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:18:02 no-preload-248059 kubelet[4167]: E0328 01:18:02.021458    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:18:02 no-preload-248059 kubelet[4167]: E0328 01:18:02.023525    4167 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-frc5k" podUID="d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd"
	Mar 28 01:18:05 no-preload-248059 kubelet[4167]: E0328 01:18:05.021746    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:18:05 no-preload-248059 kubelet[4167]: E0328 01:18:05.021839    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:18:05 no-preload-248059 kubelet[4167]: E0328 01:18:05.021850    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	
	
	==> storage-provisioner [2b9a3a8eb8ca9f67fd3d07a31c8119852de5aa2dc7c5dfa2c9dc35a2cc0f49fb] <==
	I0328 01:09:05.107439       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0328 01:09:05.130640       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0328 01:09:05.130718       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0328 01:09:05.144443       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0328 01:09:05.144673       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-248059_7c0dcb76-d036-45a2-95c2-ef87401c31ce!
	I0328 01:09:05.148985       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fd5ac8e3-42ab-4e5e-876e-864a1f13c990", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-248059_7c0dcb76-d036-45a2-95c2-ef87401c31ce became leader
	I0328 01:09:05.246773       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-248059_7c0dcb76-d036-45a2-95c2-ef87401c31ce!
	

                                                
                                                
-- /stdout --
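The dump above is the node's post-mortem log output captured after the failure. A comparable bundle can be pulled by hand from the same profile; a minimal sketch, assuming the no-preload-248059 profile from this run is still up:

	out/minikube-linux-amd64 -p no-preload-248059 logs                     # aggregated kubelet, container, dmesg and describe-nodes output, as above
	out/minikube-linux-amd64 -p no-preload-248059 ssh "sudo crictl ps -a"  # raw container status straight from CRI-O on the node
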
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-248059 -n no-preload-248059
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-248059 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-frc5k
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-248059 describe pod metrics-server-569cc877fc-frc5k
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-248059 describe pod metrics-server-569cc877fc-frc5k: exit status 1 (66.651419ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-frc5k" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-248059 describe pod metrics-server-569cc877fc-frc5k: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.24s)
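The post-mortem above shows metrics-server-569cc877fc-frc5k as the only non-running pod, and that pod name no longer existed by the time the describe ran (hence the NotFound). The same check can be replayed by hand; a minimal sketch using the two kubectl commands shown in the trace, assuming the no-preload-248059 context is still available:

	# list pods not in phase Running, across all namespaces
	kubectl --context no-preload-248059 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'
	# describe the reported pod; here it returns NotFound because that pod name is gone
	kubectl --context no-preload-248059 describe pod metrics-server-569cc877fc-frc5k
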

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
E0328 01:11:14.355723 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
E0328 01:11:15.902116 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/auto-443419/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
E0328 01:11:19.153861 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/calico-443419/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
E0328 01:11:21.207419 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
    [the warning above repeated 10 more times with identical output]
E0328 01:11:47.936427 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/custom-flannel-443419/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
    [the warning above repeated 2 more times with identical output]
E0328 01:11:51.710390 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kindnet-443419/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
    [the warning above repeated 19 more times with identical output]
E0328 01:12:10.821312 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/enable-default-cni-443419/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
    [the warning above repeated 30 more times with identical output]
E0328 01:12:42.198369 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/calico-443419/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
    [the warning above repeated 28 more times with identical output]
E0328 01:13:10.981456 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/custom-flannel-443419/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
    [the warning above repeated 12 more times with identical output]
E0328 01:13:24.183084 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
    [the warning above repeated 9 more times with identical output]
E0328 01:13:33.866431 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/enable-default-cni-443419/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
    [the warning above repeated once more with identical output]
E0328 01:13:36.487718 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/bridge-443419/client.crt: no such file or directory
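The interleaved E0328 … cert_rotation.go:168 errors are emitted by client-go's certificate-rotation watcher inside the test process: it keeps trying to reload client.crt files for earlier network-plugin profiles (custom-flannel-443419, kindnet-443419, enable-default-cni-443419, calico-443419, flannel-443419, bridge-443419) whose certificate files are no longer on disk, most likely because those profiles were deleted earlier in the run while clients created for them are still held by the process. A minimal way to confirm such a stale reference, using bridge-443419 from the log above as the example and assuming the kubeconfig entries are named after the minikube profile (the default):

    kubectl config view | grep -A 3 "bridge-443419"
    ls -l /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/bridge-443419/client.crt

The first command shows the kubeconfig entry still pointing at the profile's client certificate; the second fails because the file is gone, matching the "no such file or directory" error above.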
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
    [the warning above repeated 56 more times with identical output]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
E0328 01:14:47.230366 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
(previous warning repeated 5 more times)
E0328 01:14:52.855014 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/auto-443419/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
(previous warning repeated 5 more times)
E0328 01:14:59.531111 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/bridge-443419/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
(previous warning repeated 28 more times)
E0328 01:15:28.662627 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kindnet-443419/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
(previous warning repeated 45 more times)
E0328 01:16:14.356321 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
(previous warning repeated 4 more times)
E0328 01:16:19.153569 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/calico-443419/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
E0328 01:16:21.207678 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
(previous warning repeated 26 more times)
E0328 01:16:47.935686 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/custom-flannel-443419/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
(previous warning repeated 20 more times)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
E0328 01:17:10.821406 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/enable-default-cni-443419/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
(previous warning repeated 72 more times)
E0328 01:18:24.182412 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
(previous warning repeated 11 more times)
E0328 01:18:36.487946 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/bridge-443419/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
(previous warning repeated 47 more times)
E0328 01:19:24.258287 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
(previous warning repeated 22 more times)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
E0328 01:19:52.855439 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/auto-443419/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
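The final warning above ends with "rate: Wait(n=1) would exceed context deadline", which is the message golang.org/x/time/rate produces when a throttled client-go request cannot obtain a rate-limiter token before the surrounding context expires. A minimal sketch of that behaviour follows; the 5 QPS / burst-1 values are illustrative assumptions, not minikube's or the test harness's actual client settings.

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// client-go throttles API requests with a token-bucket limiter built on
	// golang.org/x/time/rate; the QPS/burst values here are for illustration only.
	lim := rate.NewLimiter(rate.Limit(5), 1)

	// Consume the single burst token so the next Wait has to queue for a new one.
	_ = lim.Wait(context.Background())

	// Give the next request less time than the ~200ms needed for another token.
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()

	if err := lim.Wait(ctx); err != nil {
		fmt.Println(err) // rate: Wait(n=1) would exceed context deadline
	}
}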
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-986088 -n old-k8s-version-986088
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-986088 -n old-k8s-version-986088: exit status 2 (260.827858ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-986088" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
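The repeated WARNING lines and the final "failed to start within 9m0s: context deadline exceeded" reflect the harness polling for a Running pod matching k8s-app=kubernetes-dashboard until a nine-minute context expires, with every list call failing while the apiserver refuses connections. The following is a minimal client-go sketch of that pattern, not the test's actual helper; the function name, kubeconfig source, and 3-second poll interval are assumptions for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabeledPod is a hypothetical helper: it lists pods matching the label
// selector and returns once one reports phase Running, or fails when the
// caller's context deadline expires.
func waitForLabeledPod(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			// Mirrors the WARNING lines above: the list keeps failing while the
			// apiserver is unreachable (connection refused).
			fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
		} else {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // "context deadline exceeded" once the 9m budget is spent
		case <-time.After(3 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()
	err = waitForLabeledPod(ctx, cs, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard")
	fmt.Println("wait result:", err)
}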
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-986088 -n old-k8s-version-986088
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-986088 -n old-k8s-version-986088: exit status 2 (259.249825ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
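Both status probes above pass a Go text/template through --format ({{.APIServer}} and {{.Host}}) and render a single field of minikube's status value, which is why the -- stdout -- blocks contain only "Stopped" and "Running". Below is a small sketch of how such a template flag is evaluated; the Status struct and its field set are simplified assumptions, not minikube's actual type.

package main

import (
	"os"
	"text/template"
)

// Status is a simplified stand-in for the struct that minikube renders with
// --format; the real type carries more fields and richer state strings.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}

	// --format={{.APIServer}} is an ordinary Go template applied to the status value.
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
	if err := tmpl.Execute(os.Stdout, st); err != nil { // prints "Stopped"
		panic(err)
	}
}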
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-986088 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-986088 logs -n 25: (1.596231849s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| stop    | -p no-preload-248059                                   | no-preload-248059            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-808809            | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-808809                                  | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p newest-cni-013642             | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p newest-cni-013642                  | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p newest-cni-013642 --memory=2200 --alsologtostderr   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:56 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |                |                     |                     |
	| image   | newest-cni-013642 image list                           | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	|         | --format=json                                          |                              |         |                |                     |                     |
	| pause   | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| unpause | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| delete  | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	| delete  | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	| start   | -p                                                     | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:57 UTC |
	|         | default-k8s-diff-port-283961                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-986088        | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-248059                  | no-preload-248059            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-283961  | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC | 28 Mar 24 00:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | default-k8s-diff-port-283961                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| start   | -p no-preload-248059 --memory=2200                     | no-preload-248059            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC | 28 Mar 24 01:09 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-808809                 | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-808809                                  | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC | 28 Mar 24 01:07 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-986088                              | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC | 28 Mar 24 00:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-986088             | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC | 28 Mar 24 00:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-986088                              | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-283961       | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 01:00 UTC | 28 Mar 24 01:08 UTC |
	|         | default-k8s-diff-port-283961                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 01:00:05
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 01:00:05.675380 1131600 out.go:291] Setting OutFile to fd 1 ...
	I0328 01:00:05.675675 1131600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 01:00:05.675710 1131600 out.go:304] Setting ErrFile to fd 2...
	I0328 01:00:05.675718 1131600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 01:00:05.676017 1131600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 01:00:05.676919 1131600 out.go:298] Setting JSON to false
	I0328 01:00:05.678046 1131600 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":31303,"bootTime":1711556303,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0328 01:00:05.678129 1131600 start.go:139] virtualization: kvm guest
	I0328 01:00:05.681128 1131600 out.go:177] * [default-k8s-diff-port-283961] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0328 01:00:05.683139 1131600 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 01:00:05.683129 1131600 notify.go:220] Checking for updates...
	I0328 01:00:05.685082 1131600 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 01:00:05.686765 1131600 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:00:05.688389 1131600 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	I0328 01:00:05.690187 1131600 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0328 01:00:05.691887 1131600 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 01:00:05.693775 1131600 config.go:182] Loaded profile config "default-k8s-diff-port-283961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:00:05.694270 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:00:05.694323 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:00:05.709757 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35333
	I0328 01:00:05.710275 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:00:05.710875 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:00:05.710900 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:00:05.711323 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:00:05.711531 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:00:05.711893 1131600 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 01:00:05.712342 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:00:05.712392 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:00:05.727583 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44249
	I0328 01:00:05.728107 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:00:05.728595 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:00:05.728625 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:00:05.728945 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:00:05.729170 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:00:05.763895 1131600 out.go:177] * Using the kvm2 driver based on existing profile
	I0328 01:00:05.765397 1131600 start.go:297] selected driver: kvm2
	I0328 01:00:05.765431 1131600 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:00:05.765564 1131600 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 01:00:05.766282 1131600 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 01:00:05.766391 1131600 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18485-1069254/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0328 01:00:05.783130 1131600 install.go:137] /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0328 01:00:05.783602 1131600 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:00:05.783724 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:00:05.783745 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:00:05.783795 1131600 start.go:340] cluster config:
	{Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:00:05.783949 1131600 iso.go:125] acquiring lock: {Name:mk3da1fa7d63a581c817f327d86a827697458cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 01:00:05.785871 1131600 out.go:177] * Starting "default-k8s-diff-port-283961" primary control-plane node in "default-k8s-diff-port-283961" cluster
	I0328 01:00:02.570474 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:05.787210 1131600 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 01:00:05.787259 1131600 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0328 01:00:05.787272 1131600 cache.go:56] Caching tarball of preloaded images
	I0328 01:00:05.787364 1131600 preload.go:173] Found /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0328 01:00:05.787376 1131600 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0328 01:00:05.787509 1131600 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/config.json ...
	I0328 01:00:05.787742 1131600 start.go:360] acquireMachinesLock for default-k8s-diff-port-283961: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 01:00:08.650481 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:11.722571 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:17.802536 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:20.874568 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:26.954473 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:30.026674 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:36.106489 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:39.178555 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:45.258539 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:48.330581 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:54.410577 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:57.482545 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:03.562558 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:06.634602 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:12.714559 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:15.786597 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:21.866544 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:24.938619 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:31.018631 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:34.090562 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:40.170864 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:43.242565 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:49.322492 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:52.394572 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:58.474562 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:01.546621 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:07.626510 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:10.698534 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:13.703348 1130949 start.go:364] duration metric: took 4m25.677777198s to acquireMachinesLock for "embed-certs-808809"
	I0328 01:02:13.703416 1130949 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:02:13.703429 1130949 fix.go:54] fixHost starting: 
	I0328 01:02:13.703888 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:02:13.703923 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:02:13.719480 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44835
	I0328 01:02:13.719968 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:02:13.720450 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:02:13.720475 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:02:13.720774 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:02:13.721011 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:13.721182 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:02:13.722796 1130949 fix.go:112] recreateIfNeeded on embed-certs-808809: state=Stopped err=<nil>
	I0328 01:02:13.722828 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	W0328 01:02:13.722972 1130949 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:02:13.724895 1130949 out.go:177] * Restarting existing kvm2 VM for "embed-certs-808809" ...
	I0328 01:02:13.700647 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:02:13.700689 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:02:13.701054 1130827 buildroot.go:166] provisioning hostname "no-preload-248059"
	I0328 01:02:13.701085 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:02:13.701344 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:02:13.703200 1130827 machine.go:97] duration metric: took 4m37.399616994s to provisionDockerMachine
	I0328 01:02:13.703243 1130827 fix.go:56] duration metric: took 4m37.42352766s for fixHost
	I0328 01:02:13.703249 1130827 start.go:83] releasing machines lock for "no-preload-248059", held for 4m37.423563163s
	W0328 01:02:13.703274 1130827 start.go:713] error starting host: provision: host is not running
	W0328 01:02:13.703400 1130827 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0328 01:02:13.703411 1130827 start.go:728] Will try again in 5 seconds ...
	I0328 01:02:13.726437 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Start
	I0328 01:02:13.726574 1130949 main.go:141] libmachine: (embed-certs-808809) Ensuring networks are active...
	I0328 01:02:13.727407 1130949 main.go:141] libmachine: (embed-certs-808809) Ensuring network default is active
	I0328 01:02:13.727667 1130949 main.go:141] libmachine: (embed-certs-808809) Ensuring network mk-embed-certs-808809 is active
	I0328 01:02:13.728050 1130949 main.go:141] libmachine: (embed-certs-808809) Getting domain xml...
	I0328 01:02:13.728836 1130949 main.go:141] libmachine: (embed-certs-808809) Creating domain...
	I0328 01:02:14.931757 1130949 main.go:141] libmachine: (embed-certs-808809) Waiting to get IP...
	I0328 01:02:14.932921 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:14.933298 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:14.933396 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:14.933294 1131950 retry.go:31] will retry after 279.257708ms: waiting for machine to come up
	I0328 01:02:15.213830 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:15.214439 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:15.214472 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:15.214415 1131950 retry.go:31] will retry after 387.406107ms: waiting for machine to come up
	I0328 01:02:15.603078 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:15.603464 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:15.603497 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:15.603431 1131950 retry.go:31] will retry after 466.553599ms: waiting for machine to come up
	I0328 01:02:16.072165 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:16.072702 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:16.072732 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:16.072643 1131950 retry.go:31] will retry after 375.428381ms: waiting for machine to come up
	I0328 01:02:16.449155 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:16.449614 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:16.449652 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:16.449553 1131950 retry.go:31] will retry after 466.238903ms: waiting for machine to come up
	I0328 01:02:16.917246 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:16.917697 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:16.917723 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:16.917633 1131950 retry.go:31] will retry after 772.819544ms: waiting for machine to come up
	I0328 01:02:17.691645 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:17.692121 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:17.692151 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:17.692071 1131950 retry.go:31] will retry after 1.19065976s: waiting for machine to come up
	I0328 01:02:18.704949 1130827 start.go:360] acquireMachinesLock for no-preload-248059: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 01:02:18.884525 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:18.885019 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:18.885044 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:18.884980 1131950 retry.go:31] will retry after 1.434726863s: waiting for machine to come up
	I0328 01:02:20.321473 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:20.322009 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:20.322035 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:20.321951 1131950 retry.go:31] will retry after 1.275277555s: waiting for machine to come up
	I0328 01:02:21.599454 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:21.600049 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:21.600074 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:21.599982 1131950 retry.go:31] will retry after 1.852516502s: waiting for machine to come up
	I0328 01:02:23.455282 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:23.455760 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:23.455830 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:23.455746 1131950 retry.go:31] will retry after 2.056736141s: waiting for machine to come up
	I0328 01:02:25.514112 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:25.514538 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:25.514569 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:25.514492 1131950 retry.go:31] will retry after 2.711520437s: waiting for machine to come up
	I0328 01:02:32.751719 1131323 start.go:364] duration metric: took 3m27.302408957s to acquireMachinesLock for "old-k8s-version-986088"
	I0328 01:02:32.751823 1131323 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:02:32.751833 1131323 fix.go:54] fixHost starting: 
	I0328 01:02:32.752289 1131323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:02:32.752326 1131323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:02:32.770119 1131323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45807
	I0328 01:02:32.770723 1131323 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:02:32.771352 1131323 main.go:141] libmachine: Using API Version  1
	I0328 01:02:32.771380 1131323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:02:32.771790 1131323 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:02:32.772020 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:32.772206 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetState
	I0328 01:02:32.773947 1131323 fix.go:112] recreateIfNeeded on old-k8s-version-986088: state=Stopped err=<nil>
	I0328 01:02:32.773980 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	W0328 01:02:32.774166 1131323 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:02:32.776416 1131323 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-986088" ...
	I0328 01:02:28.229576 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:28.229970 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:28.230000 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:28.229920 1131950 retry.go:31] will retry after 3.231405371s: waiting for machine to come up
	I0328 01:02:31.463477 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.463884 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has current primary IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.463902 1130949 main.go:141] libmachine: (embed-certs-808809) Found IP for machine: 192.168.72.210
	I0328 01:02:31.463915 1130949 main.go:141] libmachine: (embed-certs-808809) Reserving static IP address...
	I0328 01:02:31.464394 1130949 main.go:141] libmachine: (embed-certs-808809) Reserved static IP address: 192.168.72.210
	I0328 01:02:31.464413 1130949 main.go:141] libmachine: (embed-certs-808809) Waiting for SSH to be available...
	I0328 01:02:31.464439 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "embed-certs-808809", mac: "52:54:00:60:d4:d2", ip: "192.168.72.210"} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.464464 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | skip adding static IP to network mk-embed-certs-808809 - found existing host DHCP lease matching {name: "embed-certs-808809", mac: "52:54:00:60:d4:d2", ip: "192.168.72.210"}
	I0328 01:02:31.464480 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Getting to WaitForSSH function...
	I0328 01:02:31.466488 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.466876 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.466916 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.467054 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Using SSH client type: external
	I0328 01:02:31.467085 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa (-rw-------)
	I0328 01:02:31.467124 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:02:31.467138 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | About to run SSH command:
	I0328 01:02:31.467155 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | exit 0
	I0328 01:02:31.590708 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | SSH cmd err, output: <nil>: 
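
The block above is the guest-reachability probe: a stock ssh client is driven with host-key checking disabled and a bare "exit 0" is run until the command succeeds. Below is a minimal Go sketch of that pattern; the package name, the flag subset, and the three-second poll interval are illustrative assumptions, not minikube's actual WaitForSSH code.

	package provision
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	// waitForSSH polls the guest by running "exit 0" over ssh until it succeeds
	// or the deadline passes. Host-key checking is disabled because the VM's
	// key changes on every re-create (mirrors the options visible in the log).
	func waitForSSH(ip, keyPath string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			cmd := exec.Command("ssh",
				"-o", "StrictHostKeyChecking=no",
				"-o", "UserKnownHostsFile=/dev/null",
				"-o", "ConnectTimeout=10",
				"-i", keyPath,
				"docker@"+ip,
				"exit 0")
			if err := cmd.Run(); err == nil {
				return nil // guest answered: SSH is available
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("ssh to %s not available after %s", ip, timeout)
			}
			time.Sleep(3 * time.Second) // assumed poll interval
		}
	}
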
	I0328 01:02:31.591111 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetConfigRaw
	I0328 01:02:31.591959 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:31.594592 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.595075 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.595114 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.595364 1130949 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/config.json ...
	I0328 01:02:31.595634 1130949 machine.go:94] provisionDockerMachine start ...
	I0328 01:02:31.595656 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:31.595901 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.598184 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.598529 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.598556 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.598681 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:31.598851 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.599012 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.599163 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:31.599333 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:31.599604 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:31.599619 1130949 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:02:31.703241 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:02:31.703272 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetMachineName
	I0328 01:02:31.703575 1130949 buildroot.go:166] provisioning hostname "embed-certs-808809"
	I0328 01:02:31.703602 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetMachineName
	I0328 01:02:31.703779 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.706495 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.706777 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.706799 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.706978 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:31.707146 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.707334 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.707580 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:31.707765 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:31.707985 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:31.708004 1130949 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-808809 && echo "embed-certs-808809" | sudo tee /etc/hostname
	I0328 01:02:31.821578 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-808809
	
	I0328 01:02:31.821608 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.824412 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.824791 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.824825 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.825030 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:31.825253 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.825432 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.825589 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:31.825758 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:31.825950 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:31.825976 1130949 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-808809' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-808809/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-808809' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:02:31.937655 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
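
The shell fragment above is an idempotent /etc/hosts edit: do nothing if the hostname is already mapped, otherwise rewrite an existing 127.0.1.1 entry or append one. A small Go helper that renders the same snippet for an arbitrary hostname (illustrative only; the function and package names are assumed):

	package provision
	
	import "fmt"
	
	// hostsSnippet renders the guarded /etc/hosts edit seen in the log: skip if
	// the hostname is already present, otherwise rewrite an existing 127.0.1.1
	// entry or append a new one.
	func hostsSnippet(hostname string) string {
		return fmt.Sprintf(`
			if ! grep -xq '.*\s%[1]s' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
				else
					echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
				fi
			fi`, hostname)
	}
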
	I0328 01:02:31.937701 1130949 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:02:31.937728 1130949 buildroot.go:174] setting up certificates
	I0328 01:02:31.937742 1130949 provision.go:84] configureAuth start
	I0328 01:02:31.937754 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetMachineName
	I0328 01:02:31.938093 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:31.940874 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.941328 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.941360 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.941580 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.944250 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.944580 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.944610 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.944828 1130949 provision.go:143] copyHostCerts
	I0328 01:02:31.944910 1130949 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:02:31.944926 1130949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:02:31.945006 1130949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:02:31.945151 1130949 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:02:31.945162 1130949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:02:31.945205 1130949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:02:31.945285 1130949 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:02:31.945294 1130949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:02:31.945330 1130949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:02:31.945400 1130949 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.embed-certs-808809 san=[127.0.0.1 192.168.72.210 embed-certs-808809 localhost minikube]
	I0328 01:02:32.070925 1130949 provision.go:177] copyRemoteCerts
	I0328 01:02:32.071007 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:02:32.071067 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.073876 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.074295 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.074339 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.074541 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.074754 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.074931 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.075091 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.158945 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:02:32.184903 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0328 01:02:32.210411 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:02:32.235788 1130949 provision.go:87] duration metric: took 298.03126ms to configureAuth
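
configureAuth above regenerates the server certificate with the SAN list shown (127.0.0.1, the node IP, the machine name, localhost, minikube) and copies it to the guest. As a hedged illustration of what that SAN list buys, the following standard-library Go snippet checks that a PEM server certificate actually covers each required name or IP; it is not minikube code.

	package provision
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)
	
	// checkSANs parses a PEM server certificate and verifies it is valid for
	// every name or IP in sans (the list the provisioner logs above). It
	// reports the first name the certificate does not cover.
	func checkSANs(certPath string, sans []string) error {
		data, err := os.ReadFile(certPath)
		if err != nil {
			return err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return fmt.Errorf("no PEM block in %s", certPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return err
		}
		for _, name := range sans {
			// VerifyHostname handles both DNS names and IP SANs.
			if err := cert.VerifyHostname(name); err != nil {
				return fmt.Errorf("cert %s does not cover %q: %w", certPath, name, err)
			}
		}
		return nil
	}
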
	I0328 01:02:32.235827 1130949 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:02:32.236116 1130949 config.go:182] Loaded profile config "embed-certs-808809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:02:32.236336 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.239186 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.239520 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.239555 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.239782 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.240036 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.240257 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.240431 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.240633 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:32.240836 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:32.240862 1130949 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:02:32.513263 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:02:32.513298 1130949 machine.go:97] duration metric: took 917.647337ms to provisionDockerMachine
	I0328 01:02:32.513314 1130949 start.go:293] postStartSetup for "embed-certs-808809" (driver="kvm2")
	I0328 01:02:32.513326 1130949 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:02:32.513365 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.513727 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:02:32.513770 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.516906 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.517382 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.517425 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.517603 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.517831 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.517989 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.518115 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.600013 1130949 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:02:32.604953 1130949 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:02:32.604983 1130949 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:02:32.605057 1130949 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:02:32.605148 1130949 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:02:32.605265 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:02:32.617685 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:02:32.646415 1130949 start.go:296] duration metric: took 133.084551ms for postStartSetup
	I0328 01:02:32.646462 1130949 fix.go:56] duration metric: took 18.943034019s for fixHost
	I0328 01:02:32.646490 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.649346 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.649686 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.649717 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.649864 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.650191 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.650444 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.650637 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.650844 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:32.651036 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:32.651069 1130949 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0328 01:02:32.751522 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587752.718800758
	
	I0328 01:02:32.751547 1130949 fix.go:216] guest clock: 1711587752.718800758
	I0328 01:02:32.751556 1130949 fix.go:229] Guest: 2024-03-28 01:02:32.718800758 +0000 UTC Remote: 2024-03-28 01:02:32.646466137 +0000 UTC m=+284.780134501 (delta=72.334621ms)
	I0328 01:02:32.751598 1130949 fix.go:200] guest clock delta is within tolerance: 72.334621ms
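
The guest-clock check above runs date +%s.%N inside the VM and compares it with the host clock, accepting the machine when the delta stays inside a tolerance (about 72ms here). A rough Go sketch of that comparison follows; the function name is assumed and the tolerance value itself is not shown in this excerpt.

	package provision
	
	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)
	
	// clockDelta parses the guest's `date +%s.%N` output (e.g. "1711587752.718800758")
	// and returns how far the guest clock is from the local clock.
	func clockDelta(guestOut string) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, fmt.Errorf("parsing seconds: %w", err)
		}
		var nsec int64
		if len(parts) == 2 {
			// Pad/truncate to 9 digits so "7188" means 718800000 ns, not 7188 ns.
			frac := (parts[1] + "000000000")[:9]
			nsec, err = strconv.ParseInt(frac, 10, 64)
			if err != nil {
				return 0, fmt.Errorf("parsing nanoseconds: %w", err)
			}
		}
		delta := time.Since(time.Unix(sec, nsec))
		if delta < 0 {
			delta = -delta
		}
		return delta, nil
	}
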
	I0328 01:02:32.751610 1130949 start.go:83] releasing machines lock for "embed-certs-808809", held for 19.048217918s
	I0328 01:02:32.751638 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.751953 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:32.754795 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.755205 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.755240 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.755454 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.756111 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.756320 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.756412 1130949 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:02:32.756475 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.756612 1130949 ssh_runner.go:195] Run: cat /version.json
	I0328 01:02:32.756646 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.759337 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.759468 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.759788 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.759808 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.759845 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.759866 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.760009 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.760018 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.760214 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.760222 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.760364 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.760532 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.760639 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.760698 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.840137 1130949 ssh_runner.go:195] Run: systemctl --version
	I0328 01:02:32.874039 1130949 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:02:33.020534 1130949 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:02:33.027141 1130949 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:02:33.027213 1130949 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:02:33.043738 1130949 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:02:33.043767 1130949 start.go:494] detecting cgroup driver to use...
	I0328 01:02:33.043840 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:02:33.064332 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:02:33.081926 1130949 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:02:33.082016 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:02:33.097179 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:02:33.113157 1130949 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:02:33.233183 1130949 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:02:33.374061 1130949 docker.go:233] disabling docker service ...
	I0328 01:02:33.374145 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:02:33.389813 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:02:33.403439 1130949 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:02:33.546146 1130949 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:02:33.706968 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:02:33.722279 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:02:33.742578 1130949 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 01:02:33.742652 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.754966 1130949 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:02:33.755027 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.767170 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.779960 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.792448 1130949 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:02:33.804912 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.818038 1130949 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.838794 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.852157 1130949 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:02:33.862921 1130949 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:02:33.862981 1130949 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:02:33.880973 1130949 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:02:33.892698 1130949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:02:34.029903 1130949 ssh_runner.go:195] Run: sudo systemctl restart crio
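
The run above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroupfs cgroup manager, conmon_cgroup, the unprivileged-port sysctl), loads br_netfilter, enables IPv4 forwarding, and restarts CRI-O. The Go sketch below only reassembles that command sequence as data; minikube actually issues each command through its ssh_runner, so treat this as an illustration of the order, not of the implementation.

	package provision
	
	import "fmt"
	
	// crioConfigCommands returns, in order, the remote commands the log above
	// runs to point CRI-O at the right pause image and cgroup driver.
	// Simplified reconstruction for illustration only.
	func crioConfigCommands(pauseImage, cgroupDriver string) []string {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		return []string{
			fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
			fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupDriver, conf),
			fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
			fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
			"sudo modprobe br_netfilter",
			"sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'",
			"sudo systemctl daemon-reload",
			"sudo systemctl restart crio",
		}
	}
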
	I0328 01:02:34.170977 1130949 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:02:34.171074 1130949 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:02:34.176652 1130949 start.go:562] Will wait 60s for crictl version
	I0328 01:02:34.176736 1130949 ssh_runner.go:195] Run: which crictl
	I0328 01:02:34.180993 1130949 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:02:34.224564 1130949 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:02:34.224675 1130949 ssh_runner.go:195] Run: crio --version
	I0328 01:02:34.254457 1130949 ssh_runner.go:195] Run: crio --version
	I0328 01:02:34.287281 1130949 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0328 01:02:32.778280 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .Start
	I0328 01:02:32.778470 1131323 main.go:141] libmachine: (old-k8s-version-986088) Ensuring networks are active...
	I0328 01:02:32.779179 1131323 main.go:141] libmachine: (old-k8s-version-986088) Ensuring network default is active
	I0328 01:02:32.779577 1131323 main.go:141] libmachine: (old-k8s-version-986088) Ensuring network mk-old-k8s-version-986088 is active
	I0328 01:02:32.779982 1131323 main.go:141] libmachine: (old-k8s-version-986088) Getting domain xml...
	I0328 01:02:32.780732 1131323 main.go:141] libmachine: (old-k8s-version-986088) Creating domain...
	I0328 01:02:34.066287 1131323 main.go:141] libmachine: (old-k8s-version-986088) Waiting to get IP...
	I0328 01:02:34.067193 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.067618 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.067684 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.067586 1132067 retry.go:31] will retry after 291.270379ms: waiting for machine to come up
	I0328 01:02:34.360203 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.360690 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.360721 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.360638 1132067 retry.go:31] will retry after 234.968456ms: waiting for machine to come up
	I0328 01:02:34.597291 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.597818 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.597849 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.597750 1132067 retry.go:31] will retry after 382.522593ms: waiting for machine to come up
	I0328 01:02:34.982502 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.983176 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.983205 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.983133 1132067 retry.go:31] will retry after 436.332635ms: waiting for machine to come up
	I0328 01:02:34.288748 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:34.292122 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:34.292516 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:34.292556 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:34.292869 1130949 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0328 01:02:34.298738 1130949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:02:34.313529 1130949 kubeadm.go:877] updating cluster {Name:embed-certs-808809 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-808809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:02:34.313698 1130949 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 01:02:34.313762 1130949 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:02:34.356518 1130949 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0328 01:02:34.356614 1130949 ssh_runner.go:195] Run: which lz4
	I0328 01:02:34.361492 1130949 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0328 01:02:34.366053 1130949 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 01:02:34.366090 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0328 01:02:36.024197 1130949 crio.go:462] duration metric: took 1.662731937s to copy over tarball
	I0328 01:02:36.024287 1130949 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 01:02:35.421623 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:35.422164 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:35.422198 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:35.422135 1132067 retry.go:31] will retry after 700.861268ms: waiting for machine to come up
	I0328 01:02:36.124589 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:36.125001 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:36.125031 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:36.124948 1132067 retry.go:31] will retry after 932.342478ms: waiting for machine to come up
	I0328 01:02:37.058954 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:37.059390 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:37.059424 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:37.059332 1132067 retry.go:31] will retry after 1.163248691s: waiting for machine to come up
	I0328 01:02:38.224574 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:38.225019 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:38.225053 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:38.224959 1132067 retry.go:31] will retry after 1.13372539s: waiting for machine to come up
	I0328 01:02:39.360393 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:39.360953 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:39.360984 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:39.360906 1132067 retry.go:31] will retry after 1.793272671s: waiting for machine to come up
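
Both machines are waited on with the same pattern: poll libvirt for a DHCP lease and, on failure, sleep for a growing, jittered interval (the "will retry after …" lines). A generic Go sketch of that wait loop follows; the base delay, cap, and jitter choice are assumptions for illustration, not what minikube's retry helper actually uses.

	package provision
	
	import (
		"fmt"
		"math/rand"
		"time"
	)
	
	// waitForIP polls lookup until it returns an address, sleeping for a
	// roughly exponential, jittered interval between attempts - the same
	// shape as the "will retry after ..." lines above.
	func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, ok := lookup(); ok {
				return ip, nil
			}
			// Jitter keeps parallel waiters (embed-certs and old-k8s-version
			// here) from polling libvirt in lockstep.
			time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff))))
			if backoff < 4*time.Second {
				backoff *= 2
			}
		}
		return "", fmt.Errorf("machine did not get an IP within %s", timeout)
	}
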
	I0328 01:02:38.420741 1130949 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.396415089s)
	I0328 01:02:38.420788 1130949 crio.go:469] duration metric: took 2.39655808s to extract the tarball
	I0328 01:02:38.420797 1130949 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0328 01:02:38.459869 1130949 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:02:38.505999 1130949 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 01:02:38.506030 1130949 cache_images.go:84] Images are preloaded, skipping loading
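
The sequence above is the preload fast path: query the runtime's image list and, only if the expected images are missing, copy the preloaded tarball over and unpack it into /var before re-checking. Below is a hedged Go sketch of the check half; it matches on the raw `crictl images --output json` output rather than committing to crictl's JSON schema, which minikube's real check decodes properly.

	package provision
	
	import (
		"bytes"
		"os/exec"
		"strings"
	)
	
	// imagesPreloaded runs `crictl images --output json` on the node and
	// reports whether every required tag shows up in the output.
	func imagesPreloaded(required []string) (bool, error) {
		var out bytes.Buffer
		cmd := exec.Command("sudo", "crictl", "images", "--output", "json")
		cmd.Stdout = &out
		if err := cmd.Run(); err != nil {
			return false, err
		}
		for _, tag := range required {
			if !strings.Contains(out.String(), tag) {
				return false, nil // e.g. "registry.k8s.io/kube-apiserver:v1.29.3" missing
			}
		}
		return true, nil
	}
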
	I0328 01:02:38.506039 1130949 kubeadm.go:928] updating node { 192.168.72.210 8443 v1.29.3 crio true true} ...
	I0328 01:02:38.506185 1130949 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-808809 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-808809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
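
The kubelet unit above is delivered as a systemd drop-in: the empty "ExecStart=" line deliberately clears the base unit's command before the fully parameterised one is substituted. A small Go helper that renders an equivalent drop-in (function name and layout assumed for illustration):

	package provision
	
	import "fmt"
	
	// kubeletDropIn renders a systemd drop-in like the one logged above. The
	// empty ExecStart= resets the ExecStart inherited from kubelet.service.
	func kubeletDropIn(version, hostname, nodeIP string) string {
		return fmt.Sprintf(`[Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/%[1]s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%[2]s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%[3]s
	
	[Install]
	`, version, hostname, nodeIP)
	}
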
	I0328 01:02:38.506301 1130949 ssh_runner.go:195] Run: crio config
	I0328 01:02:38.551608 1130949 cni.go:84] Creating CNI manager for ""
	I0328 01:02:38.551633 1130949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:02:38.551646 1130949 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:02:38.551673 1130949 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.210 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-808809 NodeName:embed-certs-808809 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 01:02:38.551813 1130949 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-808809"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 01:02:38.551881 1130949 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 01:02:38.562640 1130949 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:02:38.562732 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:02:38.572870 1130949 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0328 01:02:38.590866 1130949 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:02:38.608302 1130949 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
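
The kubeadm/kubelet/kube-proxy YAML earlier in this block is rendered from the logged kubeadm options struct and shipped to the node as kubeadm.yaml.new. Below is a trimmed-down Go sketch of that rendering step using text/template; it wires through only a handful of the fields and is not minikube's real template.

	package provision
	
	import (
		"bytes"
		"text/template"
	)
	
	// kubeadmTmpl is a stand-in for the template minikube fills in to produce
	// the ClusterConfiguration shown above.
	var kubeadmTmpl = template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	kubernetesVersion: {{.KubernetesVersion}}
	clusterName: mk
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "{{.NodeIP}}"]
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceCIDR}}
	`))
	
	type kubeadmParams struct {
		KubernetesVersion string
		NodeIP            string
		PodSubnet         string
		ServiceCIDR       string
	}
	
	// renderKubeadmConfig fills the template with the values seen in this run.
	func renderKubeadmConfig(p kubeadmParams) (string, error) {
		var buf bytes.Buffer
		if err := kubeadmTmpl.Execute(&buf, p); err != nil {
			return "", err
		}
		return buf.String(), nil
	}
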
	I0328 01:02:38.626925 1130949 ssh_runner.go:195] Run: grep 192.168.72.210	control-plane.minikube.internal$ /etc/hosts
	I0328 01:02:38.631111 1130949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.210	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:02:38.644528 1130949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:02:38.785485 1130949 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:02:38.804087 1130949 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809 for IP: 192.168.72.210
	I0328 01:02:38.804113 1130949 certs.go:194] generating shared ca certs ...
	I0328 01:02:38.804132 1130949 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:02:38.804285 1130949 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:02:38.804326 1130949 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:02:38.804363 1130949 certs.go:256] generating profile certs ...
	I0328 01:02:38.804505 1130949 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/client.key
	I0328 01:02:38.804588 1130949 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/apiserver.key.bdc16448
	I0328 01:02:38.804638 1130949 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/proxy-client.key
	I0328 01:02:38.804798 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:02:38.804829 1130949 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:02:38.804836 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:02:38.804860 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:02:38.804882 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:02:38.804902 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:02:38.804943 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:02:38.805829 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:02:38.864847 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:02:38.899197 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:02:38.926734 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:02:38.958277 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0328 01:02:38.997201 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:02:39.023136 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:02:39.048459 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:02:39.074052 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:02:39.099326 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:02:39.124775 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:02:39.149638 1130949 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:02:39.169169 1130949 ssh_runner.go:195] Run: openssl version
	I0328 01:02:39.175948 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:02:39.188255 1130949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:02:39.194296 1130949 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:02:39.194374 1130949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:02:39.201138 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:02:39.213554 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:02:39.226474 1130949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:02:39.232074 1130949 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:02:39.232149 1130949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:02:39.238733 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:02:39.250983 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:02:39.263746 1130949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:02:39.268967 1130949 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:02:39.269038 1130949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:02:39.275589 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 01:02:39.287731 1130949 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:02:39.292985 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:02:39.300366 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:02:39.307241 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:02:39.314522 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:02:39.321070 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:02:39.327777 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
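[editor's note] The run of `openssl x509 -noout -in ... -checkend 86400` commands above probes whether each control-plane certificate expires within the next 24 hours before reuse. As an illustrative sketch only (not minikube's actual implementation; the file path below is a placeholder), the same check can be expressed in Go with crypto/x509:

// Illustrative only: rough Go equivalent of `openssl x509 -checkend 86400`.
// The certificate path is a placeholder, not taken from this test run.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside the given window.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if "now + window" falls past the certificate's NotAfter, i.e. it expires within the window.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}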
	I0328 01:02:39.334174 1130949 kubeadm.go:391] StartCluster: {Name:embed-certs-808809 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-808809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:02:39.334310 1130949 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:02:39.334367 1130949 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:02:39.376035 1130949 cri.go:89] found id: ""
	I0328 01:02:39.376145 1130949 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:02:39.387349 1130949 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:02:39.387377 1130949 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:02:39.387385 1130949 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:02:39.387469 1130949 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:02:39.397918 1130949 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:02:39.399122 1130949 kubeconfig.go:125] found "embed-certs-808809" server: "https://192.168.72.210:8443"
	I0328 01:02:39.401219 1130949 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:02:39.411475 1130949 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.210
	I0328 01:02:39.411562 1130949 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:02:39.411583 1130949 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:02:39.411650 1130949 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:02:39.449529 1130949 cri.go:89] found id: ""
	I0328 01:02:39.449638 1130949 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:02:39.468553 1130949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:02:39.479489 1130949 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:02:39.479522 1130949 kubeadm.go:156] found existing configuration files:
	
	I0328 01:02:39.479589 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:02:39.489619 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:02:39.489689 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:02:39.499726 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:02:39.509362 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:02:39.509447 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:02:39.519262 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:02:39.528858 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:02:39.528920 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:02:39.538784 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:02:39.548517 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:02:39.548593 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:02:39.559931 1130949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:02:39.574178 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:39.706243 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.342144 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.559108 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.636713 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.743171 1130949 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:02:40.743269 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:02:41.243401 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:02:41.743363 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:02:41.776504 1130949 api_server.go:72] duration metric: took 1.033329844s to wait for apiserver process to appear ...
	I0328 01:02:41.776547 1130949 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:02:41.776574 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:41.777140 1130949 api_server.go:269] stopped: https://192.168.72.210:8443/healthz: Get "https://192.168.72.210:8443/healthz": dial tcp 192.168.72.210:8443: connect: connection refused
	I0328 01:02:42.276690 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:41.156898 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:41.157309 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:41.157336 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:41.157263 1132067 retry.go:31] will retry after 1.863775673s: waiting for machine to come up
	I0328 01:02:43.023074 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:43.023470 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:43.023507 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:43.023419 1132067 retry.go:31] will retry after 2.73600503s: waiting for machine to come up
	I0328 01:02:44.743286 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:02:44.743383 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:02:44.743412 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:44.822370 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:02:44.822416 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:02:44.822436 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:44.847406 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:02:44.847462 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:02:45.276899 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:45.281884 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:02:45.281919 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:02:45.777495 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:45.783673 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:02:45.783704 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:02:46.277372 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:46.282281 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 200:
	ok
	I0328 01:02:46.291242 1130949 api_server.go:141] control plane version: v1.29.3
	I0328 01:02:46.291287 1130949 api_server.go:131] duration metric: took 4.514730698s to wait for apiserver health ...
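[editor's note] The block above shows api_server.go polling https://192.168.72.210:8443/healthz, tolerating the initial connection-refused, 403 and 500 responses until a 200 is returned. As a minimal sketch only (the endpoint and timeout below are placeholders; minikube's real loop uses the cluster's client certificates rather than skipping TLS verification), a comparable polling loop in Go looks like this:

// Illustrative only: simplified healthz polling loop in the spirit of the checks logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serves a self-signed certificate during bring-up;
		// verification is skipped here purely for illustration.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: apiserver is healthy
			}
		}
		time.Sleep(500 * time.Millisecond) // retry until the deadline
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.72.210:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}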
	I0328 01:02:46.291301 1130949 cni.go:84] Creating CNI manager for ""
	I0328 01:02:46.291310 1130949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:02:46.293461 1130949 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:02:46.294971 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:02:46.312955 1130949 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:02:46.345653 1130949 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:02:46.355470 1130949 system_pods.go:59] 8 kube-system pods found
	I0328 01:02:46.355506 1130949 system_pods.go:61] "coredns-76f75df574-pr5d8" [90a6f3d5-6f33-4c41-804b-4b20c518aa23] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:02:46.355512 1130949 system_pods.go:61] "etcd-embed-certs-808809" [93b6b8ee-f83f-4848-b2c5-912ec07acd52] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 01:02:46.355519 1130949 system_pods.go:61] "kube-apiserver-embed-certs-808809" [22eb788f-4647-4a07-b5bf-ecdd54c28fcd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 01:02:46.355530 1130949 system_pods.go:61] "kube-controller-manager-embed-certs-808809" [83fecd9f-c0de-4afe-b5b5-7c04bd3adc20] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 01:02:46.355545 1130949 system_pods.go:61] "kube-proxy-qwzpg" [57a814c6-54c8-4fa7-b7d7-bcdd4bbc91d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:02:46.355553 1130949 system_pods.go:61] "kube-scheduler-embed-certs-808809" [0b229d84-43fb-45ee-8d49-39204812d490] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 01:02:46.355568 1130949 system_pods.go:61] "metrics-server-57f55c9bc5-swsxp" [4b20e133-3054-4806-9b7f-44d8c8c35a4d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:02:46.355580 1130949 system_pods.go:61] "storage-provisioner" [59303061-19e3-4aed-8753-804988a2a44e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:02:46.355590 1130949 system_pods.go:74] duration metric: took 9.908316ms to wait for pod list to return data ...
	I0328 01:02:46.355603 1130949 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:02:46.358936 1130949 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:02:46.358987 1130949 node_conditions.go:123] node cpu capacity is 2
	I0328 01:02:46.359006 1130949 node_conditions.go:105] duration metric: took 3.394695ms to run NodePressure ...
	I0328 01:02:46.359054 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:46.686479 1130949 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 01:02:46.692502 1130949 kubeadm.go:733] kubelet initialised
	I0328 01:02:46.692526 1130949 kubeadm.go:734] duration metric: took 6.022393ms waiting for restarted kubelet to initialise ...
	I0328 01:02:46.692534 1130949 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:02:46.699146 1130949 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-pr5d8" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:45.762440 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:45.762891 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:45.762915 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:45.762845 1132067 retry.go:31] will retry after 2.201941476s: waiting for machine to come up
	I0328 01:02:47.966601 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:47.967196 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:47.967237 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:47.967144 1132067 retry.go:31] will retry after 4.122216816s: waiting for machine to come up
	I0328 01:02:48.709890 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:51.207697 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:53.391471 1131600 start.go:364] duration metric: took 2m47.603687739s to acquireMachinesLock for "default-k8s-diff-port-283961"
	I0328 01:02:53.391553 1131600 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:02:53.391565 1131600 fix.go:54] fixHost starting: 
	I0328 01:02:53.391980 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:02:53.392031 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:02:53.409035 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34073
	I0328 01:02:53.409556 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:02:53.410105 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:02:53.410136 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:02:53.410492 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:02:53.410734 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:02:53.410903 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:02:53.412710 1131600 fix.go:112] recreateIfNeeded on default-k8s-diff-port-283961: state=Stopped err=<nil>
	I0328 01:02:53.412739 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	W0328 01:02:53.412927 1131600 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:02:53.414773 1131600 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-283961" ...
	I0328 01:02:52.091210 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.091759 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has current primary IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.091794 1131323 main.go:141] libmachine: (old-k8s-version-986088) Found IP for machine: 192.168.50.174
	I0328 01:02:52.091841 1131323 main.go:141] libmachine: (old-k8s-version-986088) Reserving static IP address...
	I0328 01:02:52.092295 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "old-k8s-version-986088", mac: "52:54:00:f6:94:40", ip: "192.168.50.174"} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.092321 1131323 main.go:141] libmachine: (old-k8s-version-986088) Reserved static IP address: 192.168.50.174
	I0328 01:02:52.092343 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | skip adding static IP to network mk-old-k8s-version-986088 - found existing host DHCP lease matching {name: "old-k8s-version-986088", mac: "52:54:00:f6:94:40", ip: "192.168.50.174"}
	I0328 01:02:52.092356 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | Getting to WaitForSSH function...
	I0328 01:02:52.092373 1131323 main.go:141] libmachine: (old-k8s-version-986088) Waiting for SSH to be available...
	I0328 01:02:52.094682 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.095012 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.095033 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.095158 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | Using SSH client type: external
	I0328 01:02:52.095180 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa (-rw-------)
	I0328 01:02:52.095208 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:02:52.095218 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | About to run SSH command:
	I0328 01:02:52.095232 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | exit 0
	I0328 01:02:52.218494 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | SSH cmd err, output: <nil>: 
	I0328 01:02:52.218983 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetConfigRaw
	I0328 01:02:52.219663 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:52.222349 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.222791 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.222823 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.223191 1131323 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/config.json ...
	I0328 01:02:52.223388 1131323 machine.go:94] provisionDockerMachine start ...
	I0328 01:02:52.223409 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:52.223605 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.225686 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.225999 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.226038 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.226131 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.226341 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.226507 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.226633 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.226802 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.227078 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.227095 1131323 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:02:52.327218 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:02:52.327249 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 01:02:52.327515 1131323 buildroot.go:166] provisioning hostname "old-k8s-version-986088"
	I0328 01:02:52.327542 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 01:02:52.327754 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.330253 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.330661 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.330691 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.330827 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.331048 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.331258 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.331406 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.331593 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.331772 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.331783 1131323 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-986088 && echo "old-k8s-version-986088" | sudo tee /etc/hostname
	I0328 01:02:52.445910 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-986088
	
	I0328 01:02:52.445943 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.449023 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.449320 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.449358 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.449595 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.449810 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.449970 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.450116 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.450310 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.450572 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.450640 1131323 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-986088' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-986088/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-986088' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:02:52.567493 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:02:52.567529 1131323 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:02:52.567559 1131323 buildroot.go:174] setting up certificates
	I0328 01:02:52.567573 1131323 provision.go:84] configureAuth start
	I0328 01:02:52.567587 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 01:02:52.567944 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:52.570860 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.571363 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.571400 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.571547 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.574052 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.574483 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.574517 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.574619 1131323 provision.go:143] copyHostCerts
	I0328 01:02:52.574698 1131323 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:02:52.574710 1131323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:02:52.574778 1131323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:02:52.574894 1131323 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:02:52.574908 1131323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:02:52.574985 1131323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:02:52.575086 1131323 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:02:52.575095 1131323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:02:52.575117 1131323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:02:52.575194 1131323 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-986088 san=[127.0.0.1 192.168.50.174 localhost minikube old-k8s-version-986088]
	I0328 01:02:52.688709 1131323 provision.go:177] copyRemoteCerts
	I0328 01:02:52.688776 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:02:52.688809 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.691529 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.691977 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.692024 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.692188 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.692425 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.692620 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.692774 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:52.777200 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0328 01:02:52.808740 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:02:52.836646 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:02:52.862627 1131323 provision.go:87] duration metric: took 295.032419ms to configureAuth
	I0328 01:02:52.862668 1131323 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:02:52.862908 1131323 config.go:182] Loaded profile config "old-k8s-version-986088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0328 01:02:52.863019 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.865838 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.866585 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.866630 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.867271 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.867521 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.867687 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.867826 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.867961 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.868176 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.868194 1131323 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:02:53.154903 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:02:53.154936 1131323 machine.go:97] duration metric: took 931.534047ms to provisionDockerMachine
	I0328 01:02:53.154949 1131323 start.go:293] postStartSetup for "old-k8s-version-986088" (driver="kvm2")
	I0328 01:02:53.154961 1131323 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:02:53.154997 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.155353 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:02:53.155386 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.158072 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.158448 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.158482 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.158612 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.158825 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.158974 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.159102 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:53.243411 1131323 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:02:53.247745 1131323 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:02:53.247769 1131323 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:02:53.247830 1131323 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:02:53.247903 1131323 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:02:53.247990 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:02:53.258574 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:02:53.284249 1131323 start.go:296] duration metric: took 129.2844ms for postStartSetup
	I0328 01:02:53.284300 1131323 fix.go:56] duration metric: took 20.532468979s for fixHost
	I0328 01:02:53.284324 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.287097 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.287505 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.287534 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.287642 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.287874 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.288039 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.288225 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.288439 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:53.288601 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:53.288612 1131323 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0328 01:02:53.391262 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587773.373998758
	
	I0328 01:02:53.391292 1131323 fix.go:216] guest clock: 1711587773.373998758
	I0328 01:02:53.391299 1131323 fix.go:229] Guest: 2024-03-28 01:02:53.373998758 +0000 UTC Remote: 2024-03-28 01:02:53.284304642 +0000 UTC m=+227.998260980 (delta=89.694116ms)
	I0328 01:02:53.391341 1131323 fix.go:200] guest clock delta is within tolerance: 89.694116ms
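The two fix.go lines above compare the guest clock, read over SSH, against the host clock and accept the ~90ms skew. A minimal Go sketch of that comparison follows; the 2-second tolerance constant is an assumption for illustration, the log only states that the delta is "within tolerance".

package main

import (
	"fmt"
	"time"
)

// maxClockDelta is an assumed tolerance for illustration; the log above
// only reports that an ~90ms delta is acceptable.
const maxClockDelta = 2 * time.Second

// clockDeltaWithinTolerance returns the absolute guest/host clock skew and
// whether it falls inside the tolerance, mirroring the fix.go check above.
func clockDeltaWithinTolerance(guest, host time.Time) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= maxClockDelta
}

func main() {
	host := time.Now()
	guest := host.Add(89 * time.Millisecond) // delta reported in the log
	d, ok := clockDeltaWithinTolerance(guest, host)
	fmt.Printf("delta=%v within tolerance: %v\n", d, ok)
}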
	I0328 01:02:53.391346 1131323 start.go:83] releasing machines lock for "old-k8s-version-986088", held for 20.639550927s
	I0328 01:02:53.391377 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.391728 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:53.394421 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.394780 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.394811 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.394932 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.395449 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.395729 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.395828 1131323 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:02:53.395883 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.395985 1131323 ssh_runner.go:195] Run: cat /version.json
	I0328 01:02:53.396014 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.398819 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399010 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399281 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.399320 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399451 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.399550 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.399620 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399640 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.399880 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.399902 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.400065 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.400081 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:53.400245 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.400445 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:53.514453 1131323 ssh_runner.go:195] Run: systemctl --version
	I0328 01:02:53.521123 1131323 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:02:53.678366 1131323 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:02:53.685402 1131323 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:02:53.685473 1131323 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:02:53.702781 1131323 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:02:53.702816 1131323 start.go:494] detecting cgroup driver to use...
	I0328 01:02:53.702900 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:02:53.720343 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:02:53.736749 1131323 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:02:53.736824 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:02:53.761087 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:02:53.779008 1131323 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:02:53.895064 1131323 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:02:54.060741 1131323 docker.go:233] disabling docker service ...
	I0328 01:02:54.060825 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:02:54.079139 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:02:54.093523 1131323 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:02:54.247544 1131323 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:02:54.396392 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:02:54.422612 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:02:54.443759 1131323 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0328 01:02:54.443817 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.459794 1131323 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:02:54.459875 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.472784 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.484963 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.496654 1131323 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:02:54.508382 1131323 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:02:54.518607 1131323 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:02:54.518687 1131323 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:02:54.532356 1131323 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
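The crio.go lines above show an expected fallback: reading the bridge-nf-call-iptables sysctl fails with status 255 because the br_netfilter module is not loaded yet, so minikube loads the module and then enables IPv4 forwarding. A small Go sketch of that retry logic, assuming the helper passed in stands in for the remote ssh_runner (running the real commands needs root on the VM):

package main

import (
	"fmt"
	"os/exec"
)

// ensureBrNetfilter mirrors the fallback in the log: if the sysctl read
// fails (module not loaded yet), load br_netfilter and retry. Command
// names match the log lines above.
func ensureBrNetfilter(run func(name string, args ...string) error) error {
	if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err == nil {
		return nil
	}
	if err := run("modprobe", "br_netfilter"); err != nil {
		return fmt.Errorf("loading br_netfilter: %w", err)
	}
	return run("sysctl", "net.bridge.bridge-nf-call-iptables")
}

func main() {
	err := ensureBrNetfilter(func(name string, args ...string) error {
		return exec.Command(name, args...).Run()
	})
	fmt.Println("br_netfilter ready:", err == nil)
}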
	I0328 01:02:54.544424 1131323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:02:54.685782 1131323 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 01:02:54.847233 1131323 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:02:54.847314 1131323 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:02:54.853148 1131323 start.go:562] Will wait 60s for crictl version
	I0328 01:02:54.853248 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:02:54.857536 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:02:54.901937 1131323 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:02:54.902082 1131323 ssh_runner.go:195] Run: crio --version
	I0328 01:02:54.935571 1131323 ssh_runner.go:195] Run: crio --version
	I0328 01:02:54.971452 1131323 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0328 01:02:54.972964 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:54.976523 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:54.976985 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:54.977017 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:54.977369 1131323 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0328 01:02:54.982326 1131323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:02:54.996239 1131323 kubeadm.go:877] updating cluster {Name:old-k8s-version-986088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:02:54.996371 1131323 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0328 01:02:54.996433 1131323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:02:55.045404 1131323 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0328 01:02:55.045483 1131323 ssh_runner.go:195] Run: which lz4
	I0328 01:02:55.050226 1131323 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0328 01:02:55.055182 1131323 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 01:02:55.055221 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
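The scp line above pushes the cached preload tarball onto the VM. Its name encodes the Kubernetes version, container runtime, and architecture; the sketch below rebuilds that name from those parts, with the preload schema version ("v18") taken from the path in the log and otherwise assumed.

package main

import "fmt"

// preloadName reconstructs the preload tarball name seen in the scp line
// above; the "v18" schema segment is read off that path and is otherwise
// an assumption.
func preloadName(k8sVersion, runtime, arch string) string {
	return fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-%s.tar.lz4", k8sVersion, runtime, arch)
}

func main() {
	fmt.Println(preloadName("v1.20.0", "cri-o", "amd64"))
}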
	I0328 01:02:53.416101 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Start
	I0328 01:02:53.416332 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Ensuring networks are active...
	I0328 01:02:53.417021 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Ensuring network default is active
	I0328 01:02:53.417446 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Ensuring network mk-default-k8s-diff-port-283961 is active
	I0328 01:02:53.417857 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Getting domain xml...
	I0328 01:02:53.418555 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Creating domain...
	I0328 01:02:54.777201 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting to get IP...
	I0328 01:02:54.778055 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:54.778563 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:54.778705 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:54.778537 1132240 retry.go:31] will retry after 259.031702ms: waiting for machine to come up
	I0328 01:02:55.039365 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.039926 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.039963 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:55.039860 1132240 retry.go:31] will retry after 254.124553ms: waiting for machine to come up
	I0328 01:02:55.295658 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.296218 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.296265 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:55.296174 1132240 retry.go:31] will retry after 349.637234ms: waiting for machine to come up
	I0328 01:02:55.647590 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.648356 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.648392 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:55.648298 1132240 retry.go:31] will retry after 446.471208ms: waiting for machine to come up
	I0328 01:02:53.707811 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:55.708380 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:57.213059 1130949 pod_ready.go:92] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:57.213097 1130949 pod_ready.go:81] duration metric: took 10.513921238s for pod "coredns-76f75df574-pr5d8" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.213113 1130949 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.222308 1130949 pod_ready.go:92] pod "etcd-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:57.222344 1130949 pod_ready.go:81] duration metric: took 9.214056ms for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.222357 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.231530 1130949 pod_ready.go:92] pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:57.231558 1130949 pod_ready.go:81] duration metric: took 9.192864ms for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.231568 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:56.994163 1131323 crio.go:462] duration metric: took 1.943992561s to copy over tarball
	I0328 01:02:56.994252 1131323 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 01:03:00.215115 1131323 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.220825311s)
	I0328 01:03:00.215159 1131323 crio.go:469] duration metric: took 3.22095583s to extract the tarball
	I0328 01:03:00.215171 1131323 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0328 01:03:00.259151 1131323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:00.298446 1131323 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0328 01:03:00.298492 1131323 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0328 01:03:00.298585 1131323 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:00.298601 1131323 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.298613 1131323 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.298644 1131323 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.298662 1131323 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.298698 1131323 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0328 01:03:00.298593 1131323 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.298585 1131323 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.300347 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.300424 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.300470 1131323 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.300474 1131323 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.300637 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.300652 1131323 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0328 01:03:00.300723 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.300793 1131323 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:02:56.095939 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.096463 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.096501 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:56.096412 1132240 retry.go:31] will retry after 490.029649ms: waiting for machine to come up
	I0328 01:02:56.588298 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.588835 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.588868 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:56.588796 1132240 retry.go:31] will retry after 831.356628ms: waiting for machine to come up
	I0328 01:02:57.421917 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:57.422410 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:57.422443 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:57.422353 1132240 retry.go:31] will retry after 1.164764985s: waiting for machine to come up
	I0328 01:02:58.588827 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:58.589183 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:58.589225 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:58.589119 1132240 retry.go:31] will retry after 1.307248783s: waiting for machine to come up
	I0328 01:02:59.897607 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:59.897976 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:59.898008 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:59.897926 1132240 retry.go:31] will retry after 1.560958271s: waiting for machine to come up
	I0328 01:02:58.241179 1130949 pod_ready.go:92] pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:58.241216 1130949 pod_ready.go:81] duration metric: took 1.00963904s for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.241245 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qwzpg" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.249787 1130949 pod_ready.go:92] pod "kube-proxy-qwzpg" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:58.249826 1130949 pod_ready.go:81] duration metric: took 8.571225ms for pod "kube-proxy-qwzpg" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.249840 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.405101 1130949 pod_ready.go:92] pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:58.405130 1130949 pod_ready.go:81] duration metric: took 155.281142ms for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.405141 1130949 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:00.412202 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:02.412688 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:00.499788 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0328 01:03:00.539135 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.541462 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.544184 1131323 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0328 01:03:00.544227 1131323 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0328 01:03:00.544261 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.555720 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.560189 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.562639 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.574105 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.681717 1131323 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0328 01:03:00.681742 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0328 01:03:00.681765 1131323 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.681803 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.682033 1131323 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0328 01:03:00.682076 1131323 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.682115 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.732868 1131323 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0328 01:03:00.732922 1131323 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.732988 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.742680 1131323 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0328 01:03:00.742730 1131323 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0328 01:03:00.742762 1131323 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.742777 1131323 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0328 01:03:00.742805 1131323 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.742770 1131323 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.742817 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.742851 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.742865 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.770435 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.770472 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0328 01:03:00.770567 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.770588 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.770727 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.770760 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.770728 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.882338 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0328 01:03:00.896602 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0328 01:03:00.918814 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0328 01:03:00.918869 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0328 01:03:00.918919 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0328 01:03:00.918968 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0328 01:03:01.186124 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:01.334547 1131323 cache_images.go:92] duration metric: took 1.036031169s to LoadCachedImages
	W0328 01:03:01.334676 1131323 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0328 01:03:01.334694 1131323 kubeadm.go:928] updating node { 192.168.50.174 8443 v1.20.0 crio true true} ...
	I0328 01:03:01.334827 1131323 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-986088 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:03:01.334926 1131323 ssh_runner.go:195] Run: crio config
	I0328 01:03:01.391004 1131323 cni.go:84] Creating CNI manager for ""
	I0328 01:03:01.391034 1131323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:01.391054 1131323 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:03:01.391081 1131323 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.174 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-986088 NodeName:old-k8s-version-986088 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0328 01:03:01.391265 1131323 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-986088"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 01:03:01.391347 1131323 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0328 01:03:01.403684 1131323 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:03:01.403779 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:03:01.415168 1131323 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0328 01:03:01.434329 1131323 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:03:01.456280 1131323 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0328 01:03:01.476625 1131323 ssh_runner.go:195] Run: grep 192.168.50.174	control-plane.minikube.internal$ /etc/hosts
	I0328 01:03:01.480867 1131323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
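The bash one-liner above (and the earlier one for host.minikube.internal) keeps /etc/hosts idempotent: drop any existing line for the name, then append a fresh IP/name pair. An equivalent, self-contained Go sketch of that rewrite, offered only as an illustration of the same idea:

package main

import (
	"fmt"
	"strings"
)

// withHostEntry removes any line that already ends in "\t<name>" and
// appends a fresh "ip\tname" entry, matching the grep -v / echo pipeline
// in the log above.
func withHostEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.50.1\thost.minikube.internal\n"
	fmt.Print(withHostEntry(hosts, "192.168.50.174", "control-plane.minikube.internal"))
}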
	I0328 01:03:01.493833 1131323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:01.642273 1131323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:03:01.661857 1131323 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088 for IP: 192.168.50.174
	I0328 01:03:01.661887 1131323 certs.go:194] generating shared ca certs ...
	I0328 01:03:01.661909 1131323 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:01.662115 1131323 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:03:01.662174 1131323 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:03:01.662188 1131323 certs.go:256] generating profile certs ...
	I0328 01:03:01.662324 1131323 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/client.key
	I0328 01:03:01.662399 1131323 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.key.b88fbc7e
	I0328 01:03:01.662447 1131323 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.key
	I0328 01:03:01.662600 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:03:01.662656 1131323 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:03:01.662672 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:03:01.662703 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:03:01.662738 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:03:01.662774 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:03:01.662826 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:01.663831 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:03:01.697171 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:03:01.742118 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:03:01.783263 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:03:01.831682 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0328 01:03:01.878051 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:03:01.915626 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:03:01.942247 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:03:01.969054 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:03:01.998651 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:03:02.024881 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:03:02.051284 1131323 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:03:02.070414 1131323 ssh_runner.go:195] Run: openssl version
	I0328 01:03:02.076635 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:03:02.089288 1131323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:03:02.094260 1131323 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:03:02.094322 1131323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:03:02.100846 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:03:02.114474 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:03:02.126467 1131323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:02.131240 1131323 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:02.131293 1131323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:02.137496 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 01:03:02.150863 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:03:02.163536 1131323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:03:02.168767 1131323 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:03:02.168850 1131323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:03:02.175218 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
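The repeated test/ls/openssl/ln sequence above installs each CA certificate under /usr/share/ca-certificates and links it into /etc/ssl/certs by its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0 in this run). The sketch below derives that link name the same way; the certificate path in main is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// certHashLink returns the /etc/ssl/certs/<subject-hash>.0 link name that
// the log creates for a CA certificate, using the same
// "openssl x509 -hash -noout" call seen above.
func certHashLink(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0", nil
}

func main() {
	// Path is illustrative; substitute any PEM-encoded certificate.
	link, err := certHashLink("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("would link:", link)
}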
	I0328 01:03:02.188272 1131323 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:03:02.193348 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:03:02.199969 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:03:02.206424 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:03:02.213530 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:03:02.220136 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:03:02.226502 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
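Each "openssl x509 -checkend 86400" call above asks whether a certificate expires within the next 24 hours (86400 seconds). The same check written against Go's standard library instead of the openssl binary; the path used in main is one of those probed above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// the same question "openssl x509 -checkend 86400" answers in the log.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}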
	I0328 01:03:02.232708 1131323 kubeadm.go:391] StartCluster: {Name:old-k8s-version-986088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:03:02.232831 1131323 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:03:02.232890 1131323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:02.280062 1131323 cri.go:89] found id: ""
	I0328 01:03:02.280160 1131323 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:03:02.291968 1131323 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:03:02.292003 1131323 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:03:02.292011 1131323 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:03:02.292072 1131323 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:03:02.304006 1131323 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:03:02.305105 1131323 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-986088" does not appear in /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:03:02.305785 1131323 kubeconfig.go:62] /home/jenkins/minikube-integration/18485-1069254/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-986088" cluster setting kubeconfig missing "old-k8s-version-986088" context setting]
	I0328 01:03:02.306728 1131323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:02.308610 1131323 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:03:02.320212 1131323 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.174
	I0328 01:03:02.320265 1131323 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:03:02.320283 1131323 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:03:02.320356 1131323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:02.366411 1131323 cri.go:89] found id: ""
	I0328 01:03:02.366500 1131323 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:03:02.388351 1131323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:03:02.402621 1131323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:03:02.402652 1131323 kubeadm.go:156] found existing configuration files:
	
	I0328 01:03:02.402718 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:03:02.415559 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:03:02.415633 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:03:02.426666 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:03:02.439497 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:03:02.439558 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:03:02.451040 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:03:02.461780 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:03:02.461876 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:03:02.473295 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:03:02.484762 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:03:02.484841 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
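The block above is minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is grepped for the expected control-plane endpoint and deleted when the check fails (here every file is already absent, so each grep exits with status 2 and the rm is a no-op). A minimal Go sketch of that loop, assuming a hypothetical runSSH helper that executes one command on the guest:

```go
package main

import (
	"fmt"
	"os/exec"
)

// runSSH is a stand-in for minikube's ssh_runner: run one command on the
// guest VM by shelling out to ssh (illustration only).
func runSSH(host, cmd string) error {
	return exec.Command("ssh", host, cmd).Run()
}

// cleanupStaleConfigs mirrors the grep-then-rm loop in the log: any kubeconfig
// that does not reference the expected endpoint is removed so kubeadm can
// regenerate it in the next phase.
func cleanupStaleConfigs(host, endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		if err := runSSH(host, fmt.Sprintf("sudo grep %s %s", endpoint, f)); err != nil {
			// grep exits non-zero when the endpoint (or the whole file) is missing.
			_ = runSSH(host, "sudo rm -f "+f)
		}
	}
}

func main() {
	// The guest address is hypothetical; the endpoint matches the log.
	cleanupStaleConfigs("docker@192.168.39.x", "https://control-plane.minikube.internal:8443")
}
```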
	I0328 01:03:02.496304 1131323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:03:02.507634 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:02.641980 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:03.598106 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:03.840026 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:03.970336 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
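With the stale files cleared, the five Run lines above replay the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly copied kubeadm.yaml. A sketch of that sequence, reusing the hypothetical runSSH helper from the previous snippet; the binary path and config path are taken from the log:

```go
// runInitPhases replays the kubeadm phase sequence from the log, in order.
// Reuses the runSSH helper sketched above.
func runInitPhases(host string) error {
	binDir := "/var/lib/minikube/binaries/v1.20.0"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := []string{
		"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local",
	}
	for _, phase := range phases {
		cmd := fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`, binDir, phase, cfg)
		if err := runSSH(host, cmd); err != nil {
			return fmt.Errorf("kubeadm phase %q failed: %w", phase, err)
		}
	}
	return nil
}
```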
	I0328 01:03:04.067774 1131323 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:03:04.067911 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:04.568260 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:05.068794 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:01.460535 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:01.461008 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:01.461039 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:01.460962 1132240 retry.go:31] will retry after 1.839531745s: waiting for machine to come up
	I0328 01:03:03.302965 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:03.303445 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:03.303479 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:03.303387 1132240 retry.go:31] will retry after 2.461748315s: waiting for machine to come up
	I0328 01:03:04.413898 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:06.913608 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:05.568716 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:06.068362 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:06.568235 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:07.068696 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:07.567976 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:08.068032 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:08.568586 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:09.068046 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:09.568699 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:10.067967 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
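The repeated pgrep lines here (and continuing below for this profile) are the api_server wait loop: roughly every 500ms minikube asks the guest whether a kube-apiserver process matching the minikube config exists yet. A sketch of that polling loop under the same assumptions (runSSH as above; the timeout value is illustrative and the snippet additionally needs the time package):

```go
// waitForAPIServerProcess polls for a kube-apiserver process on the guest,
// matching the repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" lines.
func waitForAPIServerProcess(host string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only once a matching process is running.
		if err := runSSH(host, "sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}
```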
	I0328 01:03:05.767795 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:05.768329 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:05.768360 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:05.768279 1132240 retry.go:31] will retry after 2.321291255s: waiting for machine to come up
	I0328 01:03:08.092644 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:08.093094 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:08.093131 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:08.093046 1132240 retry.go:31] will retry after 4.151205276s: waiting for machine to come up
	I0328 01:03:09.413199 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:11.912234 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:13.671756 1130827 start.go:364] duration metric: took 54.966750689s to acquireMachinesLock for "no-preload-248059"
	I0328 01:03:13.671815 1130827 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:03:13.671823 1130827 fix.go:54] fixHost starting: 
	I0328 01:03:13.672255 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:03:13.672292 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:03:13.689811 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42017
	I0328 01:03:13.690364 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:03:13.690817 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:03:13.690843 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:03:13.691213 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:03:13.691395 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:13.691523 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:03:13.693093 1130827 fix.go:112] recreateIfNeeded on no-preload-248059: state=Stopped err=<nil>
	I0328 01:03:13.693123 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	W0328 01:03:13.693280 1130827 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:03:13.695158 1130827 out.go:177] * Restarting existing kvm2 VM for "no-preload-248059" ...
	I0328 01:03:10.568240 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:11.068028 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:11.568146 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:12.068467 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:12.568820 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:13.068031 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:13.568977 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:14.068050 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:14.567938 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:15.068711 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:12.248769 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.249440 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Found IP for machine: 192.168.39.224
	I0328 01:03:12.249467 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Reserving static IP address...
	I0328 01:03:12.249498 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has current primary IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.249832 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-283961", mac: "52:54:00:c4:df:6f", ip: "192.168.39.224"} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.249872 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | skip adding static IP to network mk-default-k8s-diff-port-283961 - found existing host DHCP lease matching {name: "default-k8s-diff-port-283961", mac: "52:54:00:c4:df:6f", ip: "192.168.39.224"}
	I0328 01:03:12.249888 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Reserved static IP address: 192.168.39.224
	I0328 01:03:12.249908 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for SSH to be available...
	I0328 01:03:12.249921 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Getting to WaitForSSH function...
	I0328 01:03:12.252053 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.252487 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.252521 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.252646 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Using SSH client type: external
	I0328 01:03:12.252677 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa (-rw-------)
	I0328 01:03:12.252709 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.224 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:03:12.252731 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | About to run SSH command:
	I0328 01:03:12.252750 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | exit 0
	I0328 01:03:12.378419 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | SSH cmd err, output: <nil>: 
	I0328 01:03:12.378866 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetConfigRaw
	I0328 01:03:12.379659 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:12.382631 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.382997 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.383023 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.383276 1131600 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/config.json ...
	I0328 01:03:12.383534 1131600 machine.go:94] provisionDockerMachine start ...
	I0328 01:03:12.383567 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:12.383805 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.386472 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.386839 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.386870 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.387035 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.387240 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.387399 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.387577 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.387729 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:12.387931 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:12.387943 1131600 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:03:12.499608 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:03:12.499644 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetMachineName
	I0328 01:03:12.499930 1131600 buildroot.go:166] provisioning hostname "default-k8s-diff-port-283961"
	I0328 01:03:12.499962 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetMachineName
	I0328 01:03:12.500154 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.502737 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.503089 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.503120 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.503295 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.503516 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.503725 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.503892 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.504093 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:12.504271 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:12.504285 1131600 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-283961 && echo "default-k8s-diff-port-283961" | sudo tee /etc/hostname
	I0328 01:03:12.625590 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-283961
	
	I0328 01:03:12.625624 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.628570 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.628883 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.628968 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.629143 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.629397 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.629627 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.629825 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.630008 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:12.630191 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:12.630210 1131600 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-283961' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-283961/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-283961' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:03:12.744240 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:03:12.744280 1131600 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:03:12.744327 1131600 buildroot.go:174] setting up certificates
	I0328 01:03:12.744342 1131600 provision.go:84] configureAuth start
	I0328 01:03:12.744361 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetMachineName
	I0328 01:03:12.744722 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:12.747139 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.747448 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.747478 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.747582 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.749705 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.749964 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.749995 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.750125 1131600 provision.go:143] copyHostCerts
	I0328 01:03:12.750203 1131600 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:03:12.750217 1131600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:03:12.750323 1131600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:03:12.750435 1131600 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:03:12.750446 1131600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:03:12.750479 1131600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:03:12.750557 1131600 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:03:12.750567 1131600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:03:12.750599 1131600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:03:12.750670 1131600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-283961 san=[127.0.0.1 192.168.39.224 default-k8s-diff-port-283961 localhost minikube]
	I0328 01:03:12.963182 1131600 provision.go:177] copyRemoteCerts
	I0328 01:03:12.963265 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:03:12.963313 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.965946 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.966177 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.966207 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.966347 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.966573 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.966773 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.966934 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.057477 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:03:13.083706 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0328 01:03:13.109167 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:03:13.136835 1131600 provision.go:87] duration metric: took 392.475069ms to configureAuth
	I0328 01:03:13.136867 1131600 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:03:13.137048 1131600 config.go:182] Loaded profile config "default-k8s-diff-port-283961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:03:13.137131 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.139508 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.139761 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.139792 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.139959 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.140170 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.140343 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.140502 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.140685 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:13.140873 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:13.140897 1131600 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:03:13.422372 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:03:13.422405 1131600 machine.go:97] duration metric: took 1.038857021s to provisionDockerMachine
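The step just completed writes /etc/sysconfig/crio.minikube with an --insecure-registry flag for the service CIDR and restarts CRI-O so the option takes effect. A minimal sketch of building that provisioning command (runSSH as above; the CIDR is the one shown in the log):

```go
// setCRIOMinikubeOptions writes /etc/sysconfig/crio.minikube and restarts
// CRI-O, mirroring the container-runtime provisioning step in the log.
func setCRIOMinikubeOptions(host, serviceCIDR string) error {
	line := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '", serviceCIDR)
	cmd := fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf '%%s\n' %q | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, line)
	return runSSH(host, cmd)
}
```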
	I0328 01:03:13.422418 1131600 start.go:293] postStartSetup for "default-k8s-diff-port-283961" (driver="kvm2")
	I0328 01:03:13.422428 1131600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:03:13.422456 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.422788 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:03:13.422819 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.425539 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.425865 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.425894 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.426023 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.426225 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.426407 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.426577 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.511874 1131600 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:03:13.516643 1131600 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:03:13.516673 1131600 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:03:13.516749 1131600 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:03:13.516846 1131600 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:03:13.516969 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:03:13.529004 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:13.557244 1131600 start.go:296] duration metric: took 134.810243ms for postStartSetup
	I0328 01:03:13.557289 1131600 fix.go:56] duration metric: took 20.165726422s for fixHost
	I0328 01:03:13.557313 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.560216 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.560585 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.560623 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.560803 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.561050 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.561188 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.561303 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.561552 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:13.561742 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:13.561757 1131600 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0328 01:03:13.671545 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587793.617322674
	
	I0328 01:03:13.671570 1131600 fix.go:216] guest clock: 1711587793.617322674
	I0328 01:03:13.671578 1131600 fix.go:229] Guest: 2024-03-28 01:03:13.617322674 +0000 UTC Remote: 2024-03-28 01:03:13.55729386 +0000 UTC m=+187.934897846 (delta=60.028814ms)
	I0328 01:03:13.671632 1131600 fix.go:200] guest clock delta is within tolerance: 60.028814ms
	I0328 01:03:13.671642 1131600 start.go:83] releasing machines lock for "default-k8s-diff-port-283961", held for 20.280118311s
	I0328 01:03:13.671673 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.671976 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:13.674978 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.675384 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.675436 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.675562 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.676167 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.676337 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.676436 1131600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:03:13.676501 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.676557 1131600 ssh_runner.go:195] Run: cat /version.json
	I0328 01:03:13.676578 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.679418 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679452 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679758 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.679785 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679813 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.679832 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679986 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.680089 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.680190 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.680255 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.680345 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.680410 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.680517 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.680608 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.759826 1131600 ssh_runner.go:195] Run: systemctl --version
	I0328 01:03:13.796647 1131600 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:03:13.947036 1131600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:03:13.954165 1131600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:03:13.954265 1131600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:03:13.973503 1131600 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:03:13.973538 1131600 start.go:494] detecting cgroup driver to use...
	I0328 01:03:13.973629 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:03:13.997675 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:03:14.015349 1131600 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:03:14.015421 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:03:14.031099 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:03:14.046446 1131600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:03:14.186993 1131600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:03:14.351164 1131600 docker.go:233] disabling docker service ...
	I0328 01:03:14.351232 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:03:14.370629 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:03:14.387837 1131600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:03:14.544060 1131600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:03:14.707699 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:03:14.725658 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:03:14.746063 1131600 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 01:03:14.746141 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.759244 1131600 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:03:14.759317 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.773015 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.786810 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.807101 1131600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:03:14.821013 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.834181 1131600 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.861163 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.874274 1131600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:03:14.885890 1131600 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:03:14.885968 1131600 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:03:14.903142 1131600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:03:14.916364 1131600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:15.073343 1131600 ssh_runner.go:195] Run: sudo systemctl restart crio
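The sed runs above are minikube's CRI-O configuration pass: pin the pause image to registry.k8s.io/pause:3.9, switch the cgroup manager to cgroupfs with conmon in the pod cgroup, open unprivileged low ports via default_sysctls, load br_netfilter (the sysctl probe at 01:03:14.885 failed because the module was not yet loaded), enable IPv4 forwarding, then daemon-reload and restart crio. A condensed sketch of that edit list (same runSSH assumption; the sed expressions are copied from the log, a few of the smaller ones are omitted):

```go
// configureCRIO applies minikube's crio.conf edits and restarts the runtime.
// Each entry corresponds to one Run line in the log above.
func configureCRIO(host string) error {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' %s`, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"sudo modprobe br_netfilter", // the bridge-nf-call-iptables sysctl only exists once this module is loaded
		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
	for _, step := range steps {
		if err := runSSH(host, step); err != nil {
			return err
		}
	}
	return nil
}
```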
	I0328 01:03:15.218406 1131600 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:03:15.218500 1131600 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:03:15.226299 1131600 start.go:562] Will wait 60s for crictl version
	I0328 01:03:15.226373 1131600 ssh_runner.go:195] Run: which crictl
	I0328 01:03:15.232051 1131600 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:03:15.278793 1131600 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:03:15.278903 1131600 ssh_runner.go:195] Run: crio --version
	I0328 01:03:15.313408 1131600 ssh_runner.go:195] Run: crio --version
	I0328 01:03:15.351613 1131600 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0328 01:03:15.353013 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:15.355924 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:15.356306 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:15.356341 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:15.356555 1131600 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0328 01:03:15.361194 1131600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:15.380926 1131600 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:03:15.381043 1131600 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 01:03:15.381099 1131600 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:15.423322 1131600 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0328 01:03:15.423409 1131600 ssh_runner.go:195] Run: which lz4
	I0328 01:03:15.428123 1131600 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0328 01:03:15.433023 1131600 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 01:03:15.433065 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0328 01:03:13.696314 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Start
	I0328 01:03:13.696506 1130827 main.go:141] libmachine: (no-preload-248059) Ensuring networks are active...
	I0328 01:03:13.697344 1130827 main.go:141] libmachine: (no-preload-248059) Ensuring network default is active
	I0328 01:03:13.697668 1130827 main.go:141] libmachine: (no-preload-248059) Ensuring network mk-no-preload-248059 is active
	I0328 01:03:13.698009 1130827 main.go:141] libmachine: (no-preload-248059) Getting domain xml...
	I0328 01:03:13.698805 1130827 main.go:141] libmachine: (no-preload-248059) Creating domain...
	I0328 01:03:14.955922 1130827 main.go:141] libmachine: (no-preload-248059) Waiting to get IP...
	I0328 01:03:14.957088 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:14.957534 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:14.957660 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:14.957533 1132389 retry.go:31] will retry after 222.894093ms: waiting for machine to come up
	I0328 01:03:15.182078 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:15.182541 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:15.182580 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:15.182528 1132389 retry.go:31] will retry after 263.74163ms: waiting for machine to come up
	I0328 01:03:15.448081 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:15.448653 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:15.448684 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:15.448586 1132389 retry.go:31] will retry after 444.066222ms: waiting for machine to come up
	I0328 01:03:15.894141 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:15.894695 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:15.894732 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:15.894650 1132389 retry.go:31] will retry after 469.421771ms: waiting for machine to come up
	I0328 01:03:14.413443 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:16.418789 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:15.568507 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:16.068210 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:16.568761 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:17.067929 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:17.568403 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:18.068454 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:18.568086 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:19.068049 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:19.569020 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:20.068068 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:17.139682 1131600 crio.go:462] duration metric: took 1.71160157s to copy over tarball
	I0328 01:03:17.139764 1131600 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 01:03:19.581198 1131600 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.441406061s)
	I0328 01:03:19.581229 1131600 crio.go:469] duration metric: took 2.441510253s to extract the tarball
	I0328 01:03:19.581241 1131600 ssh_runner.go:146] rm: /preloaded.tar.lz4
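Because `sudo crictl images` at 01:03:15.423 showed none of the expected v1.29.3 images, minikube fell back to copying the ~403MB preload tarball onto the guest and unpacking it into /var with tar + lz4, the ~2.4s extraction that just completed. A sketch of that check-then-extract path; scpToGuest is a hypothetical helper alongside runSSH, and the image check is only indicated by a comment:

```go
// scpToGuest is a stand-in for ssh_runner's scp: copy a local file to the guest.
func scpToGuest(host, local, remote string) error {
	return exec.Command("scp", local, host+":"+remote).Run()
}

// ensurePreloadedImages copies and extracts the preload tarball when the
// expected images are not already present in the CRI-O image store.
func ensurePreloadedImages(host, localTarball string) error {
	// A real implementation parses `sudo crictl images --output json` and
	// returns early if registry.k8s.io/kube-apiserver:<version> is present.
	if err := scpToGuest(host, localTarball, "/preloaded.tar.lz4"); err != nil {
		return err
	}
	if err := runSSH(host, "sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
		return err
	}
	return runSSH(host, "sudo rm -f /preloaded.tar.lz4") // matches the rm at 01:03:19.581
}
```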
	I0328 01:03:19.620964 1131600 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:19.666765 1131600 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 01:03:19.666791 1131600 cache_images.go:84] Images are preloaded, skipping loading
	I0328 01:03:19.666802 1131600 kubeadm.go:928] updating node { 192.168.39.224 8444 v1.29.3 crio true true} ...
	I0328 01:03:19.666921 1131600 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-283961 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.224
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:03:19.666987 1131600 ssh_runner.go:195] Run: crio config
	I0328 01:03:19.716082 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:03:19.716106 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:19.716115 1131600 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:03:19.716139 1131600 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.224 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-283961 NodeName:default-k8s-diff-port-283961 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.224"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.224 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 01:03:19.716323 1131600 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.224
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-283961"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.224
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.224"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 01:03:19.716399 1131600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 01:03:19.727826 1131600 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:03:19.727913 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:03:19.738525 1131600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0328 01:03:19.756732 1131600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:03:19.776665 1131600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0328 01:03:19.795756 1131600 ssh_runner.go:195] Run: grep 192.168.39.224	control-plane.minikube.internal$ /etc/hosts
	I0328 01:03:19.800097 1131600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.224	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:19.813019 1131600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:19.946740 1131600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:03:19.964216 1131600 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961 for IP: 192.168.39.224
	I0328 01:03:19.964244 1131600 certs.go:194] generating shared ca certs ...
	I0328 01:03:19.964262 1131600 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:19.964448 1131600 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:03:19.964524 1131600 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:03:19.964538 1131600 certs.go:256] generating profile certs ...
	I0328 01:03:19.964648 1131600 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/client.key
	I0328 01:03:19.964735 1131600 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/apiserver.key.22bfb146
	I0328 01:03:19.964810 1131600 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/proxy-client.key
	I0328 01:03:19.964956 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:03:19.965008 1131600 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:03:19.965021 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:03:19.965058 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:03:19.965091 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:03:19.965113 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:03:19.965154 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:19.966026 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:03:19.998578 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:03:20.042666 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:03:20.075405 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:03:20.117888 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0328 01:03:20.145160 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:03:20.178207 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:03:20.208610 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:03:20.235356 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:03:20.262434 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:03:20.291315 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:03:20.318034 1131600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:03:20.337627 1131600 ssh_runner.go:195] Run: openssl version
	I0328 01:03:20.344242 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:03:20.360732 1131600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:20.365858 1131600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:20.365926 1131600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:20.372120 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 01:03:20.384554 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:03:20.401731 1131600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:03:20.406945 1131600 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:03:20.407024 1131600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:03:20.414661 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:03:20.427573 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:03:20.439807 1131600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:03:20.445064 1131600 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:03:20.445138 1131600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:03:20.451754 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:03:20.464988 1131600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:03:20.470461 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:03:20.477200 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:03:20.484238 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:03:20.491125 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:03:20.497888 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:03:20.504680 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0328 01:03:20.511372 1131600 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:03:20.511477 1131600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:03:20.511542 1131600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:20.552247 1131600 cri.go:89] found id: ""
	I0328 01:03:20.552345 1131600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:03:20.564906 1131600 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:03:20.564937 1131600 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:03:20.564944 1131600 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:03:20.565002 1131600 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:03:20.576394 1131600 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:03:20.593699 1131600 kubeconfig.go:125] found "default-k8s-diff-port-283961" server: "https://192.168.39.224:8444"
	I0328 01:03:20.595978 1131600 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:03:20.609519 1131600 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.224
	I0328 01:03:20.609565 1131600 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:03:20.609583 1131600 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:03:20.609651 1131600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:20.651892 1131600 cri.go:89] found id: ""
	I0328 01:03:20.651967 1131600 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:03:20.671895 1131600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:03:16.365505 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:16.366404 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:16.366435 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:16.366360 1132389 retry.go:31] will retry after 488.383898ms: waiting for machine to come up
	I0328 01:03:16.856125 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:16.856727 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:16.856761 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:16.856626 1132389 retry.go:31] will retry after 617.77144ms: waiting for machine to come up
	I0328 01:03:17.476749 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:17.477351 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:17.477386 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:17.477282 1132389 retry.go:31] will retry after 835.951988ms: waiting for machine to come up
	I0328 01:03:18.315387 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:18.315894 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:18.315925 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:18.315848 1132389 retry.go:31] will retry after 1.405695765s: waiting for machine to come up
	I0328 01:03:19.723053 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:19.723559 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:19.723591 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:19.723473 1132389 retry.go:31] will retry after 1.555358462s: waiting for machine to come up
	I0328 01:03:18.913403 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:21.599662 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:20.568464 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:21.068983 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:21.568470 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:22.068772 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:22.568940 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:23.068907 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:23.568272 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.068055 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.568056 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:25.068006 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:20.685320 1131600 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:03:21.187521 1131600 kubeadm.go:156] found existing configuration files:
	
	I0328 01:03:21.187587 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0328 01:03:21.200463 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:03:21.200533 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:03:21.212763 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0328 01:03:21.224344 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:03:21.224419 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:03:21.235869 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0328 01:03:21.245970 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:03:21.246045 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:03:21.258589 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0328 01:03:21.270651 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:03:21.270724 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:03:21.283074 1131600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:03:21.295811 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:21.668224 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.046357 1131600 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.378083996s)
	I0328 01:03:23.046401 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.271959 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.353976 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.501611 1131600 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:03:23.501734 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.002619 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.502614 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.547383 1131600 api_server.go:72] duration metric: took 1.045771287s to wait for apiserver process to appear ...
	I0328 01:03:24.547419 1131600 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:03:24.547447 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:24.548081 1131600 api_server.go:269] stopped: https://192.168.39.224:8444/healthz: Get "https://192.168.39.224:8444/healthz": dial tcp 192.168.39.224:8444: connect: connection refused
	I0328 01:03:25.047885 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:21.279945 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:21.590947 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:21.590967 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:21.280358 1132389 retry.go:31] will retry after 1.905587467s: waiting for machine to come up
	I0328 01:03:23.187571 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:23.188214 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:23.188248 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:23.188159 1132389 retry.go:31] will retry after 2.68043246s: waiting for machine to come up
	I0328 01:03:25.871414 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:25.871997 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:25.872030 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:25.871956 1132389 retry.go:31] will retry after 2.689404788s: waiting for machine to come up
	I0328 01:03:23.913816 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:26.413616 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:27.352533 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:03:27.352570 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:03:27.352589 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:27.453408 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:27.453448 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:27.547781 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:27.552703 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:27.552738 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:28.048135 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:28.053291 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:28.053322 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:28.548374 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:28.553141 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:28.553178 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:29.047609 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:29.053027 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 200:
	ok
	I0328 01:03:29.060710 1131600 api_server.go:141] control plane version: v1.29.3
	I0328 01:03:29.060747 1131600 api_server.go:131] duration metric: took 4.513320481s to wait for apiserver health ...
	I0328 01:03:29.060757 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:03:29.060764 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:29.062763 1131600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:03:25.568927 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:26.068371 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:26.568107 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:27.068037 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:27.567985 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:28.068036 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:28.568843 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:29.068483 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:29.568942 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:30.068849 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:29.064492 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:03:29.089164 1131600 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:03:29.115071 1131600 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:03:29.126819 1131600 system_pods.go:59] 8 kube-system pods found
	I0328 01:03:29.126871 1131600 system_pods.go:61] "coredns-76f75df574-79cdj" [48ffe344-a386-4904-a73e-56e3ce0a8bef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:03:29.126885 1131600 system_pods.go:61] "etcd-default-k8s-diff-port-283961" [1d8fc768-e39c-4c96-bd65-2ae76fc9c6ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 01:03:29.126898 1131600 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-283961" [7c5c9f85-f16f-4248-8d2d-73c1ed2b0128] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 01:03:29.126912 1131600 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-283961" [2e943e7b-5506-4797-9e77-4a33e06056fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 01:03:29.126931 1131600 system_pods.go:61] "kube-proxy-d776v" [c1c86f61-b074-4a51-89e6-17c7b1076748] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:03:29.126944 1131600 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-283961" [8a840579-4145-4b68-ab3f-b1ebd3d63e81] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 01:03:29.126956 1131600 system_pods.go:61] "metrics-server-57f55c9bc5-w4ww4" [6d60f9e6-8ac7-4fad-91dc-61520586666c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:03:29.126968 1131600 system_pods.go:61] "storage-provisioner" [2b5e2e68-7e7c-46ec-bcec-ff9b01cbb8d9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:03:29.126979 1131600 system_pods.go:74] duration metric: took 11.875076ms to wait for pod list to return data ...
	I0328 01:03:29.126992 1131600 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:03:29.130927 1131600 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:03:29.130971 1131600 node_conditions.go:123] node cpu capacity is 2
	I0328 01:03:29.130986 1131600 node_conditions.go:105] duration metric: took 3.984383ms to run NodePressure ...
	I0328 01:03:29.131011 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:29.421513 1131600 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 01:03:29.426043 1131600 kubeadm.go:733] kubelet initialised
	I0328 01:03:29.426104 1131600 kubeadm.go:734] duration metric: took 4.524275ms waiting for restarted kubelet to initialise ...
	I0328 01:03:29.426114 1131600 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:03:29.432378 1131600 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-79cdj" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:28.563249 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:28.563778 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:28.563808 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:28.563718 1132389 retry.go:31] will retry after 2.919225956s: waiting for machine to come up
	I0328 01:03:28.913653 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:30.914379 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:31.484584 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.485027 1130827 main.go:141] libmachine: (no-preload-248059) Found IP for machine: 192.168.61.107
	I0328 01:03:31.485048 1130827 main.go:141] libmachine: (no-preload-248059) Reserving static IP address...
	I0328 01:03:31.485065 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has current primary IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.485584 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "no-preload-248059", mac: "52:54:00:58:33:e2", ip: "192.168.61.107"} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.485617 1130827 main.go:141] libmachine: (no-preload-248059) Reserved static IP address: 192.168.61.107
	I0328 01:03:31.485638 1130827 main.go:141] libmachine: (no-preload-248059) DBG | skip adding static IP to network mk-no-preload-248059 - found existing host DHCP lease matching {name: "no-preload-248059", mac: "52:54:00:58:33:e2", ip: "192.168.61.107"}
	I0328 01:03:31.485651 1130827 main.go:141] libmachine: (no-preload-248059) Waiting for SSH to be available...
	I0328 01:03:31.485671 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Getting to WaitForSSH function...
	I0328 01:03:31.487909 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.488293 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.488322 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.488469 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Using SSH client type: external
	I0328 01:03:31.488506 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa (-rw-------)
	I0328 01:03:31.488531 1130827 main.go:141] libmachine: (no-preload-248059) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:03:31.488541 1130827 main.go:141] libmachine: (no-preload-248059) DBG | About to run SSH command:
	I0328 01:03:31.488555 1130827 main.go:141] libmachine: (no-preload-248059) DBG | exit 0
	I0328 01:03:31.618358 1130827 main.go:141] libmachine: (no-preload-248059) DBG | SSH cmd err, output: <nil>: 
	I0328 01:03:31.618786 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetConfigRaw
	I0328 01:03:31.619494 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:31.622183 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.622555 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.622584 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.622889 1130827 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/config.json ...
	I0328 01:03:31.623120 1130827 machine.go:94] provisionDockerMachine start ...
	I0328 01:03:31.623147 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:31.623400 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:31.626078 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.626432 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.626458 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.626663 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:31.626864 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.627031 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.627179 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:31.627380 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:31.627595 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:31.627611 1130827 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:03:31.739662 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:03:31.739699 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:03:31.740049 1130827 buildroot.go:166] provisioning hostname "no-preload-248059"
	I0328 01:03:31.740086 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:03:31.740421 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:31.743410 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.743776 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.743811 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.744001 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:31.744212 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.744394 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.744515 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:31.744669 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:31.744846 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:31.744860 1130827 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-248059 && echo "no-preload-248059" | sudo tee /etc/hostname
	I0328 01:03:31.869330 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-248059
	
	I0328 01:03:31.869368 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:31.872451 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.872817 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.872868 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.873159 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:31.873405 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.873632 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.873803 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:31.873982 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:31.874220 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:31.874268 1130827 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-248059' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-248059/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-248059' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:03:31.997509 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
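	The SSH script above is self-contained: it only touches /etc/hosts when no line already ends in the new hostname, rewriting an existing 127.0.1.1 entry via sed or appending one otherwise. Assuming the guest booted with the default "minikube" hostname reported a few lines earlier (an assumption, not shown in the file itself), the net effect would be a single entry such as:
		127.0.1.1 no-preload-248059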
	I0328 01:03:31.997543 1130827 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:03:31.997565 1130827 buildroot.go:174] setting up certificates
	I0328 01:03:31.997573 1130827 provision.go:84] configureAuth start
	I0328 01:03:31.997583 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:03:31.997870 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:32.000739 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.001127 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.001162 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.001306 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.003571 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.003958 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.003988 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.004162 1130827 provision.go:143] copyHostCerts
	I0328 01:03:32.004246 1130827 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:03:32.004261 1130827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:03:32.004329 1130827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:03:32.004442 1130827 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:03:32.004454 1130827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:03:32.004486 1130827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:03:32.004562 1130827 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:03:32.004572 1130827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:03:32.004602 1130827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:03:32.004667 1130827 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.no-preload-248059 san=[127.0.0.1 192.168.61.107 localhost minikube no-preload-248059]
	I0328 01:03:32.206585 1130827 provision.go:177] copyRemoteCerts
	I0328 01:03:32.206657 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:03:32.206691 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.210170 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.210636 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.210676 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.210979 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.211187 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.211364 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.211564 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:32.305858 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:03:32.337654 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0328 01:03:32.368942 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0328 01:03:32.401639 1130827 provision.go:87] duration metric: took 404.051415ms to configureAuth
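	configureAuth above regenerated the server certificate with the SANs listed at 01:03:32.004667 and copied ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A quick manual check of the installed certificate (assuming openssl is available in the guest image, which this log does not show) would be:
		sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 "Subject Alternative Name"
		# expected SANs: 127.0.0.1, 192.168.61.107, localhost, minikube, no-preload-248059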
	I0328 01:03:32.401669 1130827 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:03:32.401936 1130827 config.go:182] Loaded profile config "no-preload-248059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0328 01:03:32.402025 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.404890 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.405352 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.405387 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.405588 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.405858 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.406091 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.406303 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.406510 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:32.406731 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:32.406759 1130827 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:03:32.697738 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:03:32.697768 1130827 machine.go:97] duration metric: took 1.074632092s to provisionDockerMachine
	I0328 01:03:32.697781 1130827 start.go:293] postStartSetup for "no-preload-248059" (driver="kvm2")
	I0328 01:03:32.697795 1130827 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:03:32.697812 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.698263 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:03:32.698298 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.701020 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.701421 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.701450 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.701609 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.701837 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.702010 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.702188 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:32.790670 1130827 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:03:32.795098 1130827 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:03:32.795131 1130827 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:03:32.795222 1130827 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:03:32.795297 1130827 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:03:32.795402 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:03:32.806276 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:32.832753 1130827 start.go:296] duration metric: took 134.954685ms for postStartSetup
	I0328 01:03:32.832801 1130827 fix.go:56] duration metric: took 19.16097847s for fixHost
	I0328 01:03:32.832825 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.835830 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.836199 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.836237 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.836472 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.836707 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.836949 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.837104 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.837339 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:32.837551 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:32.837563 1130827 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 01:03:32.947440 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587812.922631180
	
	I0328 01:03:32.947477 1130827 fix.go:216] guest clock: 1711587812.922631180
	I0328 01:03:32.947486 1130827 fix.go:229] Guest: 2024-03-28 01:03:32.92263118 +0000 UTC Remote: 2024-03-28 01:03:32.832804811 +0000 UTC m=+356.715929719 (delta=89.826369ms)
	I0328 01:03:32.947507 1130827 fix.go:200] guest clock delta is within tolerance: 89.826369ms
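	The reported delta is simply the difference between the two timestamps on the preceding lines: 1711587812.922631180 s (guest clock) minus 1711587812.832804811 s (the host-side reference) is 0.089826369 s, i.e. the 89.826369ms shown, which falls inside minikube's drift tolerance, so the guest clock is left untouched.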
	I0328 01:03:32.947512 1130827 start.go:83] releasing machines lock for "no-preload-248059", held for 19.275724068s
	I0328 01:03:32.947531 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.947805 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:32.950439 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.950814 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.950844 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.950992 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.951517 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.951707 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.951809 1130827 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:03:32.951852 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.951938 1130827 ssh_runner.go:195] Run: cat /version.json
	I0328 01:03:32.951964 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.954721 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955058 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955135 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.955165 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955314 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.955473 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.955512 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.955538 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955622 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.955698 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.955809 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:32.955859 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.956001 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.956134 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:33.079381 1130827 ssh_runner.go:195] Run: systemctl --version
	I0328 01:03:33.086184 1130827 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:03:33.241799 1130827 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:03:33.248779 1130827 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:03:33.248893 1130827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:03:33.267944 1130827 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
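	Per the find/-exec mv above and the "disabled" line, the bridge CNI config is renamed rather than deleted so CRI-O stops loading it; condensed to the one file affected in this run, the equivalent command would be roughly:
		sudo mv /etc/cni/net.d/87-podman-bridge.conflist /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled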
	I0328 01:03:33.267977 1130827 start.go:494] detecting cgroup driver to use...
	I0328 01:03:33.268082 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:03:33.286132 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:03:33.301676 1130827 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:03:33.301762 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:03:33.317202 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:03:33.333162 1130827 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:03:33.458738 1130827 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:03:33.608509 1130827 docker.go:233] disabling docker service ...
	I0328 01:03:33.608623 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:03:33.626616 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:03:33.641798 1130827 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:03:33.808865 1130827 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:03:33.962636 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:03:33.978138 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:03:34.002323 1130827 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 01:03:34.002404 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.014483 1130827 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:03:34.014589 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.028647 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.041601 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.054993 1130827 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:03:34.066671 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.079389 1130827 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.099660 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
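	The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place; a reconstruction of the fragment it leaves behind (inferred from those commands, not dumped verbatim in this log) would look like:
		pause_image = "registry.k8s.io/pause:3.9"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]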
	I0328 01:03:34.112379 1130827 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:03:34.123050 1130827 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:03:34.123109 1130827 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:03:34.137132 1130827 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
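	The stat error above means br_netfilter was not loaded yet, so the module is modprobe'd and IPv4 forwarding is enabled directly through /proc. A manual re-check after those two commands (not part of this run, and assuming the module loaded cleanly) would be:
		sudo sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
		# net.ipv4.ip_forward should now read 1; the bridge key only exists once br_netfilter is loaded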
	I0328 01:03:34.147092 1130827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:34.282367 1130827 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 01:03:34.436510 1130827 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:03:34.436599 1130827 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:03:34.443019 1130827 start.go:562] Will wait 60s for crictl version
	I0328 01:03:34.443092 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:34.447740 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:03:34.488366 1130827 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:03:34.488469 1130827 ssh_runner.go:195] Run: crio --version
	I0328 01:03:34.520940 1130827 ssh_runner.go:195] Run: crio --version
	I0328 01:03:34.557953 1130827 out.go:177] * Preparing Kubernetes v1.30.0-beta.0 on CRI-O 1.29.1 ...
	I0328 01:03:30.568918 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:31.068097 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:31.568306 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:32.068345 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:32.568773 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:33.068072 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:33.568377 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:34.068141 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:34.568574 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:35.067986 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:31.439199 1131600 pod_ready.go:102] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:33.439575 1131600 pod_ready.go:102] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:34.559624 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:34.563089 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:34.563549 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:34.563583 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:34.563943 1130827 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0328 01:03:34.570153 1130827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:34.584566 1130827 kubeadm.go:877] updating cluster {Name:no-preload-248059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-248059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:03:34.584723 1130827 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0328 01:03:34.584786 1130827 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:34.620182 1130827 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-beta.0". assuming images are not preloaded.
	I0328 01:03:34.620215 1130827 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-beta.0 registry.k8s.io/kube-controller-manager:v1.30.0-beta.0 registry.k8s.io/kube-scheduler:v1.30.0-beta.0 registry.k8s.io/kube-proxy:v1.30.0-beta.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0328 01:03:34.620297 1130827 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:34.620312 1130827 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:34.620333 1130827 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:34.620301 1130827 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.620374 1130827 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:34.620401 1130827 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0328 01:03:34.620481 1130827 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:34.620319 1130827 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:34.621997 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.622009 1130827 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:34.622009 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:34.622052 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:34.621997 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:34.622115 1130827 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:34.621996 1130827 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:34.622438 1130827 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0328 01:03:34.832761 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.849045 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0328 01:03:34.868049 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:34.883941 1130827 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-beta.0" does not exist at hash "3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8" in container runtime
	I0328 01:03:34.883988 1130827 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.884047 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:34.884972 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:34.887551 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:34.899677 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:34.904772 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:35.045850 1130827 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0328 01:03:35.045906 1130827 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:35.045944 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.045959 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:35.064862 1130827 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" does not exist at hash "c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa" in container runtime
	I0328 01:03:35.064908 1130827 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:35.064959 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.066700 1130827 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" does not exist at hash "f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841" in container runtime
	I0328 01:03:35.066753 1130827 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:35.066820 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.097425 1130827 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" does not exist at hash "746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac" in container runtime
	I0328 01:03:35.097479 1130827 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:35.097546 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.097619 1130827 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0328 01:03:35.097667 1130827 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:35.097715 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.126977 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:35.126980 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.127020 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:35.127084 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:35.127090 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.127082 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:35.127161 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:35.264395 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:35.264499 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0 (exists)
	I0328 01:03:35.264534 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:35.264543 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.264506 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0328 01:03:35.264590 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.264631 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:35.264652 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0328 01:03:35.264516 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:35.264584 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0328 01:03:35.264717 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:35.264728 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:35.264768 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0328 01:03:35.269734 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0328 01:03:35.277344 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0 (exists)
	I0328 01:03:35.277580 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0328 01:03:35.279792 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0 (exists)
	I0328 01:03:35.280423 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0 (exists)
	I0328 01:03:35.535980 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:33.413451 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:35.414017 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:37.913609 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:35.568345 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:36.068227 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:36.568528 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:37.068834 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:37.568407 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:38.068142 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:38.568732 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:39.068094 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:39.568799 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:40.068973 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:35.940767 1131600 pod_ready.go:102] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:37.440919 1131600 pod_ready.go:92] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:37.440949 1131600 pod_ready.go:81] duration metric: took 8.008542386s for pod "coredns-76f75df574-79cdj" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:37.440963 1131600 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:39.452822 1131600 pod_ready.go:102] pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:40.467937 1131600 pod_ready.go:92] pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.467973 1131600 pod_ready.go:81] duration metric: took 3.027001179s for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.467987 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.491342 1131600 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.491373 1131600 pod_ready.go:81] duration metric: took 23.375914ms for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.491387 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.511379 1131600 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.511414 1131600 pod_ready.go:81] duration metric: took 20.018124ms for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.511430 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d776v" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.526689 1131600 pod_ready.go:92] pod "kube-proxy-d776v" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.526724 1131600 pod_ready.go:81] duration metric: took 15.28424ms for pod "kube-proxy-d776v" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.526738 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:37.431690 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0: (2.167073369s)
	I0328 01:03:37.431729 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 from cache
	I0328 01:03:37.431755 1130827 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0328 01:03:37.431764 1130827 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.895749302s)
	I0328 01:03:37.431805 1130827 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0328 01:03:37.431811 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0328 01:03:37.431837 1130827 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:37.431870 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:39.913936 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:42.412656 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:40.568441 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:41.068790 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:41.568919 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:42.068166 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:42.568012 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:43.068027 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:43.568916 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:44.067940 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:44.568074 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:45.068786 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:42.535179 1131600 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:44.034128 1131600 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:44.034164 1131600 pod_ready.go:81] duration metric: took 3.507415677s for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:44.034175 1131600 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:41.523268 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.091420228s)
	I0328 01:03:41.523305 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0328 01:03:41.523330 1130827 ssh_runner.go:235] Completed: which crictl: (4.091431875s)
	I0328 01:03:41.523345 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:41.523412 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:41.523445 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:41.567312 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0328 01:03:41.567455 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0328 01:03:44.336954 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0: (2.813479223s)
	I0328 01:03:44.336991 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 from cache
	I0328 01:03:44.336994 1130827 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.769509386s)
	I0328 01:03:44.337020 1130827 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0328 01:03:44.337035 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0328 01:03:44.337080 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0328 01:03:44.414767 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:46.415110 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:45.568662 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:46.068299 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:46.568793 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:47.068929 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:47.568250 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:48.068910 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:48.568138 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:49.068128 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:49.568153 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:50.068075 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:46.042489 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:48.541049 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:50.547355 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:46.297705 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.960592772s)
	I0328 01:03:46.297744 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0328 01:03:46.297776 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:46.297828 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:47.769522 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0: (1.471661236s)
	I0328 01:03:47.769569 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 from cache
	I0328 01:03:47.769602 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:47.769656 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:50.231843 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0: (2.462162757s)
	I0328 01:03:50.231876 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 from cache
	I0328 01:03:50.231902 1130827 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0328 01:03:50.231956 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0328 01:03:48.913184 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:51.412474 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:50.568929 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:51.068812 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:51.568899 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:52.068890 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:52.568751 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:53.068406 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:53.568466 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.068039 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.568745 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:55.068690 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:53.041197 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:55.041977 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:51.188382 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0328 01:03:51.188441 1130827 cache_images.go:123] Successfully loaded all cached images
	I0328 01:03:51.188448 1130827 cache_images.go:92] duration metric: took 16.568214969s to LoadCachedImages
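	The per-image flow visible in the interleaved lines above is: inspect each required image with podman, mark any whose ID does not match as "needs transfer", drop the stale tag with crictl rmi, skip the copy when the tarball is already present under /var/lib/minikube/images, then podman load each archive in turn. Condensed to one image (the %!s(MISSING)/%!y(MISSING) artifacts in the log are Go format-verb escaping; the real stat flags are presumably %s %y):
		sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
		sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
		stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
		sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0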
	I0328 01:03:51.188464 1130827 kubeadm.go:928] updating node { 192.168.61.107 8443 v1.30.0-beta.0 crio true true} ...
	I0328 01:03:51.188628 1130827 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-248059 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-248059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:03:51.188710 1130827 ssh_runner.go:195] Run: crio config
	I0328 01:03:51.237071 1130827 cni.go:84] Creating CNI manager for ""
	I0328 01:03:51.237099 1130827 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:51.237109 1130827 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:03:51.237131 1130827 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.107 APIServerPort:8443 KubernetesVersion:v1.30.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-248059 NodeName:no-preload-248059 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 01:03:51.237263 1130827 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-248059"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
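The generated kubeadm.yaml above is one file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". A rough sketch, assuming gopkg.in/yaml.v3 is available, of walking those documents and listing their kinds:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path as written by the run above; any multi-document YAML file works here.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		err := dec.Decode(&doc) // one Decode call per "---"-separated document
		if err == io.EOF {
			break
		}
		if err != nil {
			fmt.Fprintln(os.Stderr, "parse error:", err)
			os.Exit(1)
		}
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}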
	
	I0328 01:03:51.237330 1130827 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-beta.0
	I0328 01:03:51.248044 1130827 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:03:51.248113 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:03:51.257854 1130827 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0328 01:03:51.276307 1130827 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0328 01:03:51.294698 1130827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0328 01:03:51.313297 1130827 ssh_runner.go:195] Run: grep 192.168.61.107	control-plane.minikube.internal$ /etc/hosts
	I0328 01:03:51.317668 1130827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:51.330478 1130827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:51.457500 1130827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:03:51.484463 1130827 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059 for IP: 192.168.61.107
	I0328 01:03:51.484493 1130827 certs.go:194] generating shared ca certs ...
	I0328 01:03:51.484518 1130827 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:51.484718 1130827 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:03:51.484768 1130827 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:03:51.484781 1130827 certs.go:256] generating profile certs ...
	I0328 01:03:51.484910 1130827 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/client.key
	I0328 01:03:51.484989 1130827 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/apiserver.key.85d037b2
	I0328 01:03:51.485040 1130827 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/proxy-client.key
	I0328 01:03:51.485196 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:03:51.485243 1130827 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:03:51.485257 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:03:51.485292 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:03:51.485327 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:03:51.485357 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:03:51.485416 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:51.486614 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:03:51.537554 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:03:51.587256 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:03:51.620264 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:03:51.652100 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0328 01:03:51.694388 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:03:51.720913 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:03:51.747141 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0328 01:03:51.776370 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:03:51.803168 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:03:51.831138 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:03:51.857272 1130827 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:03:51.876070 1130827 ssh_runner.go:195] Run: openssl version
	I0328 01:03:51.882197 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:03:51.893560 1130827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:03:51.898293 1130827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:03:51.898361 1130827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:03:51.904549 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:03:51.918175 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:03:51.930387 1130827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:03:51.935610 1130827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:03:51.935691 1130827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:03:51.942127 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:03:51.954252 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:03:51.966727 1130827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:51.971742 1130827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:51.971810 1130827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:51.978082 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 01:03:51.992233 1130827 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:03:51.997556 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:03:52.004178 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:03:52.010666 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:03:52.017076 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:03:52.023334 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:03:52.029980 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
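The series of "openssl x509 -noout -checkend 86400" runs above asks whether each control-plane certificate will still be valid 24 hours from now. A small Go equivalent for one of those files, with crypto/x509 standing in for openssl and the path taken from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Equivalent of `openssl x509 -noout -checkend 86400`: fail if the
	// certificate expires within the next 24 hours.
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 86400 seconds")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}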
	I0328 01:03:52.036395 1130827 kubeadm.go:391] StartCluster: {Name:no-preload-248059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-248059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:03:52.036483 1130827 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:03:52.036539 1130827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:52.080486 1130827 cri.go:89] found id: ""
	I0328 01:03:52.080580 1130827 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:03:52.094552 1130827 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:03:52.094583 1130827 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:03:52.094599 1130827 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:03:52.094650 1130827 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:03:52.107008 1130827 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:03:52.108200 1130827 kubeconfig.go:125] found "no-preload-248059" server: "https://192.168.61.107:8443"
	I0328 01:03:52.110536 1130827 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:03:52.122998 1130827 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.107
	I0328 01:03:52.123044 1130827 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:03:52.123090 1130827 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:03:52.123170 1130827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:52.165568 1130827 cri.go:89] found id: ""
	I0328 01:03:52.165666 1130827 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:03:52.183930 1130827 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:03:52.195188 1130827 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:03:52.195215 1130827 kubeadm.go:156] found existing configuration files:
	
	I0328 01:03:52.195271 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:03:52.205872 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:03:52.205932 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:03:52.216481 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:03:52.226719 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:03:52.226787 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:03:52.238852 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:03:52.250272 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:03:52.250341 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:03:52.262474 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:03:52.273981 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:03:52.274059 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:03:52.286028 1130827 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:03:52.297016 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:52.406981 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.521529 1130827 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.114505514s)
	I0328 01:03:53.521569 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.735728 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.808590 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.931165 1130827 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:03:53.931281 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.432358 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.931653 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.948811 1130827 api_server.go:72] duration metric: took 1.017647613s to wait for apiserver process to appear ...
	I0328 01:03:54.948843 1130827 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:03:54.948871 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:54.949490 1130827 api_server.go:269] stopped: https://192.168.61.107:8443/healthz: Get "https://192.168.61.107:8443/healthz": dial tcp 192.168.61.107:8443: connect: connection refused
	I0328 01:03:55.449050 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:53.413775 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:55.914095 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:57.515811 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:03:57.515852 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:03:57.515872 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:57.564527 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:03:57.564560 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:03:57.949780 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:57.955515 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0328 01:03:57.955565 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0328 01:03:58.449103 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:58.456345 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0328 01:03:58.456384 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0328 01:03:58.949575 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:58.954466 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0328 01:03:58.961213 1130827 api_server.go:141] control plane version: v1.30.0-beta.0
	I0328 01:03:58.961244 1130827 api_server.go:131] duration metric: took 4.012391589s to wait for apiserver health ...
	I0328 01:03:58.961256 1130827 cni.go:84] Creating CNI manager for ""
	I0328 01:03:58.961265 1130827 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:58.963147 1130827 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
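The healthz probes above start with connection refused, pass through 403 (anonymous user) and 500 (the rbac/bootstrap-roles poststarthook not yet finished), and finally get a 200 "ok". A minimal sketch of such a poll loop, assuming the same anonymous, certificate-skipping access the probe uses; the timeout and interval values are illustrative only:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz hits /healthz until it returns 200 with body "ok", treating
// connection errors, 403 and 500 responses as "not ready yet".
func pollHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is not in the system trust store, so an anonymous
		// probe like this one skips verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned 200: %s\n", body)
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := pollHealthz("https://192.168.61.107:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}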
	I0328 01:03:55.568378 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:56.068253 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:56.568989 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:57.068709 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:57.569038 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:58.068236 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:58.568386 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:59.068971 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:59.568858 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:00.067964 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:57.043266 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:59.541626 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:58.964446 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:03:58.979425 1130827 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:03:59.042826 1130827 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:03:59.060388 1130827 system_pods.go:59] 8 kube-system pods found
	I0328 01:03:59.060429 1130827 system_pods.go:61] "coredns-7db6d8ff4d-86n4s" [71402ca8-dfa7-4caf-a422-6de9f24bf9dc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:03:59.060439 1130827 system_pods.go:61] "etcd-no-preload-248059" [954b6886-b84f-4d94-bbce-7e520142eb4b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 01:03:59.060451 1130827 system_pods.go:61] "kube-apiserver-no-preload-248059" [2d3caabe-27c2-44e7-8f52-76e03f262e2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 01:03:59.060462 1130827 system_pods.go:61] "kube-controller-manager-no-preload-248059" [30b9f4aa-c9a7-4d91-8e4d-35ad32f40425] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 01:03:59.060472 1130827 system_pods.go:61] "kube-proxy-b9qpb" [7ab4cca8-0ba2-4177-84cd-c6ac045930fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:03:59.060481 1130827 system_pods.go:61] "kube-scheduler-no-preload-248059" [4d9e45e3-d990-40d4-a4be-8384c39eb9b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 01:03:59.060493 1130827 system_pods.go:61] "metrics-server-569cc877fc-cvnrj" [063a47ac-9ceb-4521-9dde-aca02ec5e0d8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:03:59.060508 1130827 system_pods.go:61] "storage-provisioner" [0a0eb2d3-a426-4b76-8009-1a0a0e0312bb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:03:59.060518 1130827 system_pods.go:74] duration metric: took 17.666067ms to wait for pod list to return data ...
	I0328 01:03:59.060533 1130827 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:03:59.065018 1130827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:03:59.065054 1130827 node_conditions.go:123] node cpu capacity is 2
	I0328 01:03:59.065071 1130827 node_conditions.go:105] duration metric: took 4.531253ms to run NodePressure ...
	I0328 01:03:59.065097 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:59.454609 1130827 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 01:03:59.459707 1130827 kubeadm.go:733] kubelet initialised
	I0328 01:03:59.459730 1130827 kubeadm.go:734] duration metric: took 5.09757ms waiting for restarted kubelet to initialise ...
	I0328 01:03:59.459739 1130827 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:03:59.465352 1130827 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.471020 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.471054 1130827 pod_ready.go:81] duration metric: took 5.676291ms for pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.471067 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.471075 1130827 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.476393 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "etcd-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.476421 1130827 pod_ready.go:81] duration metric: took 5.333391ms for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.476430 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "etcd-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.476436 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.485889 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "kube-apiserver-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.485924 1130827 pod_ready.go:81] duration metric: took 9.481204ms for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.485937 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "kube-apiserver-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.485957 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.491064 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.491095 1130827 pod_ready.go:81] duration metric: took 5.125981ms for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.491107 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.491116 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-b9qpb" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.858724 1130827 pod_ready.go:92] pod "kube-proxy-b9qpb" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:59.858753 1130827 pod_ready.go:81] duration metric: took 367.628034ms for pod "kube-proxy-b9qpb" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.858764 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
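The pod_ready waits above repeatedly read each system pod and check its Ready condition until it flips to True or the 4m0s budget runs out. A rough client-go sketch of the same idea; the kubeconfig path and pod name are taken from the log, while the 2-second polling interval is an assumption:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a named pod until its PodReady condition reports True.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s never became Ready", ns, name)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "kube-scheduler-no-preload-248059", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}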
	I0328 01:03:58.413911 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:00.913297 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:02.913414 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:00.568622 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:01.067943 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:01.567964 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:02.068537 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:02.568772 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:03.068458 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:03.568943 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:04.068085 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:04.068176 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:04.112601 1131323 cri.go:89] found id: ""
	I0328 01:04:04.112631 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.112642 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:04.112650 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:04.112726 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:04.151837 1131323 cri.go:89] found id: ""
	I0328 01:04:04.151873 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.151885 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:04.151894 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:04.151965 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:04.193411 1131323 cri.go:89] found id: ""
	I0328 01:04:04.193451 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.193463 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:04.193473 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:04.193545 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:04.239623 1131323 cri.go:89] found id: ""
	I0328 01:04:04.239652 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.239662 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:04.239673 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:04.239732 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:04.279561 1131323 cri.go:89] found id: ""
	I0328 01:04:04.279600 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.279615 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:04.279627 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:04.279708 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:04.318680 1131323 cri.go:89] found id: ""
	I0328 01:04:04.318710 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.318722 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:04.318731 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:04.318797 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:04.356486 1131323 cri.go:89] found id: ""
	I0328 01:04:04.356514 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.356523 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:04.356530 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:04.356586 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:04.394281 1131323 cri.go:89] found id: ""
	I0328 01:04:04.394319 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.394334 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:04.394348 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:04.394364 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:04.458688 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:04.458729 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:04.501399 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:04.501440 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:04.556183 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:04.556225 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:04.571392 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:04.571427 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:04.709967 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:02.041555 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:04.541464 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:01.866183 1130827 pod_ready.go:102] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:03.868706 1130827 pod_ready.go:102] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:04.915667 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:07.412548 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:07.210550 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:07.224274 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:07.224345 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:07.262604 1131323 cri.go:89] found id: ""
	I0328 01:04:07.262640 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.262665 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:07.262674 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:07.262763 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:07.296868 1131323 cri.go:89] found id: ""
	I0328 01:04:07.296907 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.296918 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:07.296926 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:07.296992 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:07.333110 1131323 cri.go:89] found id: ""
	I0328 01:04:07.333149 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.333162 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:07.333171 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:07.333240 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:07.371138 1131323 cri.go:89] found id: ""
	I0328 01:04:07.371168 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.371186 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:07.371195 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:07.371259 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:07.412197 1131323 cri.go:89] found id: ""
	I0328 01:04:07.412230 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.412242 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:07.412251 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:07.412331 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:07.457021 1131323 cri.go:89] found id: ""
	I0328 01:04:07.457052 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.457070 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:07.457080 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:07.457153 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:07.517996 1131323 cri.go:89] found id: ""
	I0328 01:04:07.518026 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.518034 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:07.518040 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:07.518111 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:07.556829 1131323 cri.go:89] found id: ""
	I0328 01:04:07.556856 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.556865 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:07.556875 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:07.556890 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:07.572234 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:07.572270 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:07.648615 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:07.648641 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:07.648658 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:07.719617 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:07.719665 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:07.764053 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:07.764097 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:10.319480 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:06.542160 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:08.550725 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:06.366150 1130827 pod_ready.go:102] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:07.365200 1130827 pod_ready.go:92] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:04:07.365233 1130827 pod_ready.go:81] duration metric: took 7.506461201s for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:04:07.365256 1130827 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace to be "Ready" ...
	I0328 01:04:09.373694 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:09.413378 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:11.913400 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:10.334347 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:10.335893 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:10.375231 1131323 cri.go:89] found id: ""
	I0328 01:04:10.375263 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.375274 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:10.375281 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:10.375353 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:10.413652 1131323 cri.go:89] found id: ""
	I0328 01:04:10.413706 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.413726 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:10.413736 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:10.413805 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:10.449546 1131323 cri.go:89] found id: ""
	I0328 01:04:10.449588 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.449597 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:10.449604 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:10.449658 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:10.487518 1131323 cri.go:89] found id: ""
	I0328 01:04:10.487556 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.487570 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:10.487579 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:10.487663 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:10.525088 1131323 cri.go:89] found id: ""
	I0328 01:04:10.525124 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.525137 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:10.525146 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:10.525213 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:10.567177 1131323 cri.go:89] found id: ""
	I0328 01:04:10.567209 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.567221 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:10.567231 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:10.567302 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:10.609440 1131323 cri.go:89] found id: ""
	I0328 01:04:10.609474 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.609485 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:10.609492 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:10.609549 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:10.652466 1131323 cri.go:89] found id: ""
	I0328 01:04:10.652502 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.652516 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:10.652529 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:10.652546 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:10.737406 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:10.737451 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:10.786955 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:10.786991 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:10.843072 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:10.843114 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:10.857209 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:10.857244 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:10.950885 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:13.451542 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:13.465833 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:13.465924 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:13.503353 1131323 cri.go:89] found id: ""
	I0328 01:04:13.503386 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.503398 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:13.503407 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:13.503474 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:13.543175 1131323 cri.go:89] found id: ""
	I0328 01:04:13.543208 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.543220 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:13.543229 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:13.543287 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:13.580796 1131323 cri.go:89] found id: ""
	I0328 01:04:13.580829 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.580840 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:13.580848 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:13.580900 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:13.619483 1131323 cri.go:89] found id: ""
	I0328 01:04:13.619516 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.619529 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:13.619539 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:13.619596 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:13.654651 1131323 cri.go:89] found id: ""
	I0328 01:04:13.654683 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.654697 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:13.654705 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:13.654774 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:13.691763 1131323 cri.go:89] found id: ""
	I0328 01:04:13.691794 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.691805 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:13.691813 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:13.691881 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:13.730580 1131323 cri.go:89] found id: ""
	I0328 01:04:13.730614 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.730627 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:13.730635 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:13.730694 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:13.767802 1131323 cri.go:89] found id: ""
	I0328 01:04:13.767834 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.767848 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:13.767860 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:13.767876 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:13.815612 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:13.815653 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:13.870945 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:13.870991 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:13.891456 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:13.891506 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:14.022124 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:14.022163 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:14.022187 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:11.041196 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:13.044490 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:15.541942 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:11.873574 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:13.875251 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:14.412081 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:16.412837 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:16.604087 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:16.618872 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:16.618971 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:16.665628 1131323 cri.go:89] found id: ""
	I0328 01:04:16.665661 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.665675 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:16.665683 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:16.665780 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:16.703727 1131323 cri.go:89] found id: ""
	I0328 01:04:16.703758 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.703768 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:16.703775 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:16.703835 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:16.741425 1131323 cri.go:89] found id: ""
	I0328 01:04:16.741455 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.741464 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:16.741470 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:16.741524 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:16.782333 1131323 cri.go:89] found id: ""
	I0328 01:04:16.782373 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.782387 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:16.782398 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:16.782469 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:16.820321 1131323 cri.go:89] found id: ""
	I0328 01:04:16.820355 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.820364 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:16.820372 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:16.820429 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:16.861091 1131323 cri.go:89] found id: ""
	I0328 01:04:16.861130 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.861144 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:16.861154 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:16.861226 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:16.901347 1131323 cri.go:89] found id: ""
	I0328 01:04:16.901394 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.901408 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:16.901418 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:16.901491 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:16.944027 1131323 cri.go:89] found id: ""
	I0328 01:04:16.944067 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.944080 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:16.944093 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:16.944110 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:16.959104 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:16.959151 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:17.035432 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:17.035464 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:17.035480 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:17.116236 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:17.116276 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:17.159321 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:17.159370 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:19.711326 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:19.726016 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:19.726094 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:19.776639 1131323 cri.go:89] found id: ""
	I0328 01:04:19.776676 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.776690 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:19.776700 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:19.776782 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:19.817849 1131323 cri.go:89] found id: ""
	I0328 01:04:19.817887 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.817897 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:19.817904 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:19.817981 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:19.855055 1131323 cri.go:89] found id: ""
	I0328 01:04:19.855089 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.855102 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:19.855110 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:19.855177 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:19.895296 1131323 cri.go:89] found id: ""
	I0328 01:04:19.895332 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.895346 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:19.895354 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:19.895414 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:19.930936 1131323 cri.go:89] found id: ""
	I0328 01:04:19.930968 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.930980 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:19.930989 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:19.931067 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:19.968573 1131323 cri.go:89] found id: ""
	I0328 01:04:19.968610 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.968623 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:19.968632 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:19.968693 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:20.006130 1131323 cri.go:89] found id: ""
	I0328 01:04:20.006180 1131323 logs.go:276] 0 containers: []
	W0328 01:04:20.006195 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:20.006203 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:20.006304 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:20.043646 1131323 cri.go:89] found id: ""
	I0328 01:04:20.043678 1131323 logs.go:276] 0 containers: []
	W0328 01:04:20.043689 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:20.043701 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:20.043717 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:20.058728 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:20.058761 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:20.136392 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:20.136417 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:20.136431 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:20.214971 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:20.215015 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:20.255002 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:20.255047 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:18.041868 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:20.542175 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:16.372600 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:18.373203 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:20.374228 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:18.913596 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:20.913978 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:22.914777 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:22.810078 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:22.824083 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:22.824169 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:22.862037 1131323 cri.go:89] found id: ""
	I0328 01:04:22.862066 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.862074 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:22.862081 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:22.862141 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:22.901625 1131323 cri.go:89] found id: ""
	I0328 01:04:22.901658 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.901670 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:22.901679 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:22.901752 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:22.938858 1131323 cri.go:89] found id: ""
	I0328 01:04:22.938891 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.938903 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:22.938912 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:22.938983 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:22.978781 1131323 cri.go:89] found id: ""
	I0328 01:04:22.978818 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.978829 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:22.978837 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:22.978910 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:23.016844 1131323 cri.go:89] found id: ""
	I0328 01:04:23.016882 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.016895 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:23.016904 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:23.016975 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:23.058456 1131323 cri.go:89] found id: ""
	I0328 01:04:23.058508 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.058522 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:23.058531 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:23.058604 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:23.099368 1131323 cri.go:89] found id: ""
	I0328 01:04:23.099399 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.099408 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:23.099420 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:23.099492 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:23.135593 1131323 cri.go:89] found id: ""
	I0328 01:04:23.135634 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.135653 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:23.135665 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:23.135679 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:23.191215 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:23.191260 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:23.206849 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:23.206884 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:23.289566 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:23.289596 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:23.289618 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:23.365429 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:23.365480 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:23.042312 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.541788 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:22.872233 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.373908 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.413591 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:27.912983 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.914883 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:25.929336 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:25.929415 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:25.969452 1131323 cri.go:89] found id: ""
	I0328 01:04:25.969485 1131323 logs.go:276] 0 containers: []
	W0328 01:04:25.969497 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:25.969506 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:25.969573 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:26.008978 1131323 cri.go:89] found id: ""
	I0328 01:04:26.009006 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.009015 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:26.009022 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:26.009075 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:26.051110 1131323 cri.go:89] found id: ""
	I0328 01:04:26.051138 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.051146 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:26.051153 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:26.051213 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:26.088231 1131323 cri.go:89] found id: ""
	I0328 01:04:26.088262 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.088271 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:26.088277 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:26.088342 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:26.125741 1131323 cri.go:89] found id: ""
	I0328 01:04:26.125782 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.125794 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:26.125800 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:26.125867 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:26.163367 1131323 cri.go:89] found id: ""
	I0328 01:04:26.163406 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.163417 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:26.163426 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:26.163503 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:26.202302 1131323 cri.go:89] found id: ""
	I0328 01:04:26.202340 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.202355 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:26.202364 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:26.202422 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:26.240880 1131323 cri.go:89] found id: ""
	I0328 01:04:26.240911 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.240921 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:26.240931 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:26.240943 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:26.283151 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:26.283180 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:26.341313 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:26.341350 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:26.356762 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:26.356791 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:26.428033 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:26.428054 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:26.428066 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:29.006332 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:29.020634 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:29.020745 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:29.060812 1131323 cri.go:89] found id: ""
	I0328 01:04:29.060843 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.060852 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:29.060859 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:29.060916 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:29.100110 1131323 cri.go:89] found id: ""
	I0328 01:04:29.100139 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.100149 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:29.100155 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:29.100212 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:29.140345 1131323 cri.go:89] found id: ""
	I0328 01:04:29.140384 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.140396 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:29.140404 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:29.140479 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:29.182415 1131323 cri.go:89] found id: ""
	I0328 01:04:29.182449 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.182459 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:29.182465 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:29.182533 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:29.225177 1131323 cri.go:89] found id: ""
	I0328 01:04:29.225214 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.225225 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:29.225233 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:29.225310 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:29.265437 1131323 cri.go:89] found id: ""
	I0328 01:04:29.265471 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.265485 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:29.265493 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:29.265556 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:29.301578 1131323 cri.go:89] found id: ""
	I0328 01:04:29.301617 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.301630 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:29.301639 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:29.301719 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:29.340816 1131323 cri.go:89] found id: ""
	I0328 01:04:29.340847 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.340856 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:29.340867 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:29.340880 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:29.384658 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:29.384687 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:29.439243 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:29.439285 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:29.456179 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:29.456211 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:29.534878 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:29.534906 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:29.534927 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:28.041463 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:30.042506 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:27.872489 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:30.371109 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:29.913856 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:32.415699 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:32.115798 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:32.130464 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:32.130560 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:32.168846 1131323 cri.go:89] found id: ""
	I0328 01:04:32.168877 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.168887 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:32.168894 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:32.168952 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:32.208590 1131323 cri.go:89] found id: ""
	I0328 01:04:32.208622 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.208632 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:32.208638 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:32.208694 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:32.247323 1131323 cri.go:89] found id: ""
	I0328 01:04:32.247362 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.247375 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:32.247384 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:32.247507 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:32.285260 1131323 cri.go:89] found id: ""
	I0328 01:04:32.285293 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.285312 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:32.285319 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:32.285395 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:32.326678 1131323 cri.go:89] found id: ""
	I0328 01:04:32.326712 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.326725 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:32.326740 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:32.326823 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:32.363375 1131323 cri.go:89] found id: ""
	I0328 01:04:32.363403 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.363412 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:32.363419 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:32.363473 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:32.401410 1131323 cri.go:89] found id: ""
	I0328 01:04:32.401449 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.401462 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:32.401470 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:32.401558 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:32.438645 1131323 cri.go:89] found id: ""
	I0328 01:04:32.438680 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.438691 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:32.438703 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:32.438718 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:32.488743 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:32.488786 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:32.503908 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:32.503944 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:32.577307 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:32.577333 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:32.577350 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:32.657787 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:32.657832 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:35.201151 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:35.215313 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:35.215383 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:35.253467 1131323 cri.go:89] found id: ""
	I0328 01:04:35.253504 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.253515 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:35.253522 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:35.253593 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:35.290218 1131323 cri.go:89] found id: ""
	I0328 01:04:35.290280 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.290292 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:35.290300 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:35.290378 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:35.330714 1131323 cri.go:89] found id: ""
	I0328 01:04:35.330749 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.330757 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:35.330764 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:35.330831 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:32.542071 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:34.544163 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:32.372100 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:34.872293 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:34.913212 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:37.411734 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:35.371524 1131323 cri.go:89] found id: ""
	I0328 01:04:35.371553 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.371570 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:35.371577 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:35.371630 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:35.411610 1131323 cri.go:89] found id: ""
	I0328 01:04:35.411638 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.411646 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:35.411652 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:35.411711 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:35.456709 1131323 cri.go:89] found id: ""
	I0328 01:04:35.456745 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.456758 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:35.456766 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:35.456836 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:35.492688 1131323 cri.go:89] found id: ""
	I0328 01:04:35.492719 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.492729 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:35.492755 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:35.492811 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:35.531205 1131323 cri.go:89] found id: ""
	I0328 01:04:35.531234 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.531243 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:35.531254 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:35.531266 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:35.611803 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:35.611845 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:35.653513 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:35.653551 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:35.708030 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:35.708075 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:35.724542 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:35.724576 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:35.798624 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:38.299312 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:38.314128 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:38.314213 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:38.357728 1131323 cri.go:89] found id: ""
	I0328 01:04:38.357761 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.357779 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:38.357786 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:38.357848 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:38.394512 1131323 cri.go:89] found id: ""
	I0328 01:04:38.394541 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.394549 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:38.394558 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:38.394618 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:38.434353 1131323 cri.go:89] found id: ""
	I0328 01:04:38.434380 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.434391 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:38.434399 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:38.434466 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:38.477662 1131323 cri.go:89] found id: ""
	I0328 01:04:38.477693 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.477703 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:38.477710 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:38.477763 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:38.515014 1131323 cri.go:89] found id: ""
	I0328 01:04:38.515044 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.515053 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:38.515060 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:38.515117 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:38.558865 1131323 cri.go:89] found id: ""
	I0328 01:04:38.558899 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.558911 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:38.558920 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:38.558982 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:38.600261 1131323 cri.go:89] found id: ""
	I0328 01:04:38.600290 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.600299 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:38.600306 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:38.600366 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:38.637131 1131323 cri.go:89] found id: ""
	I0328 01:04:38.637167 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.637179 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:38.637194 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:38.637218 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:38.716032 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:38.716058 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:38.716079 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:38.804534 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:38.804578 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:38.851781 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:38.851820 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:38.910091 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:38.910125 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:37.041273 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:39.541843 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:37.372262 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:39.372555 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:39.912953 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:42.412667 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:41.425801 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:41.441072 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:41.441168 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:41.482934 1131323 cri.go:89] found id: ""
	I0328 01:04:41.482962 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.482974 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:41.482983 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:41.483063 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:41.521762 1131323 cri.go:89] found id: ""
	I0328 01:04:41.521796 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.521810 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:41.521819 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:41.521931 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:41.560814 1131323 cri.go:89] found id: ""
	I0328 01:04:41.560847 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.560857 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:41.560864 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:41.560928 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:41.601158 1131323 cri.go:89] found id: ""
	I0328 01:04:41.601189 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.601199 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:41.601206 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:41.601271 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:41.638760 1131323 cri.go:89] found id: ""
	I0328 01:04:41.638789 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.638799 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:41.638806 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:41.638861 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:41.675235 1131323 cri.go:89] found id: ""
	I0328 01:04:41.675268 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.675278 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:41.675285 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:41.675341 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:41.712918 1131323 cri.go:89] found id: ""
	I0328 01:04:41.712957 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.712972 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:41.712983 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:41.713078 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:41.750552 1131323 cri.go:89] found id: ""
	I0328 01:04:41.750582 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.750591 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:41.750601 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:41.750617 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:41.811163 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:41.811204 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:41.826502 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:41.826547 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:41.900727 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:41.900759 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:41.900777 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:41.981731 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:41.981783 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
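[editor's sketch] The cycle above, repeated throughout this log, is minikube's control-plane probe: for each expected component it asks the CRI runtime for a matching container and, finding none, falls back to gathering node logs. A minimal manual equivalent, using only the crictl invocation already shown in the ssh_runner lines and assuming shell access to the node (e.g. via minikube ssh), would be:

    # Check each control-plane component the same way logs.go does; an empty result
    # corresponds to the "0 containers" / "No container was found" lines above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching $name"
      else
        echo "$name: $ids"
      fi
    done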
	I0328 01:04:44.525845 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:44.542301 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:44.542389 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:44.584907 1131323 cri.go:89] found id: ""
	I0328 01:04:44.584936 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.584945 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:44.584952 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:44.585007 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:44.630465 1131323 cri.go:89] found id: ""
	I0328 01:04:44.630499 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.630511 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:44.630520 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:44.630588 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:44.669095 1131323 cri.go:89] found id: ""
	I0328 01:04:44.669131 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.669143 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:44.669152 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:44.669235 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:44.708445 1131323 cri.go:89] found id: ""
	I0328 01:04:44.708484 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.708495 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:44.708502 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:44.708570 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:44.747706 1131323 cri.go:89] found id: ""
	I0328 01:04:44.747744 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.747755 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:44.747762 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:44.747822 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:44.787768 1131323 cri.go:89] found id: ""
	I0328 01:04:44.787807 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.787821 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:44.787830 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:44.787899 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:44.829018 1131323 cri.go:89] found id: ""
	I0328 01:04:44.829049 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.829059 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:44.829066 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:44.829123 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:44.874334 1131323 cri.go:89] found id: ""
	I0328 01:04:44.874374 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.874383 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:44.874393 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:44.874405 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:44.921577 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:44.921619 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:44.976660 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:44.976713 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:44.991365 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:44.991400 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:45.067595 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:45.067630 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:45.067651 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
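[editor's sketch] Each probe cycle ends with the same log-gathering pass: the kubelet and CRI-O units from journald, a filtered dmesg, kubectl describe nodes against the node's own kubeconfig, and a container-status listing. The commands below are copied from the ssh_runner lines above; run on the node they reproduce the gathering step manually (the describe-nodes call fails here because nothing is listening on localhost:8443):

    sudo journalctl -u kubelet -n 400        # kubelet logs
    sudo journalctl -u crio -n 400           # CRI-O logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig   # fails: connection to localhost:8443 refused
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a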
	I0328 01:04:42.042736 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:44.543288 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:41.372902 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:43.872925 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:45.873163 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:44.913827 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:47.412342 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:47.647634 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:47.663581 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:47.663687 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:47.702889 1131323 cri.go:89] found id: ""
	I0328 01:04:47.702940 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.702954 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:47.702966 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:47.703043 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:47.744995 1131323 cri.go:89] found id: ""
	I0328 01:04:47.745027 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.745037 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:47.745044 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:47.745103 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:47.785518 1131323 cri.go:89] found id: ""
	I0328 01:04:47.785550 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.785562 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:47.785572 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:47.785645 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:47.831739 1131323 cri.go:89] found id: ""
	I0328 01:04:47.831771 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.831786 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:47.831794 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:47.831867 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:47.871864 1131323 cri.go:89] found id: ""
	I0328 01:04:47.871906 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.871918 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:47.871929 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:47.872008 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:47.907899 1131323 cri.go:89] found id: ""
	I0328 01:04:47.907934 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.907946 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:47.907955 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:47.908022 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:47.946073 1131323 cri.go:89] found id: ""
	I0328 01:04:47.946107 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.946118 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:47.946127 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:47.946223 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:47.986122 1131323 cri.go:89] found id: ""
	I0328 01:04:47.986154 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.986168 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:47.986182 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:47.986198 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:48.057234 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:48.057271 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:48.109881 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:48.109926 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:48.125154 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:48.125189 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:48.208295 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:48.208327 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:48.208345 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:47.041447 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:49.542203 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:48.371275 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:50.372057 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:49.413451 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:51.414465 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:50.785126 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:50.800000 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:50.800078 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:50.839883 1131323 cri.go:89] found id: ""
	I0328 01:04:50.839911 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.839920 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:50.839927 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:50.839983 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:50.879627 1131323 cri.go:89] found id: ""
	I0328 01:04:50.879654 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.879661 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:50.879668 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:50.879734 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:50.918392 1131323 cri.go:89] found id: ""
	I0328 01:04:50.918434 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.918446 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:50.918454 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:50.918517 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:50.957198 1131323 cri.go:89] found id: ""
	I0328 01:04:50.957234 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.957248 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:50.957257 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:50.957328 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:50.997389 1131323 cri.go:89] found id: ""
	I0328 01:04:50.997424 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.997438 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:50.997446 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:50.997513 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:51.040259 1131323 cri.go:89] found id: ""
	I0328 01:04:51.040296 1131323 logs.go:276] 0 containers: []
	W0328 01:04:51.040309 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:51.040318 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:51.040389 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:51.081824 1131323 cri.go:89] found id: ""
	I0328 01:04:51.081858 1131323 logs.go:276] 0 containers: []
	W0328 01:04:51.081868 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:51.081875 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:51.081942 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:51.119742 1131323 cri.go:89] found id: ""
	I0328 01:04:51.119783 1131323 logs.go:276] 0 containers: []
	W0328 01:04:51.119796 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:51.119810 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:51.119836 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:51.173486 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:51.173529 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:51.188532 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:51.188568 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:51.269181 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:51.269207 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:51.269226 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:51.349882 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:51.349936 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:53.893562 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:53.910104 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:53.910186 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:53.951333 1131323 cri.go:89] found id: ""
	I0328 01:04:53.951375 1131323 logs.go:276] 0 containers: []
	W0328 01:04:53.951388 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:53.951397 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:53.951472 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:53.992438 1131323 cri.go:89] found id: ""
	I0328 01:04:53.992474 1131323 logs.go:276] 0 containers: []
	W0328 01:04:53.992486 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:53.992493 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:53.992561 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:54.032934 1131323 cri.go:89] found id: ""
	I0328 01:04:54.032969 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.032982 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:54.032992 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:54.033061 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:54.074670 1131323 cri.go:89] found id: ""
	I0328 01:04:54.074707 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.074777 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:54.074801 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:54.074875 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:54.111527 1131323 cri.go:89] found id: ""
	I0328 01:04:54.111555 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.111566 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:54.111573 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:54.111658 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:54.151401 1131323 cri.go:89] found id: ""
	I0328 01:04:54.151428 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.151437 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:54.151443 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:54.151494 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:54.197997 1131323 cri.go:89] found id: ""
	I0328 01:04:54.198036 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.198048 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:54.198058 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:54.198135 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:54.234016 1131323 cri.go:89] found id: ""
	I0328 01:04:54.234049 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.234058 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:54.234068 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:54.234081 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:54.286118 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:54.286161 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:54.300489 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:54.300541 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:54.376949 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:54.376972 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:54.376988 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:54.463857 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:54.463901 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:52.041517 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:54.042088 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:52.875923 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:55.371823 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:53.912140 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:55.912329 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
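[editor's sketch] The interleaved pod_ready.go:102 lines come from the other clusters under test in this run, each polling a metrics-server pod whose Ready condition stays False. A rough manual equivalent of that poll, using the pod names from this log, would be the jsonpath query below; the minikube code itself polls via client-go rather than kubectl, and <profile> stands for the relevant cluster's context name, which is not shown in this excerpt:

    # Ready condition of the metrics-server pods being polled above; prints "True" or "False".
    kubectl --context <profile> -n kube-system get pod metrics-server-57f55c9bc5-w4ww4 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    kubectl --context <profile> -n kube-system get pod metrics-server-569cc877fc-cvnrj \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'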
	I0328 01:04:57.026395 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:57.041270 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:57.041358 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:57.082380 1131323 cri.go:89] found id: ""
	I0328 01:04:57.082416 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.082428 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:57.082436 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:57.082503 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:57.121835 1131323 cri.go:89] found id: ""
	I0328 01:04:57.121870 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.121885 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:57.121894 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:57.121969 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:57.163688 1131323 cri.go:89] found id: ""
	I0328 01:04:57.163725 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.163737 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:57.163745 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:57.163819 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:57.212628 1131323 cri.go:89] found id: ""
	I0328 01:04:57.212666 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.212693 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:57.212703 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:57.212788 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:57.249196 1131323 cri.go:89] found id: ""
	I0328 01:04:57.249231 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.249244 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:57.249253 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:57.249318 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:57.286996 1131323 cri.go:89] found id: ""
	I0328 01:04:57.287031 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.287040 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:57.287047 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:57.287101 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:57.324523 1131323 cri.go:89] found id: ""
	I0328 01:04:57.324551 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.324560 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:57.324566 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:57.324627 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:57.363946 1131323 cri.go:89] found id: ""
	I0328 01:04:57.363984 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.363998 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:57.364012 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:57.364034 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:57.418300 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:57.418337 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:57.433214 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:57.433242 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:57.508623 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:57.508651 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:57.508665 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:57.586336 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:57.586377 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:00.129903 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:00.146829 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:00.146920 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:00.197823 1131323 cri.go:89] found id: ""
	I0328 01:05:00.197856 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.197865 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:00.197872 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:00.197930 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:00.257523 1131323 cri.go:89] found id: ""
	I0328 01:05:00.257561 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.257575 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:00.257584 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:00.257657 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:00.314511 1131323 cri.go:89] found id: ""
	I0328 01:05:00.314539 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.314549 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:00.314558 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:00.314610 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:56.042295 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:58.541684 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:00.543232 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:57.372451 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:59.372577 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:58.412203 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:00.412880 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:02.913222 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:00.351043 1131323 cri.go:89] found id: ""
	I0328 01:05:00.351076 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.351090 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:00.351098 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:00.351167 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:00.391477 1131323 cri.go:89] found id: ""
	I0328 01:05:00.391507 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.391519 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:00.391525 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:00.391595 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:00.436196 1131323 cri.go:89] found id: ""
	I0328 01:05:00.436230 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.436242 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:00.436249 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:00.436316 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:00.473389 1131323 cri.go:89] found id: ""
	I0328 01:05:00.473428 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.473441 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:00.473450 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:00.473523 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:00.508829 1131323 cri.go:89] found id: ""
	I0328 01:05:00.508866 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.508879 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:00.508901 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:00.508931 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:00.553709 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:00.553741 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:00.612679 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:00.612732 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:00.630908 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:00.630948 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:00.706984 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:00.707016 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:00.707034 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:03.287887 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:03.304679 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:03.304779 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:03.343579 1131323 cri.go:89] found id: ""
	I0328 01:05:03.343608 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.343618 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:03.343625 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:03.343677 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:03.387158 1131323 cri.go:89] found id: ""
	I0328 01:05:03.387192 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.387206 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:03.387224 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:03.387308 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:03.426622 1131323 cri.go:89] found id: ""
	I0328 01:05:03.426653 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.426663 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:03.426670 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:03.426724 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:03.468743 1131323 cri.go:89] found id: ""
	I0328 01:05:03.468780 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.468793 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:03.468801 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:03.468870 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:03.508518 1131323 cri.go:89] found id: ""
	I0328 01:05:03.508554 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.508566 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:03.508575 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:03.508653 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:03.548295 1131323 cri.go:89] found id: ""
	I0328 01:05:03.548331 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.548343 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:03.548353 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:03.548444 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:03.591561 1131323 cri.go:89] found id: ""
	I0328 01:05:03.591594 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.591607 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:03.591615 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:03.591670 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:03.635055 1131323 cri.go:89] found id: ""
	I0328 01:05:03.635086 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.635097 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:03.635109 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:03.635127 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:03.715639 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:03.715683 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:03.755888 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:03.755931 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:03.810128 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:03.810170 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:03.825197 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:03.825227 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:03.908589 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
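[editor's sketch] Every describe-nodes attempt in this block fails the same way, connection refused on localhost:8443, which is consistent with the empty kube-apiserver listings above: there is no apiserver container for the kubeconfig to reach. A quick confirmation from the node could look like the following; the curl probe is an illustrative assumption and does not appear in the log itself:

    # Probe the local apiserver port the kubeconfig points at (localhost:8443).
    # With no kube-apiserver container running, this fails with "connection refused".
    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"
    sudo crictl ps -a --quiet --name=kube-apiserver   # empty output, matching the log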
	I0328 01:05:03.043330 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:05.541544 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:01.372692 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:03.373747 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:05.871945 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:05.413583 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:07.912379 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:06.409060 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:06.424034 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:06.424119 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:06.461827 1131323 cri.go:89] found id: ""
	I0328 01:05:06.461888 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.461902 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:06.461911 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:06.461985 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:06.505006 1131323 cri.go:89] found id: ""
	I0328 01:05:06.505061 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.505078 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:06.505085 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:06.505145 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:06.542000 1131323 cri.go:89] found id: ""
	I0328 01:05:06.542033 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.542044 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:06.542052 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:06.542121 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:06.583725 1131323 cri.go:89] found id: ""
	I0328 01:05:06.583778 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.583800 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:06.583810 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:06.583887 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:06.620457 1131323 cri.go:89] found id: ""
	I0328 01:05:06.620501 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.620516 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:06.620524 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:06.620595 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:06.664380 1131323 cri.go:89] found id: ""
	I0328 01:05:06.664412 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.664425 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:06.664432 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:06.664502 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:06.701799 1131323 cri.go:89] found id: ""
	I0328 01:05:06.701850 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.701862 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:06.701870 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:06.701935 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:06.739899 1131323 cri.go:89] found id: ""
	I0328 01:05:06.739936 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.739948 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:06.739958 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:06.739973 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:06.814373 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:06.814404 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:06.814421 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:06.894331 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:06.894371 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:06.952912 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:06.952979 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:07.011851 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:07.011900 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:09.528068 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:09.545082 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:09.545167 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:09.586944 1131323 cri.go:89] found id: ""
	I0328 01:05:09.586983 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.586996 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:09.587004 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:09.587077 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:09.624153 1131323 cri.go:89] found id: ""
	I0328 01:05:09.624184 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.624192 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:09.624198 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:09.624256 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:09.661125 1131323 cri.go:89] found id: ""
	I0328 01:05:09.661160 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.661172 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:09.661182 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:09.661256 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:09.699865 1131323 cri.go:89] found id: ""
	I0328 01:05:09.699903 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.699916 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:09.699925 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:09.699992 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:09.737925 1131323 cri.go:89] found id: ""
	I0328 01:05:09.737958 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.737967 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:09.737973 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:09.738029 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:09.776906 1131323 cri.go:89] found id: ""
	I0328 01:05:09.776941 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.776950 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:09.776957 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:09.777021 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:09.815767 1131323 cri.go:89] found id: ""
	I0328 01:05:09.815794 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.815803 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:09.815809 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:09.815876 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:09.855880 1131323 cri.go:89] found id: ""
	I0328 01:05:09.855915 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.855928 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:09.855941 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:09.855958 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:09.918339 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:09.918376 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:09.932775 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:09.932810 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:10.011566 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:10.011594 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:10.011610 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:10.096057 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:10.096100 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:08.041230 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:10.041991 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:07.873367 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:10.372311 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:09.913349 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:12.412259 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:12.641999 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:12.655761 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:12.655843 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:12.697335 1131323 cri.go:89] found id: ""
	I0328 01:05:12.697369 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.697381 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:12.697390 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:12.697453 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:12.736482 1131323 cri.go:89] found id: ""
	I0328 01:05:12.736520 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.736534 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:12.736544 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:12.736617 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:12.771992 1131323 cri.go:89] found id: ""
	I0328 01:05:12.772034 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.772046 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:12.772055 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:12.772125 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:12.810738 1131323 cri.go:89] found id: ""
	I0328 01:05:12.810770 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.810779 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:12.810786 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:12.810837 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:12.848172 1131323 cri.go:89] found id: ""
	I0328 01:05:12.848209 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.848222 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:12.848230 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:12.848310 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:12.884660 1131323 cri.go:89] found id: ""
	I0328 01:05:12.884698 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.884710 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:12.884719 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:12.884794 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:12.926180 1131323 cri.go:89] found id: ""
	I0328 01:05:12.926209 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.926218 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:12.926244 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:12.926303 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:12.966938 1131323 cri.go:89] found id: ""
	I0328 01:05:12.966969 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.966983 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:12.966996 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:12.967014 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:13.018501 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:13.018541 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:13.033140 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:13.033175 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:13.108806 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:13.108832 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:13.108858 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:13.189198 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:13.189241 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:12.541088 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:15.041830 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:12.372413 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:14.372804 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:14.414059 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:16.912343 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:15.737415 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:15.752534 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:15.752614 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:15.789941 1131323 cri.go:89] found id: ""
	I0328 01:05:15.789974 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.789986 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:15.789994 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:15.790107 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:15.827688 1131323 cri.go:89] found id: ""
	I0328 01:05:15.827731 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.827745 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:15.827766 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:15.827831 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:15.867005 1131323 cri.go:89] found id: ""
	I0328 01:05:15.867041 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.867054 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:15.867064 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:15.867149 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:15.909924 1131323 cri.go:89] found id: ""
	I0328 01:05:15.910035 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.910055 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:15.910066 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:15.910139 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:15.950571 1131323 cri.go:89] found id: ""
	I0328 01:05:15.950606 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.950619 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:15.950632 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:15.950707 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:15.992557 1131323 cri.go:89] found id: ""
	I0328 01:05:15.992594 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.992605 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:15.992615 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:15.992687 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:16.032417 1131323 cri.go:89] found id: ""
	I0328 01:05:16.032458 1131323 logs.go:276] 0 containers: []
	W0328 01:05:16.032473 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:16.032482 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:16.032559 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:16.071399 1131323 cri.go:89] found id: ""
	I0328 01:05:16.071433 1131323 logs.go:276] 0 containers: []
	W0328 01:05:16.071445 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:16.071459 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:16.071481 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:16.147078 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:16.147113 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:16.147131 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:16.223828 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:16.223870 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:16.269377 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:16.269409 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:16.318545 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:16.318584 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:18.836044 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:18.851138 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:18.851231 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:18.887223 1131323 cri.go:89] found id: ""
	I0328 01:05:18.887260 1131323 logs.go:276] 0 containers: []
	W0328 01:05:18.887273 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:18.887283 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:18.887354 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:18.928652 1131323 cri.go:89] found id: ""
	I0328 01:05:18.928682 1131323 logs.go:276] 0 containers: []
	W0328 01:05:18.928692 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:18.928698 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:18.928756 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:18.968519 1131323 cri.go:89] found id: ""
	I0328 01:05:18.968555 1131323 logs.go:276] 0 containers: []
	W0328 01:05:18.968567 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:18.968575 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:18.968646 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:19.010939 1131323 cri.go:89] found id: ""
	I0328 01:05:19.010977 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.010991 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:19.010999 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:19.011070 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:19.048723 1131323 cri.go:89] found id: ""
	I0328 01:05:19.048748 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.048758 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:19.048769 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:19.048820 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:19.091761 1131323 cri.go:89] found id: ""
	I0328 01:05:19.091794 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.091803 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:19.091810 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:19.091863 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:19.134017 1131323 cri.go:89] found id: ""
	I0328 01:05:19.134049 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.134059 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:19.134065 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:19.134119 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:19.176070 1131323 cri.go:89] found id: ""
	I0328 01:05:19.176106 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.176118 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:19.176131 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:19.176155 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:19.261546 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:19.261584 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:19.261605 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:19.340271 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:19.340314 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:19.383625 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:19.383676 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:19.441635 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:19.441679 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:17.541876 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:20.040841 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:16.872723 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:19.372916 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:19.414384 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:21.912881 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:21.958362 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:21.974427 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:21.974528 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:22.013099 1131323 cri.go:89] found id: ""
	I0328 01:05:22.013139 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.013152 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:22.013160 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:22.013229 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:22.055558 1131323 cri.go:89] found id: ""
	I0328 01:05:22.055594 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.055604 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:22.055611 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:22.055668 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:22.106836 1131323 cri.go:89] found id: ""
	I0328 01:05:22.106870 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.106879 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:22.106886 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:22.106961 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:22.145135 1131323 cri.go:89] found id: ""
	I0328 01:05:22.145175 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.145189 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:22.145197 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:22.145266 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:22.183879 1131323 cri.go:89] found id: ""
	I0328 01:05:22.183909 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.183919 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:22.183926 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:22.183981 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:22.223087 1131323 cri.go:89] found id: ""
	I0328 01:05:22.223115 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.223124 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:22.223131 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:22.223209 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:22.263232 1131323 cri.go:89] found id: ""
	I0328 01:05:22.263262 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.263272 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:22.263279 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:22.263331 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:22.302919 1131323 cri.go:89] found id: ""
	I0328 01:05:22.302954 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.302967 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:22.302980 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:22.302998 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:22.358550 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:22.358596 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:22.374688 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:22.374722 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:22.453584 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:22.453609 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:22.453624 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:22.540983 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:22.541048 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:25.091773 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:25.107412 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:25.107484 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:25.143917 1131323 cri.go:89] found id: ""
	I0328 01:05:25.143944 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.143953 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:25.143960 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:25.144013 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:25.183615 1131323 cri.go:89] found id: ""
	I0328 01:05:25.183650 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.183659 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:25.183666 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:25.183729 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:25.221125 1131323 cri.go:89] found id: ""
	I0328 01:05:25.221158 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.221167 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:25.221174 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:25.221229 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:25.262023 1131323 cri.go:89] found id: ""
	I0328 01:05:25.262056 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.262065 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:25.262072 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:25.262134 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:25.297919 1131323 cri.go:89] found id: ""
	I0328 01:05:25.297948 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.297957 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:25.297964 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:25.298035 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:22.041977 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:24.542416 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:21.872312 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:23.872885 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:23.914459 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:25.916730 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:25.336582 1131323 cri.go:89] found id: ""
	I0328 01:05:25.336610 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.336620 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:25.336627 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:25.336690 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:25.375554 1131323 cri.go:89] found id: ""
	I0328 01:05:25.375589 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.375600 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:25.375609 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:25.375683 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:25.415941 1131323 cri.go:89] found id: ""
	I0328 01:05:25.415973 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.415984 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:25.415996 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:25.416013 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:25.430168 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:25.430196 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:25.507782 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:25.507805 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:25.507862 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:25.588780 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:25.588824 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:25.634958 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:25.634997 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:28.190651 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:28.205714 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:28.205794 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:28.242015 1131323 cri.go:89] found id: ""
	I0328 01:05:28.242056 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.242067 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:28.242077 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:28.242169 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:28.289132 1131323 cri.go:89] found id: ""
	I0328 01:05:28.289169 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.289182 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:28.289189 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:28.289256 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:28.327001 1131323 cri.go:89] found id: ""
	I0328 01:05:28.327031 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.327040 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:28.327052 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:28.327105 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:28.365474 1131323 cri.go:89] found id: ""
	I0328 01:05:28.365507 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.365516 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:28.365523 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:28.365587 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:28.405494 1131323 cri.go:89] found id: ""
	I0328 01:05:28.405553 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.405567 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:28.405576 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:28.405652 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:28.448464 1131323 cri.go:89] found id: ""
	I0328 01:05:28.448502 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.448512 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:28.448521 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:28.448594 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:28.488143 1131323 cri.go:89] found id: ""
	I0328 01:05:28.488172 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.488182 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:28.488189 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:28.488258 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:28.545977 1131323 cri.go:89] found id: ""
	I0328 01:05:28.546012 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.546024 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:28.546036 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:28.546050 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:28.629955 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:28.630001 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:28.670504 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:28.670536 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:28.722021 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:28.722069 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:28.737274 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:28.737310 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:28.824025 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:27.041205 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:29.041342 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:26.372037 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:28.373545 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:30.872569 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:28.414921 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:30.912980 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:31.324497 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:31.339715 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:31.339811 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:31.379017 1131323 cri.go:89] found id: ""
	I0328 01:05:31.379050 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.379062 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:31.379072 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:31.379138 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:31.420024 1131323 cri.go:89] found id: ""
	I0328 01:05:31.420055 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.420065 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:31.420071 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:31.420136 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:31.458732 1131323 cri.go:89] found id: ""
	I0328 01:05:31.458764 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.458773 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:31.458779 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:31.458835 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:31.504249 1131323 cri.go:89] found id: ""
	I0328 01:05:31.504280 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.504292 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:31.504300 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:31.504366 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:31.545284 1131323 cri.go:89] found id: ""
	I0328 01:05:31.545316 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.545324 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:31.545331 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:31.545385 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:31.583402 1131323 cri.go:89] found id: ""
	I0328 01:05:31.583434 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.583444 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:31.583453 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:31.583587 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:31.624411 1131323 cri.go:89] found id: ""
	I0328 01:05:31.624449 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.624462 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:31.624471 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:31.624528 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:31.666103 1131323 cri.go:89] found id: ""
	I0328 01:05:31.666144 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.666158 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:31.666173 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:31.666192 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:31.717595 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:31.717636 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:31.731606 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:31.731637 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:31.803302 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:31.803325 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:31.803339 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:31.885552 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:31.885590 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:34.432446 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:34.448002 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:34.448085 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:34.493207 1131323 cri.go:89] found id: ""
	I0328 01:05:34.493246 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.493259 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:34.493268 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:34.493337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:34.541838 1131323 cri.go:89] found id: ""
	I0328 01:05:34.541871 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.541883 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:34.541891 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:34.541956 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:34.582319 1131323 cri.go:89] found id: ""
	I0328 01:05:34.582357 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.582371 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:34.582380 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:34.582458 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:34.618753 1131323 cri.go:89] found id: ""
	I0328 01:05:34.618788 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.618801 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:34.618810 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:34.618882 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:34.656994 1131323 cri.go:89] found id: ""
	I0328 01:05:34.657027 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.657037 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:34.657043 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:34.657114 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:34.695214 1131323 cri.go:89] found id: ""
	I0328 01:05:34.695252 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.695264 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:34.695271 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:34.695337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:34.733688 1131323 cri.go:89] found id: ""
	I0328 01:05:34.733718 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.733731 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:34.733739 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:34.733808 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:34.771697 1131323 cri.go:89] found id: ""
	I0328 01:05:34.771729 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.771744 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:34.771758 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:34.771776 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:34.828190 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:34.828236 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:34.842741 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:34.842776 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:34.918494 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:34.918525 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:34.918541 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:35.012689 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:35.012747 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:31.042633 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:33.541295 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:35.541588 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:33.371991 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:35.872753 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:33.412886 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:35.914065 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:37.574759 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:37.590014 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:37.590128 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:37.626883 1131323 cri.go:89] found id: ""
	I0328 01:05:37.626914 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.626926 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:37.626935 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:37.627005 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:37.665171 1131323 cri.go:89] found id: ""
	I0328 01:05:37.665202 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.665215 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:37.665225 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:37.665294 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:37.702923 1131323 cri.go:89] found id: ""
	I0328 01:05:37.702963 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.702976 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:37.702984 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:37.703064 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:37.741148 1131323 cri.go:89] found id: ""
	I0328 01:05:37.741182 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.741191 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:37.741199 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:37.741269 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:37.782298 1131323 cri.go:89] found id: ""
	I0328 01:05:37.782331 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.782341 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:37.782348 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:37.782407 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:37.819056 1131323 cri.go:89] found id: ""
	I0328 01:05:37.819110 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.819124 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:37.819134 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:37.819215 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:37.862372 1131323 cri.go:89] found id: ""
	I0328 01:05:37.862414 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.862427 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:37.862436 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:37.862507 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:37.899639 1131323 cri.go:89] found id: ""
	I0328 01:05:37.899675 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.899689 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:37.899703 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:37.899721 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:37.978962 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:37.978990 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:37.979007 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:38.058972 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:38.059015 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:38.102975 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:38.103016 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:38.157994 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:38.158035 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:38.041091 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.041892 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:38.371787 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.373131 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:38.412214 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.415412 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:42.912341 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.673425 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:40.690969 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:40.691041 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:40.735552 1131323 cri.go:89] found id: ""
	I0328 01:05:40.735585 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.735594 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:40.735602 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:40.735669 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:40.816611 1131323 cri.go:89] found id: ""
	I0328 01:05:40.816648 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.816661 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:40.816669 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:40.816725 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:40.864093 1131323 cri.go:89] found id: ""
	I0328 01:05:40.864125 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.864138 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:40.864147 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:40.864218 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:40.908781 1131323 cri.go:89] found id: ""
	I0328 01:05:40.908817 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.908829 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:40.908846 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:40.908914 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:40.950330 1131323 cri.go:89] found id: ""
	I0328 01:05:40.950369 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.950382 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:40.950390 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:40.950481 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:40.989983 1131323 cri.go:89] found id: ""
	I0328 01:05:40.990041 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.990054 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:40.990063 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:40.990136 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:41.042428 1131323 cri.go:89] found id: ""
	I0328 01:05:41.042470 1131323 logs.go:276] 0 containers: []
	W0328 01:05:41.042481 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:41.042489 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:41.042560 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:41.089309 1131323 cri.go:89] found id: ""
	I0328 01:05:41.089342 1131323 logs.go:276] 0 containers: []
	W0328 01:05:41.089353 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:41.089363 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:41.089377 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:41.148502 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:41.148547 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:41.163889 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:41.163918 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:41.242825 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:41.242848 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:41.242861 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:41.322658 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:41.322702 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:43.865117 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:43.880642 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:43.880729 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:43.919519 1131323 cri.go:89] found id: ""
	I0328 01:05:43.919550 1131323 logs.go:276] 0 containers: []
	W0328 01:05:43.919559 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:43.919565 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:43.919622 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:43.957906 1131323 cri.go:89] found id: ""
	I0328 01:05:43.957936 1131323 logs.go:276] 0 containers: []
	W0328 01:05:43.957945 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:43.957951 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:43.958008 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:44.001448 1131323 cri.go:89] found id: ""
	I0328 01:05:44.001486 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.001497 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:44.001505 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:44.001573 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:44.039767 1131323 cri.go:89] found id: ""
	I0328 01:05:44.039801 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.039812 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:44.039818 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:44.039871 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:44.079441 1131323 cri.go:89] found id: ""
	I0328 01:05:44.079470 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.079480 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:44.079486 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:44.079541 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:44.116534 1131323 cri.go:89] found id: ""
	I0328 01:05:44.116584 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.116596 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:44.116604 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:44.116670 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:44.163335 1131323 cri.go:89] found id: ""
	I0328 01:05:44.163369 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.163381 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:44.163389 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:44.163457 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:44.201367 1131323 cri.go:89] found id: ""
	I0328 01:05:44.201403 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.201413 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:44.201424 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:44.201442 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:44.257485 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:44.257529 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:44.272489 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:44.272534 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:44.354442 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:44.354477 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:44.354498 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:44.436219 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:44.436262 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:42.044443 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:44.541648 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:42.872072 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:44.873552 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:44.913292 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:47.412489 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:46.982131 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:46.998022 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:46.998100 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:47.037167 1131323 cri.go:89] found id: ""
	I0328 01:05:47.037205 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.037217 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:47.037226 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:47.037295 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:47.076175 1131323 cri.go:89] found id: ""
	I0328 01:05:47.076213 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.076226 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:47.076235 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:47.076306 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:47.115193 1131323 cri.go:89] found id: ""
	I0328 01:05:47.115227 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.115237 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:47.115244 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:47.115297 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:47.154942 1131323 cri.go:89] found id: ""
	I0328 01:05:47.154976 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.154989 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:47.154998 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:47.155069 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:47.196571 1131323 cri.go:89] found id: ""
	I0328 01:05:47.196609 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.196622 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:47.196631 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:47.196707 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:47.237572 1131323 cri.go:89] found id: ""
	I0328 01:05:47.237616 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.237625 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:47.237633 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:47.237691 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:47.275208 1131323 cri.go:89] found id: ""
	I0328 01:05:47.275254 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.275265 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:47.275272 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:47.275329 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:47.313515 1131323 cri.go:89] found id: ""
	I0328 01:05:47.313555 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.313568 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:47.313582 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:47.313598 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:47.368993 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:47.369033 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:47.383063 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:47.383097 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:47.460239 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:47.460278 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:47.460298 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:47.538552 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:47.538594 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:50.084960 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:50.101764 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:50.101859 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:50.141457 1131323 cri.go:89] found id: ""
	I0328 01:05:50.141488 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.141497 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:50.141504 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:50.141557 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:50.178184 1131323 cri.go:89] found id: ""
	I0328 01:05:50.178220 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.178254 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:50.178263 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:50.178358 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:50.217908 1131323 cri.go:89] found id: ""
	I0328 01:05:50.217946 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.217959 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:50.217966 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:50.218027 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:50.256029 1131323 cri.go:89] found id: ""
	I0328 01:05:50.256058 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.256067 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:50.256074 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:50.256130 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:50.295054 1131323 cri.go:89] found id: ""
	I0328 01:05:50.295087 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.295100 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:50.295106 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:50.295165 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:47.042338 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:49.542501 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:47.372867 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:49.872948 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:49.913873 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:52.412600 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:50.334695 1131323 cri.go:89] found id: ""
	I0328 01:05:50.336588 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.336605 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:50.336614 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:50.336697 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:50.375968 1131323 cri.go:89] found id: ""
	I0328 01:05:50.376003 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.376013 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:50.376021 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:50.376091 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:50.417146 1131323 cri.go:89] found id: ""
	I0328 01:05:50.417175 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.417184 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:50.417194 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:50.417207 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:50.474090 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:50.474131 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:50.489006 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:50.489040 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:50.566220 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:50.566268 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:50.566286 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:50.645593 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:50.645653 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
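
Each pass also fails at the "describe nodes" step with "The connection to the server localhost:8443 was refused", which is consistent with the empty crictl results: no kube-apiserver container ever came up, so nothing is serving the API. A minimal sketch (not minikube code; the endpoint localhost:8443 is taken from the log output above) that checks the port before attempting the kubectl call:

// apiserver_check.go - sketch: is anything listening on the API endpoint
// that the failing "kubectl describe nodes" calls are pointed at?
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Connection refused here matches the repeated failure in the logs:
		// with no apiserver container running, describe nodes cannot succeed.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open; kubectl describe nodes should connect")
}
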
	I0328 01:05:53.190872 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:53.205223 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:53.205320 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:53.242396 1131323 cri.go:89] found id: ""
	I0328 01:05:53.242433 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.242445 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:53.242455 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:53.242524 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:53.281237 1131323 cri.go:89] found id: ""
	I0328 01:05:53.281275 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.281288 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:53.281297 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:53.281357 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:53.321239 1131323 cri.go:89] found id: ""
	I0328 01:05:53.321268 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.321287 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:53.321296 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:53.321358 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:53.359240 1131323 cri.go:89] found id: ""
	I0328 01:05:53.359269 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.359278 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:53.359284 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:53.359337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:53.396973 1131323 cri.go:89] found id: ""
	I0328 01:05:53.397008 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.397021 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:53.397030 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:53.397100 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:53.438368 1131323 cri.go:89] found id: ""
	I0328 01:05:53.438400 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.438408 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:53.438415 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:53.438477 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:53.474679 1131323 cri.go:89] found id: ""
	I0328 01:05:53.474708 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.474732 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:53.474742 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:53.474799 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:53.512509 1131323 cri.go:89] found id: ""
	I0328 01:05:53.512547 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.512560 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:53.512579 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:53.512599 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:53.569536 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:53.569580 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:53.584977 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:53.585016 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:53.657865 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:53.657895 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:53.657908 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:53.733158 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:53.733203 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:52.041508 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:54.541663 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:52.373317 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:54.872090 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:54.913464 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:57.413256 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:56.278693 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:56.291870 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:56.291949 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:56.332909 1131323 cri.go:89] found id: ""
	I0328 01:05:56.332943 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.332957 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:56.332965 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:56.333038 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:56.370608 1131323 cri.go:89] found id: ""
	I0328 01:05:56.370638 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.370649 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:56.370657 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:56.370721 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:56.408031 1131323 cri.go:89] found id: ""
	I0328 01:05:56.408068 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.408081 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:56.408100 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:56.408170 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:56.445057 1131323 cri.go:89] found id: ""
	I0328 01:05:56.445092 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.445105 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:56.445113 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:56.445177 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:56.486868 1131323 cri.go:89] found id: ""
	I0328 01:05:56.486898 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.486908 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:56.486914 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:56.486969 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:56.533594 1131323 cri.go:89] found id: ""
	I0328 01:05:56.533622 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.533632 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:56.533638 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:56.533702 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:56.569200 1131323 cri.go:89] found id: ""
	I0328 01:05:56.569237 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.569250 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:56.569258 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:56.569335 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:56.604919 1131323 cri.go:89] found id: ""
	I0328 01:05:56.604955 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.604968 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:56.604982 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:56.605011 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:56.654473 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:56.654513 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:56.671309 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:56.671339 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:56.739516 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:56.739543 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:56.739559 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:56.817445 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:56.817495 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:59.361711 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:59.375672 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:59.375750 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:59.414329 1131323 cri.go:89] found id: ""
	I0328 01:05:59.414360 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.414371 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:59.414379 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:59.414443 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:59.454813 1131323 cri.go:89] found id: ""
	I0328 01:05:59.454846 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.454855 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:59.454862 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:59.454917 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:59.492890 1131323 cri.go:89] found id: ""
	I0328 01:05:59.492924 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.492936 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:59.492946 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:59.493043 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:59.529412 1131323 cri.go:89] found id: ""
	I0328 01:05:59.529443 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.529454 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:59.529464 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:59.529521 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:59.568620 1131323 cri.go:89] found id: ""
	I0328 01:05:59.568655 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.568664 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:59.568671 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:59.568731 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:59.605826 1131323 cri.go:89] found id: ""
	I0328 01:05:59.605861 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.605874 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:59.605883 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:59.605955 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:59.645799 1131323 cri.go:89] found id: ""
	I0328 01:05:59.645833 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.645847 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:59.645856 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:59.645931 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:59.683866 1131323 cri.go:89] found id: ""
	I0328 01:05:59.683903 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.683916 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:59.683929 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:59.683953 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:59.726678 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:59.726711 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:59.779910 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:59.779954 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:59.795743 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:59.795774 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:59.875137 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:59.875162 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:59.875174 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:56.542345 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:58.542599 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:00.543094 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:57.372258 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:59.872483 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:59.912150 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:01.913694 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:02.455212 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:02.468850 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:02.468945 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:02.506347 1131323 cri.go:89] found id: ""
	I0328 01:06:02.506385 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.506397 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:02.506406 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:02.506484 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:02.546056 1131323 cri.go:89] found id: ""
	I0328 01:06:02.546085 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.546096 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:02.546103 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:02.546173 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:02.585343 1131323 cri.go:89] found id: ""
	I0328 01:06:02.585385 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.585398 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:02.585407 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:02.585563 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:02.625380 1131323 cri.go:89] found id: ""
	I0328 01:06:02.625414 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.625423 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:02.625429 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:02.625486 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:02.664653 1131323 cri.go:89] found id: ""
	I0328 01:06:02.664687 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.664701 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:02.664708 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:02.664764 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:02.704468 1131323 cri.go:89] found id: ""
	I0328 01:06:02.704498 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.704511 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:02.704519 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:02.704595 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:02.740969 1131323 cri.go:89] found id: ""
	I0328 01:06:02.740997 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.741007 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:02.741014 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:02.741102 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:02.782113 1131323 cri.go:89] found id: ""
	I0328 01:06:02.782150 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.782163 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:02.782185 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:02.782203 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:02.836804 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:02.836848 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:02.852266 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:02.852299 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:02.929441 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:02.929467 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:02.929484 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:03.008114 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:03.008156 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:03.041919 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:05.542209 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:02.372332 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:04.871689 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:04.413251 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:06.912348 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:05.554291 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:05.570208 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:05.570304 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:05.610887 1131323 cri.go:89] found id: ""
	I0328 01:06:05.610916 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.610926 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:05.610932 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:05.610991 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:05.651561 1131323 cri.go:89] found id: ""
	I0328 01:06:05.651600 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.651610 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:05.651616 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:05.651681 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:05.690801 1131323 cri.go:89] found id: ""
	I0328 01:06:05.690830 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.690843 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:05.690851 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:05.690920 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:05.729098 1131323 cri.go:89] found id: ""
	I0328 01:06:05.729136 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.729146 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:05.729153 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:05.729225 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:05.774461 1131323 cri.go:89] found id: ""
	I0328 01:06:05.774499 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.774520 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:05.774530 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:05.774602 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:05.812135 1131323 cri.go:89] found id: ""
	I0328 01:06:05.812166 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.812180 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:05.812188 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:05.812255 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:05.847744 1131323 cri.go:89] found id: ""
	I0328 01:06:05.847775 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.847786 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:05.847796 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:05.847863 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:05.885600 1131323 cri.go:89] found id: ""
	I0328 01:06:05.885641 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.885656 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:05.885669 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:05.885684 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:05.963837 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:05.963879 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:06.007342 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:06.007381 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:06.062798 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:06.062843 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:06.077547 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:06.077599 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:06.148373 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:08.648791 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:08.664082 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:08.664154 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:08.701746 1131323 cri.go:89] found id: ""
	I0328 01:06:08.701776 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.701789 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:08.701797 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:08.701855 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:08.739035 1131323 cri.go:89] found id: ""
	I0328 01:06:08.739066 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.739076 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:08.739083 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:08.739136 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:08.776128 1131323 cri.go:89] found id: ""
	I0328 01:06:08.776166 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.776180 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:08.776189 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:08.776255 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:08.816136 1131323 cri.go:89] found id: ""
	I0328 01:06:08.816172 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.816187 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:08.816196 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:08.816271 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:08.855675 1131323 cri.go:89] found id: ""
	I0328 01:06:08.855709 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.855722 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:08.855730 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:08.855802 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:08.893161 1131323 cri.go:89] found id: ""
	I0328 01:06:08.893198 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.893212 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:08.893221 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:08.893297 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:08.935498 1131323 cri.go:89] found id: ""
	I0328 01:06:08.935527 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.935540 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:08.935548 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:08.935622 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:08.971622 1131323 cri.go:89] found id: ""
	I0328 01:06:08.971657 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.971668 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:08.971679 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:08.971696 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:09.039975 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:09.040036 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:09.057877 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:09.057920 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:09.130093 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:09.130119 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:09.130135 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:09.217177 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:09.217228 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:08.040921 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:10.042895 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:06.872367 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:08.873187 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:08.914313 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:11.412330 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
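
Interleaved with those probes, three other clusters (processes 1131600, 1130827, 1130949) keep polling their metrics-server pods, which never report Ready. A minimal sketch of an equivalent readiness check done with kubectl rather than minikube's internal pod_ready helper — the context name and the k8s-app=metrics-server label selector are assumptions, not taken from the log:

// pod_ready_check.go - sketch of the readiness poll logged above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func ready(context string) (bool, error) {
	// Assumption: the metrics-server pod carries the usual k8s-app label.
	out, err := exec.Command("kubectl", "--context", context, "-n", "kube-system",
		"get", "pod", "-l", "k8s-app=metrics-server",
		"-o", `jsonpath={.items[0].status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	for i := 0; i < 5; i++ {
		ok, err := ready("my-cluster") // hypothetical context name
		fmt.Printf("attempt %d: ready=%v err=%v\n", i+1, ok, err)
		if ok {
			return
		}
		time.Sleep(2 * time.Second) // the harness above polls on a similar cadence
	}
}
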
	I0328 01:06:11.762393 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:11.776356 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:11.776424 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:11.811982 1131323 cri.go:89] found id: ""
	I0328 01:06:11.812017 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.812030 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:11.812038 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:11.812103 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:11.849789 1131323 cri.go:89] found id: ""
	I0328 01:06:11.849817 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.849826 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:11.849833 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:11.849884 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:11.890455 1131323 cri.go:89] found id: ""
	I0328 01:06:11.890488 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.890497 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:11.890503 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:11.890559 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:11.929047 1131323 cri.go:89] found id: ""
	I0328 01:06:11.929093 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.929102 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:11.929108 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:11.929164 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:11.969536 1131323 cri.go:89] found id: ""
	I0328 01:06:11.969566 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.969576 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:11.969583 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:11.969641 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:12.008779 1131323 cri.go:89] found id: ""
	I0328 01:06:12.008811 1131323 logs.go:276] 0 containers: []
	W0328 01:06:12.008821 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:12.008828 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:12.008890 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:12.044061 1131323 cri.go:89] found id: ""
	I0328 01:06:12.044091 1131323 logs.go:276] 0 containers: []
	W0328 01:06:12.044104 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:12.044112 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:12.044176 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:12.082307 1131323 cri.go:89] found id: ""
	I0328 01:06:12.082336 1131323 logs.go:276] 0 containers: []
	W0328 01:06:12.082346 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:12.082357 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:12.082369 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:12.133044 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:12.133091 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:12.148584 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:12.148624 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:12.218799 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:12.218834 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:12.218852 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:12.295580 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:12.295623 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:14.842815 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:14.856385 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:14.856456 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:14.895351 1131323 cri.go:89] found id: ""
	I0328 01:06:14.895409 1131323 logs.go:276] 0 containers: []
	W0328 01:06:14.895418 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:14.895424 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:14.895476 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:14.930333 1131323 cri.go:89] found id: ""
	I0328 01:06:14.930366 1131323 logs.go:276] 0 containers: []
	W0328 01:06:14.930380 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:14.930389 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:14.930461 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:14.968701 1131323 cri.go:89] found id: ""
	I0328 01:06:14.968742 1131323 logs.go:276] 0 containers: []
	W0328 01:06:14.968754 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:14.968767 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:14.968867 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:15.004580 1131323 cri.go:89] found id: ""
	I0328 01:06:15.004613 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.004626 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:15.004634 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:15.004700 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:15.046702 1131323 cri.go:89] found id: ""
	I0328 01:06:15.046726 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.046736 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:15.046742 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:15.046795 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:15.088693 1131323 cri.go:89] found id: ""
	I0328 01:06:15.088725 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.088734 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:15.088741 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:15.088797 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:15.130293 1131323 cri.go:89] found id: ""
	I0328 01:06:15.130324 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.130333 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:15.130339 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:15.130394 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:15.172381 1131323 cri.go:89] found id: ""
	I0328 01:06:15.172408 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.172417 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:15.172427 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:15.172440 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:15.225631 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:15.225674 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:15.241251 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:15.241294 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:15.319701 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:15.319731 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:15.319747 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:12.540755 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:14.541618 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:11.371580 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:13.371640 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:15.373147 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:13.911792 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:15.912479 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:17.913926 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:15.406813 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:15.406853 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:17.993893 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:18.007755 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:18.007843 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:18.047750 1131323 cri.go:89] found id: ""
	I0328 01:06:18.047777 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.047786 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:18.047797 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:18.047855 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:18.088264 1131323 cri.go:89] found id: ""
	I0328 01:06:18.088291 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.088303 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:18.088311 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:18.088369 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:18.127485 1131323 cri.go:89] found id: ""
	I0328 01:06:18.127514 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.127523 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:18.127530 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:18.127581 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:18.167462 1131323 cri.go:89] found id: ""
	I0328 01:06:18.167496 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.167510 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:18.167516 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:18.167571 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:18.209536 1131323 cri.go:89] found id: ""
	I0328 01:06:18.209571 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.209583 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:18.209591 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:18.209662 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:18.247565 1131323 cri.go:89] found id: ""
	I0328 01:06:18.247601 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.247614 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:18.247623 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:18.247701 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:18.288123 1131323 cri.go:89] found id: ""
	I0328 01:06:18.288162 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.288172 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:18.288179 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:18.288242 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:18.328132 1131323 cri.go:89] found id: ""
	I0328 01:06:18.328161 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.328170 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:18.328181 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:18.328193 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:18.403245 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:18.403287 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:18.403305 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:18.483446 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:18.483500 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:18.527357 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:18.527392 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:18.588402 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:18.588463 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:16.542137 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:18.542554 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:20.546396 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:17.872147 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:20.373000 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:20.412369 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:22.412661 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:21.103566 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:21.117538 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:21.117616 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:21.174215 1131323 cri.go:89] found id: ""
	I0328 01:06:21.174270 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.174284 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:21.174293 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:21.174364 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:21.238666 1131323 cri.go:89] found id: ""
	I0328 01:06:21.238707 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.238722 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:21.238730 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:21.238803 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:21.303510 1131323 cri.go:89] found id: ""
	I0328 01:06:21.303543 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.303553 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:21.303559 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:21.303614 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:21.345823 1131323 cri.go:89] found id: ""
	I0328 01:06:21.345853 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.345862 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:21.345870 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:21.345940 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:21.386205 1131323 cri.go:89] found id: ""
	I0328 01:06:21.386248 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.386261 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:21.386269 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:21.386335 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:21.427424 1131323 cri.go:89] found id: ""
	I0328 01:06:21.427457 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.427470 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:21.427478 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:21.427546 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:21.465054 1131323 cri.go:89] found id: ""
	I0328 01:06:21.465087 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.465099 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:21.465107 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:21.465177 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:21.507197 1131323 cri.go:89] found id: ""
	I0328 01:06:21.507229 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.507238 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:21.507248 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:21.507263 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:21.586657 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:21.586709 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:21.633702 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:21.633739 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:21.688960 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:21.688999 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:21.704675 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:21.704714 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:21.781612 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:24.282521 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:24.297096 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:24.297185 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:24.338745 1131323 cri.go:89] found id: ""
	I0328 01:06:24.338780 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.338793 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:24.338802 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:24.338872 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:24.375499 1131323 cri.go:89] found id: ""
	I0328 01:06:24.375528 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.375540 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:24.375548 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:24.375616 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:24.410939 1131323 cri.go:89] found id: ""
	I0328 01:06:24.410966 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.410978 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:24.410986 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:24.411042 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:24.455316 1131323 cri.go:89] found id: ""
	I0328 01:06:24.455345 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.455354 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:24.455360 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:24.455427 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:24.493177 1131323 cri.go:89] found id: ""
	I0328 01:06:24.493206 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.493219 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:24.493228 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:24.493300 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:24.533612 1131323 cri.go:89] found id: ""
	I0328 01:06:24.533648 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.533659 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:24.533668 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:24.533743 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:24.573960 1131323 cri.go:89] found id: ""
	I0328 01:06:24.573998 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.574014 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:24.574020 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:24.574074 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:24.617282 1131323 cri.go:89] found id: ""
	I0328 01:06:24.617319 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.617333 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:24.617346 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:24.617364 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:24.691660 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:24.691688 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:24.691707 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:24.773138 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:24.773180 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:24.820408 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:24.820440 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:24.875901 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:24.875940 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:23.041030 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:25.041064 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:22.874513 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:25.378939 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:24.413732 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:26.912433 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:27.392663 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:27.407958 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:27.408046 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:27.446750 1131323 cri.go:89] found id: ""
	I0328 01:06:27.446782 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.446792 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:27.446799 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:27.446872 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:27.489199 1131323 cri.go:89] found id: ""
	I0328 01:06:27.489236 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.489249 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:27.489258 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:27.489316 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:27.525754 1131323 cri.go:89] found id: ""
	I0328 01:06:27.525787 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.525796 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:27.525803 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:27.525861 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:27.560817 1131323 cri.go:89] found id: ""
	I0328 01:06:27.560849 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.560858 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:27.560866 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:27.560930 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:27.597706 1131323 cri.go:89] found id: ""
	I0328 01:06:27.597736 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.597744 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:27.597750 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:27.597821 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:27.635170 1131323 cri.go:89] found id: ""
	I0328 01:06:27.635211 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.635223 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:27.635232 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:27.635299 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:27.672043 1131323 cri.go:89] found id: ""
	I0328 01:06:27.672079 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.672091 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:27.672099 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:27.672166 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:27.711401 1131323 cri.go:89] found id: ""
	I0328 01:06:27.711435 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.711448 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:27.711468 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:27.711488 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:27.755172 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:27.755211 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:27.807588 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:27.807632 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:27.823557 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:27.823589 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:27.905292 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:27.905316 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:27.905329 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:27.041105 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:29.041205 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:27.873797 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:30.374214 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:29.412378 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:31.413211 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:30.491565 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:30.505601 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:30.505667 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:30.541894 1131323 cri.go:89] found id: ""
	I0328 01:06:30.541929 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.541940 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:30.541949 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:30.542029 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:30.581484 1131323 cri.go:89] found id: ""
	I0328 01:06:30.581514 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.581532 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:30.581538 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:30.581613 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:30.624788 1131323 cri.go:89] found id: ""
	I0328 01:06:30.624830 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.624842 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:30.624850 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:30.624922 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:30.664373 1131323 cri.go:89] found id: ""
	I0328 01:06:30.664403 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.664413 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:30.664420 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:30.664489 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:30.702885 1131323 cri.go:89] found id: ""
	I0328 01:06:30.702917 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.702928 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:30.702934 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:30.703006 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:30.748170 1131323 cri.go:89] found id: ""
	I0328 01:06:30.748205 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.748217 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:30.748226 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:30.748316 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:30.785218 1131323 cri.go:89] found id: ""
	I0328 01:06:30.785255 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.785268 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:30.785276 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:30.785343 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:30.825529 1131323 cri.go:89] found id: ""
	I0328 01:06:30.825555 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.825565 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:30.825575 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:30.825589 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:30.881353 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:30.881391 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:30.896682 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:30.896718 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:30.973356 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:30.973386 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:30.973402 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:31.049014 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:31.049047 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:33.594365 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:33.609372 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:33.609460 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:33.648699 1131323 cri.go:89] found id: ""
	I0328 01:06:33.648728 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.648749 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:33.648757 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:33.648829 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:33.686707 1131323 cri.go:89] found id: ""
	I0328 01:06:33.686744 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.686758 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:33.686767 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:33.686832 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:33.723091 1131323 cri.go:89] found id: ""
	I0328 01:06:33.723121 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.723130 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:33.723136 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:33.723187 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:33.763439 1131323 cri.go:89] found id: ""
	I0328 01:06:33.763471 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.763481 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:33.763488 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:33.763544 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:33.812236 1131323 cri.go:89] found id: ""
	I0328 01:06:33.812271 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.812285 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:33.812294 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:33.812365 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:33.849421 1131323 cri.go:89] found id: ""
	I0328 01:06:33.849454 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.849465 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:33.849473 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:33.849528 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:33.888020 1131323 cri.go:89] found id: ""
	I0328 01:06:33.888051 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.888065 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:33.888078 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:33.888145 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:33.925952 1131323 cri.go:89] found id: ""
	I0328 01:06:33.925990 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.926003 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:33.926016 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:33.926034 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:33.976695 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:33.976734 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:33.991708 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:33.991752 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:34.068244 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:34.068276 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:34.068293 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:34.155843 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:34.155885 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:31.041375 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:33.041526 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:35.541169 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:32.872009 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:34.873043 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:33.913191 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:36.413213 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:36.697480 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:36.712322 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:36.712420 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:36.749541 1131323 cri.go:89] found id: ""
	I0328 01:06:36.749570 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.749579 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:36.749587 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:36.749655 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:36.788226 1131323 cri.go:89] found id: ""
	I0328 01:06:36.788255 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.788264 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:36.788270 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:36.788323 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:36.823824 1131323 cri.go:89] found id: ""
	I0328 01:06:36.823856 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.823866 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:36.823872 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:36.823927 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:36.869331 1131323 cri.go:89] found id: ""
	I0328 01:06:36.869362 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.869371 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:36.869378 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:36.869473 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:36.907918 1131323 cri.go:89] found id: ""
	I0328 01:06:36.907950 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.907960 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:36.907966 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:36.908028 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:36.947708 1131323 cri.go:89] found id: ""
	I0328 01:06:36.947738 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.947749 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:36.947757 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:36.947824 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:36.986200 1131323 cri.go:89] found id: ""
	I0328 01:06:36.986251 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.986266 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:36.986275 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:36.986350 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:37.026670 1131323 cri.go:89] found id: ""
	I0328 01:06:37.026698 1131323 logs.go:276] 0 containers: []
	W0328 01:06:37.026708 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:37.026718 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:37.026732 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:37.079891 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:37.079933 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:37.094347 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:37.094378 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:37.168653 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:37.168681 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:37.168695 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:37.247909 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:37.247949 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:39.791285 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:39.807921 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:39.808000 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:39.851460 1131323 cri.go:89] found id: ""
	I0328 01:06:39.851499 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.851512 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:39.851520 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:39.851593 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:39.889506 1131323 cri.go:89] found id: ""
	I0328 01:06:39.889541 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.889554 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:39.889564 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:39.889632 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:39.930291 1131323 cri.go:89] found id: ""
	I0328 01:06:39.930321 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.930331 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:39.930337 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:39.930400 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:39.965121 1131323 cri.go:89] found id: ""
	I0328 01:06:39.965160 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.965174 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:39.965183 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:39.965252 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:40.003217 1131323 cri.go:89] found id: ""
	I0328 01:06:40.003248 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.003258 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:40.003264 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:40.003319 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:40.042702 1131323 cri.go:89] found id: ""
	I0328 01:06:40.042737 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.042749 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:40.042759 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:40.042826 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:40.079733 1131323 cri.go:89] found id: ""
	I0328 01:06:40.079769 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.079780 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:40.079788 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:40.079852 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:40.117066 1131323 cri.go:89] found id: ""
	I0328 01:06:40.117098 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.117107 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:40.117117 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:40.117130 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:40.158589 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:40.158623 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:40.210997 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:40.211049 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:40.225419 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:40.225453 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:40.305356 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:40.305385 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:40.305401 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:37.541534 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:39.541905 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:36.874220 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:39.373763 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:38.413719 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:40.912939 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:42.913528 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:42.896394 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:42.912285 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:42.912355 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:42.949381 1131323 cri.go:89] found id: ""
	I0328 01:06:42.949411 1131323 logs.go:276] 0 containers: []
	W0328 01:06:42.949420 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:42.949427 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:42.949496 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:42.985325 1131323 cri.go:89] found id: ""
	I0328 01:06:42.985358 1131323 logs.go:276] 0 containers: []
	W0328 01:06:42.985371 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:42.985388 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:42.985456 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:43.023570 1131323 cri.go:89] found id: ""
	I0328 01:06:43.023616 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.023630 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:43.023638 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:43.023714 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:43.062995 1131323 cri.go:89] found id: ""
	I0328 01:06:43.063025 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.063036 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:43.063042 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:43.063111 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:43.101666 1131323 cri.go:89] found id: ""
	I0328 01:06:43.101704 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.101713 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:43.101720 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:43.101789 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:43.150713 1131323 cri.go:89] found id: ""
	I0328 01:06:43.150745 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.150757 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:43.150765 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:43.150830 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:43.193449 1131323 cri.go:89] found id: ""
	I0328 01:06:43.193479 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.193487 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:43.193495 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:43.193559 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:43.237641 1131323 cri.go:89] found id: ""
	I0328 01:06:43.237673 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.237682 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:43.237698 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:43.237714 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:43.287282 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:43.287320 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:43.303307 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:43.303343 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:43.383597 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:43.383619 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:43.383632 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:43.467874 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:43.467914 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:42.041406 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:44.540550 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:41.874286 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:44.372393 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:45.410973 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:47.412852 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:46.011081 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:46.025731 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:46.025824 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:46.064336 1131323 cri.go:89] found id: ""
	I0328 01:06:46.064371 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.064385 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:46.064394 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:46.064451 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:46.104493 1131323 cri.go:89] found id: ""
	I0328 01:06:46.104530 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.104550 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:46.104559 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:46.104636 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:46.147546 1131323 cri.go:89] found id: ""
	I0328 01:06:46.147582 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.147594 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:46.147602 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:46.147656 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:46.186162 1131323 cri.go:89] found id: ""
	I0328 01:06:46.186197 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.186207 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:46.186213 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:46.186296 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:46.230412 1131323 cri.go:89] found id: ""
	I0328 01:06:46.230450 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.230464 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:46.230473 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:46.230552 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:46.266000 1131323 cri.go:89] found id: ""
	I0328 01:06:46.266037 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.266050 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:46.266059 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:46.266126 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:46.301031 1131323 cri.go:89] found id: ""
	I0328 01:06:46.301065 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.301077 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:46.301084 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:46.301155 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:46.339222 1131323 cri.go:89] found id: ""
	I0328 01:06:46.339248 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.339258 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:46.339271 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:46.339290 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:46.352558 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:46.352595 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:46.427283 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:46.427308 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:46.427325 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:46.512134 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:46.512178 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:46.558276 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:46.558307 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:49.113455 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:49.127554 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:49.127645 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:49.169380 1131323 cri.go:89] found id: ""
	I0328 01:06:49.169421 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.169435 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:49.169444 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:49.169511 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:49.204540 1131323 cri.go:89] found id: ""
	I0328 01:06:49.204568 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.204579 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:49.204596 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:49.204664 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:49.243074 1131323 cri.go:89] found id: ""
	I0328 01:06:49.243102 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.243112 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:49.243119 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:49.243170 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:49.281264 1131323 cri.go:89] found id: ""
	I0328 01:06:49.281301 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.281314 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:49.281322 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:49.281391 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:49.320473 1131323 cri.go:89] found id: ""
	I0328 01:06:49.320505 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.320514 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:49.320521 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:49.320592 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:49.357715 1131323 cri.go:89] found id: ""
	I0328 01:06:49.357749 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.357759 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:49.357766 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:49.357823 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:49.398427 1131323 cri.go:89] found id: ""
	I0328 01:06:49.398464 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.398477 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:49.398498 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:49.398576 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:49.439921 1131323 cri.go:89] found id: ""
	I0328 01:06:49.439956 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.439969 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:49.439982 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:49.440003 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:49.557260 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:49.557289 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:49.557312 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:49.640105 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:49.640169 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:49.683153 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:49.683185 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:49.737420 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:49.737463 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:46.541377 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:49.041761 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:46.374869 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:48.875897 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:49.912535 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:51.912893 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:52.253208 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:52.268572 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:52.268649 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:52.305136 1131323 cri.go:89] found id: ""
	I0328 01:06:52.305180 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.305193 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:52.305202 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:52.305273 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:52.344774 1131323 cri.go:89] found id: ""
	I0328 01:06:52.344806 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.344816 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:52.344823 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:52.344885 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:52.382127 1131323 cri.go:89] found id: ""
	I0328 01:06:52.382174 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.382185 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:52.382200 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:52.382280 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:52.421340 1131323 cri.go:89] found id: ""
	I0328 01:06:52.421368 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.421377 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:52.421383 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:52.421433 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:52.460046 1131323 cri.go:89] found id: ""
	I0328 01:06:52.460084 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.460100 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:52.460107 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:52.460164 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:52.500067 1131323 cri.go:89] found id: ""
	I0328 01:06:52.500094 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.500102 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:52.500109 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:52.500171 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:52.537614 1131323 cri.go:89] found id: ""
	I0328 01:06:52.537646 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.537671 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:52.537680 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:52.537745 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:52.577362 1131323 cri.go:89] found id: ""
	I0328 01:06:52.577392 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.577402 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:52.577417 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:52.577434 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:52.633638 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:52.633689 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:52.650762 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:52.650796 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:52.729436 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:52.729470 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:52.729484 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:52.818193 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:52.818248 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:51.540541 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:53.541340 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:55.542165 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:51.376916 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:53.872313 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:55.873335 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:54.411986 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:56.412892 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:55.362950 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:55.378461 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:55.378577 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:55.419968 1131323 cri.go:89] found id: ""
	I0328 01:06:55.419995 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.420005 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:55.420010 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:55.420072 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:55.464308 1131323 cri.go:89] found id: ""
	I0328 01:06:55.464341 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.464350 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:55.464357 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:55.464421 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:55.523059 1131323 cri.go:89] found id: ""
	I0328 01:06:55.523092 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.523106 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:55.523114 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:55.523186 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:55.570957 1131323 cri.go:89] found id: ""
	I0328 01:06:55.570990 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.571004 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:55.571013 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:55.571077 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:55.606712 1131323 cri.go:89] found id: ""
	I0328 01:06:55.606739 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.606749 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:55.606755 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:55.606817 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:55.646445 1131323 cri.go:89] found id: ""
	I0328 01:06:55.646477 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.646486 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:55.646493 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:55.646548 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:55.685176 1131323 cri.go:89] found id: ""
	I0328 01:06:55.685208 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.685217 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:55.685225 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:55.685289 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:55.722948 1131323 cri.go:89] found id: ""
	I0328 01:06:55.722984 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.722995 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:55.723006 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:55.723022 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:55.797332 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:55.797368 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:55.797385 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:55.877648 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:55.877688 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:55.918966 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:55.918997 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:55.971226 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:55.971272 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:58.488464 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:58.504999 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:58.505088 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:58.549290 1131323 cri.go:89] found id: ""
	I0328 01:06:58.549325 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.549338 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:58.549347 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:58.549414 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:58.589222 1131323 cri.go:89] found id: ""
	I0328 01:06:58.589252 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.589261 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:58.589271 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:58.589337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:58.626470 1131323 cri.go:89] found id: ""
	I0328 01:06:58.626499 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.626508 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:58.626514 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:58.626578 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:58.671634 1131323 cri.go:89] found id: ""
	I0328 01:06:58.671663 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.671674 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:58.671683 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:58.671744 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:58.707335 1131323 cri.go:89] found id: ""
	I0328 01:06:58.707370 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.707381 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:58.707390 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:58.707459 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:58.745635 1131323 cri.go:89] found id: ""
	I0328 01:06:58.745666 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.745679 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:58.745687 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:58.745752 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:58.792172 1131323 cri.go:89] found id: ""
	I0328 01:06:58.792205 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.792216 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:58.792225 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:58.792287 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:58.840027 1131323 cri.go:89] found id: ""
	I0328 01:06:58.840063 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.840075 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:58.840089 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:58.840108 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:58.921964 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:58.921988 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:58.922003 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:59.016935 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:59.016980 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:59.065747 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:59.065788 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:59.119189 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:59.119231 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:58.042362 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:00.544351 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:57.875649 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:00.371953 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:58.406154 1130949 pod_ready.go:81] duration metric: took 4m0.000981669s for pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace to be "Ready" ...
	E0328 01:06:58.406192 1130949 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0328 01:06:58.406218 1130949 pod_ready.go:38] duration metric: took 4m11.713667334s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:06:58.406275 1130949 kubeadm.go:591] duration metric: took 4m19.018883002s to restartPrimaryControlPlane
	W0328 01:06:58.406372 1130949 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:06:58.406432 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:07:01.637081 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:07:01.652557 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:07:01.652634 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:07:01.691795 1131323 cri.go:89] found id: ""
	I0328 01:07:01.691832 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.691846 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:07:01.691854 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:07:01.691927 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:07:01.732815 1131323 cri.go:89] found id: ""
	I0328 01:07:01.732850 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.732861 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:07:01.732868 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:07:01.732938 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:07:01.776370 1131323 cri.go:89] found id: ""
	I0328 01:07:01.776408 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.776422 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:07:01.776431 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:07:01.776501 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:07:01.821260 1131323 cri.go:89] found id: ""
	I0328 01:07:01.821290 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.821301 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:07:01.821308 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:07:01.821377 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:07:01.860666 1131323 cri.go:89] found id: ""
	I0328 01:07:01.860696 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.860708 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:07:01.860719 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:07:01.860787 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:07:01.898255 1131323 cri.go:89] found id: ""
	I0328 01:07:01.898291 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.898304 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:07:01.898314 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:07:01.898383 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:07:01.937770 1131323 cri.go:89] found id: ""
	I0328 01:07:01.937809 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.937822 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:07:01.937830 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:07:01.937901 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:07:01.976946 1131323 cri.go:89] found id: ""
	I0328 01:07:01.976981 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.976994 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:07:01.977008 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:07:01.977027 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:07:02.062804 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:07:02.062845 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:07:02.110750 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:07:02.110783 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:07:02.179633 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:07:02.179677 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:07:02.203131 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:07:02.203181 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:07:02.303281 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:07:04.804238 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:07:04.819654 1131323 kubeadm.go:591] duration metric: took 4m2.527630194s to restartPrimaryControlPlane
	W0328 01:07:04.819747 1131323 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:07:04.819787 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:07:03.041692 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:05.540478 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:02.372472 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:04.376413 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:07.322821 1131323 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.50300166s)
	I0328 01:07:07.322918 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:07:07.338692 1131323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:07:07.349812 1131323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:07:07.361566 1131323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:07:07.361597 1131323 kubeadm.go:156] found existing configuration files:
	
	I0328 01:07:07.361667 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:07:07.372926 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:07:07.373008 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:07:07.383770 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:07:07.394260 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:07:07.394332 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:07:07.405874 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:07:07.417177 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:07:07.417254 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:07:07.428589 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:07:07.438788 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:07:07.438845 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:07:07.449649 1131323 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:07:07.533886 1131323 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0328 01:07:07.533989 1131323 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:07:07.693599 1131323 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:07:07.693736 1131323 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:07:07.693852 1131323 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:07:07.910557 1131323 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:07:07.912634 1131323 out.go:204]   - Generating certificates and keys ...
	I0328 01:07:07.912743 1131323 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:07:07.912855 1131323 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:07:07.912984 1131323 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:07:07.913098 1131323 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:07:07.913212 1131323 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:07:07.913298 1131323 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:07:07.913384 1131323 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:07:07.913569 1131323 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:07:07.913947 1131323 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:07:07.914429 1131323 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:07:07.914649 1131323 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:07:07.914728 1131323 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:07:08.225778 1131323 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:07:08.353927 1131323 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:07:08.631240 1131323 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:07:08.824445 1131323 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:07:08.840240 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:07:08.841200 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:07:08.841315 1131323 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:07:08.997129 1131323 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:07:08.999073 1131323 out.go:204]   - Booting up control plane ...
	I0328 01:07:08.999224 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:07:09.014811 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:07:09.015898 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:07:09.016727 1131323 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:07:09.019426 1131323 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:07:07.541363 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:10.041094 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:06.874606 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:09.372537 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:12.540137 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:14.541608 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:11.372643 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:13.873029 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:16.541814 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:19.047225 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:16.372556 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:18.871954 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:20.872047 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:21.542880 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:24.041786 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:22.872845 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:24.873747 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:26.042186 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:28.541303 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:30.540610 1130949 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.134147754s)
	I0328 01:07:30.540688 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:07:30.558971 1130949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:07:30.570331 1130949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:07:30.581192 1130949 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:07:30.581246 1130949 kubeadm.go:156] found existing configuration files:
	
	I0328 01:07:30.581306 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:07:30.592337 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:07:30.592410 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:07:30.603288 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:07:30.613714 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:07:30.613776 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:07:30.624281 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:07:30.634569 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:07:30.634644 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:07:30.647279 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:07:30.658554 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:07:30.658646 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:07:30.670364 1130949 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:07:30.730349 1130949 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0328 01:07:30.730414 1130949 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:07:30.887056 1130949 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:07:30.887234 1130949 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:07:30.887385 1130949 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:07:31.104288 1130949 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:07:27.373135 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:29.373436 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:31.106496 1130949 out.go:204]   - Generating certificates and keys ...
	I0328 01:07:31.106628 1130949 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:07:31.106697 1130949 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:07:31.106765 1130949 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:07:31.106826 1130949 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:07:31.106892 1130949 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:07:31.107528 1130949 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:07:31.108302 1130949 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:07:31.112246 1130949 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:07:31.112762 1130949 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:07:31.113711 1130949 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:07:31.115230 1130949 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:07:31.115284 1130949 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:07:31.297632 1130949 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:07:32.446275 1130949 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:07:32.565869 1130949 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:07:32.641288 1130949 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:07:32.817229 1130949 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:07:32.817814 1130949 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:07:32.820366 1130949 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:07:32.822328 1130949 out.go:204]   - Booting up control plane ...
	I0328 01:07:32.822467 1130949 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:07:32.822550 1130949 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:07:32.822990 1130949 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:07:32.846800 1130949 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:07:32.847829 1130949 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:07:32.847902 1130949 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:07:31.044103 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:33.542106 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:35.542875 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:31.873591 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:33.875737 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:35.881819 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:32.992001 1130949 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:07:38.997010 1130949 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003888 seconds
	I0328 01:07:39.012971 1130949 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 01:07:39.036328 1130949 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 01:07:39.569806 1130949 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 01:07:39.570135 1130949 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-808809 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 01:07:40.085165 1130949 kubeadm.go:309] [bootstrap-token] Using token: 4zk5zi.uttj4zihedk5oj6k
	I0328 01:07:40.086719 1130949 out.go:204]   - Configuring RBAC rules ...
	I0328 01:07:40.086873 1130949 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 01:07:40.096373 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 01:07:40.106484 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 01:07:40.110525 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 01:07:40.120015 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 01:07:40.129060 1130949 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 01:07:40.141167 1130949 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 01:07:40.415429 1130949 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 01:07:40.507275 1130949 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 01:07:40.507333 1130949 kubeadm.go:309] 
	I0328 01:07:40.507551 1130949 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 01:07:40.507617 1130949 kubeadm.go:309] 
	I0328 01:07:40.507860 1130949 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 01:07:40.507891 1130949 kubeadm.go:309] 
	I0328 01:07:40.507947 1130949 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 01:07:40.508057 1130949 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 01:07:40.508140 1130949 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 01:07:40.508157 1130949 kubeadm.go:309] 
	I0328 01:07:40.508250 1130949 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 01:07:40.508264 1130949 kubeadm.go:309] 
	I0328 01:07:40.508329 1130949 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 01:07:40.508344 1130949 kubeadm.go:309] 
	I0328 01:07:40.508421 1130949 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 01:07:40.508539 1130949 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 01:07:40.508626 1130949 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 01:07:40.508632 1130949 kubeadm.go:309] 
	I0328 01:07:40.508804 1130949 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 01:07:40.508970 1130949 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 01:07:40.508990 1130949 kubeadm.go:309] 
	I0328 01:07:40.509155 1130949 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4zk5zi.uttj4zihedk5oj6k \
	I0328 01:07:40.509474 1130949 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 \
	I0328 01:07:40.509514 1130949 kubeadm.go:309] 	--control-plane 
	I0328 01:07:40.509524 1130949 kubeadm.go:309] 
	I0328 01:07:40.509641 1130949 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 01:07:40.509655 1130949 kubeadm.go:309] 
	I0328 01:07:40.509767 1130949 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4zk5zi.uttj4zihedk5oj6k \
	I0328 01:07:40.509932 1130949 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 
	I0328 01:07:40.510139 1130949 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:07:40.510157 1130949 cni.go:84] Creating CNI manager for ""
	I0328 01:07:40.510166 1130949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:07:40.512099 1130949 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:07:38.041290 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:40.041569 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:38.373789 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:40.374369 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:40.513314 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:07:40.563257 1130949 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:07:40.627024 1130949 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 01:07:40.627097 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:40.627137 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-808809 minikube.k8s.io/updated_at=2024_03_28T01_07_40_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=embed-certs-808809 minikube.k8s.io/primary=true
	I0328 01:07:40.928916 1130949 ops.go:34] apiserver oom_adj: -16
	I0328 01:07:40.929138 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:41.429797 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:41.930103 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:42.429366 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:42.540932 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:44.035055 1131600 pod_ready.go:81] duration metric: took 4m0.000860608s for pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace to be "Ready" ...
	E0328 01:07:44.035094 1131600 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0328 01:07:44.035124 1131600 pod_ready.go:38] duration metric: took 4m14.608998431s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:07:44.035180 1131600 kubeadm.go:591] duration metric: took 4m23.470228903s to restartPrimaryControlPlane
	W0328 01:07:44.035292 1131600 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:07:44.035344 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:07:42.375179 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:44.876120 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:42.929464 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:43.429369 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:43.929241 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:44.429904 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:44.930251 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:45.429816 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:45.930177 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:46.429416 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:46.929152 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:47.429708 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:49.021732 1131323 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0328 01:07:49.021890 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:07:49.022195 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:07:47.373358 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:49.872482 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:47.929139 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:48.429732 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:48.930207 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:49.429230 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:49.929298 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:50.429919 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:50.929364 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:51.429403 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:51.929356 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:52.429410 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:52.929894 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:53.043365 1130949 kubeadm.go:1107] duration metric: took 12.416334145s to wait for elevateKubeSystemPrivileges
	W0328 01:07:53.043410 1130949 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 01:07:53.043419 1130949 kubeadm.go:393] duration metric: took 5m13.709259014s to StartCluster
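Editor's note: the repeated "kubectl get sa default" commands above are minikube polling until the "default" service account exists before it grants elevated RBAC to kube-system (the elevateKubeSystemPrivileges step that completes here). The following is a minimal, hypothetical sketch of the same wait done with client-go instead of shelling out to kubectl; the kubeconfig path is the one used on the node in this run, while the 2-minute deadline and 500ms poll interval are illustrative assumptions, not values taken from the log.

// Hypothetical sketch: poll for the "default" ServiceAccount with client-go
// instead of invoking kubectl repeatedly, as the log above does.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed path; this run uses /var/lib/minikube/kubeconfig on the node.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log polls at roughly this rate
	}
	panic("timed out waiting for the default service account")
}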
	I0328 01:07:53.043445 1130949 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:07:53.043560 1130949 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:07:53.045798 1130949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:07:53.046158 1130949 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 01:07:53.047867 1130949 out.go:177] * Verifying Kubernetes components...
	I0328 01:07:53.046201 1130949 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 01:07:53.046412 1130949 config.go:182] Loaded profile config "embed-certs-808809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:07:53.049163 1130949 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-808809"
	I0328 01:07:53.049175 1130949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:07:53.049195 1130949 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-808809"
	W0328 01:07:53.049204 1130949 addons.go:243] addon storage-provisioner should already be in state true
	I0328 01:07:53.049230 1130949 host.go:66] Checking if "embed-certs-808809" exists ...
	I0328 01:07:53.049205 1130949 addons.go:69] Setting default-storageclass=true in profile "embed-certs-808809"
	I0328 01:07:53.049250 1130949 addons.go:69] Setting metrics-server=true in profile "embed-certs-808809"
	I0328 01:07:53.049271 1130949 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-808809"
	I0328 01:07:53.049309 1130949 addons.go:234] Setting addon metrics-server=true in "embed-certs-808809"
	W0328 01:07:53.049327 1130949 addons.go:243] addon metrics-server should already be in state true
	I0328 01:07:53.049371 1130949 host.go:66] Checking if "embed-certs-808809" exists ...
	I0328 01:07:53.049530 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.049569 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.049696 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.049729 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.049795 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.049838 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.067042 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36143
	I0328 01:07:53.067078 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33653
	I0328 01:07:53.067536 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.067599 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.068156 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.068184 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.068289 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.068315 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.068583 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.068669 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.069095 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.069121 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.069245 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.069276 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.069991 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43737
	I0328 01:07:53.070509 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.071078 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.071103 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.071480 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.071705 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.075617 1130949 addons.go:234] Setting addon default-storageclass=true in "embed-certs-808809"
	W0328 01:07:53.075659 1130949 addons.go:243] addon default-storageclass should already be in state true
	I0328 01:07:53.075703 1130949 host.go:66] Checking if "embed-certs-808809" exists ...
	I0328 01:07:53.075982 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.076011 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.085991 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I0328 01:07:53.086508 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.086724 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36033
	I0328 01:07:53.087105 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.087122 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.087158 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.087646 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.087667 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.087706 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.087922 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.088031 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.088225 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.089941 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:07:53.090168 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:07:53.091945 1130949 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:07:53.093023 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41613
	I0328 01:07:53.093537 1130949 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:07:53.093553 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 01:07:53.093563 1130949 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 01:07:53.095147 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 01:07:53.095165 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 01:07:53.093574 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:07:53.095185 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:07:53.093939 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.096301 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.096322 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.096662 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.097251 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.097306 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.098907 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.099014 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.099513 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:07:53.099546 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.099996 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:07:53.100126 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:07:53.100177 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:07:53.100187 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.100287 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:07:53.100392 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:07:53.100470 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:07:53.100576 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:07:53.100709 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:07:53.100796 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:07:53.114056 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41121
	I0328 01:07:53.114680 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.115279 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.115313 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.115721 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.116061 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.118022 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:07:53.118348 1130949 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 01:07:53.118370 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 01:07:53.118391 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:07:53.121337 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.121699 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:07:53.121728 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.121906 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:07:53.122084 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:07:53.122266 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:07:53.122414 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:07:53.242121 1130949 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:07:53.267118 1130949 node_ready.go:35] waiting up to 6m0s for node "embed-certs-808809" to be "Ready" ...
	I0328 01:07:53.276640 1130949 node_ready.go:49] node "embed-certs-808809" has status "Ready":"True"
	I0328 01:07:53.276670 1130949 node_ready.go:38] duration metric: took 9.513599ms for node "embed-certs-808809" to be "Ready" ...
	I0328 01:07:53.276683 1130949 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:07:53.283091 1130949 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-2rn6k" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:53.325201 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 01:07:53.325234 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 01:07:53.341335 1130949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 01:07:53.361084 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 01:07:53.361109 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 01:07:53.393089 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:07:53.393116 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 01:07:53.419245 1130949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:07:53.445663 1130949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:07:53.515515 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:53.515555 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:53.515871 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:53.515891 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:53.515901 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:53.515910 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:53.516173 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:53.516253 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:53.516212 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:53.527854 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:53.527882 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:53.528152 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:53.528173 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:53.528220 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.159164 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159192 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159264 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159292 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159523 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.159597 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.159619 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.159637 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.159648 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159658 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159660 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.159667 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.159688 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159696 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159981 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.160037 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.160056 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.160062 1130949 addons.go:470] Verifying addon metrics-server=true in "embed-certs-808809"
	I0328 01:07:54.160088 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.160090 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.160106 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.162879 1130949 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0328 01:07:54.022449 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:07:54.022704 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:07:52.372314 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:54.372913 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:54.164263 1130949 addons.go:505] duration metric: took 1.11806212s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0328 01:07:55.294728 1130949 pod_ready.go:102] pod "coredns-76f75df574-2rn6k" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:55.790690 1130949 pod_ready.go:92] pod "coredns-76f75df574-2rn6k" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.790717 1130949 pod_ready.go:81] duration metric: took 2.50759161s for pod "coredns-76f75df574-2rn6k" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.790726 1130949 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-pgcdh" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.796249 1130949 pod_ready.go:92] pod "coredns-76f75df574-pgcdh" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.796279 1130949 pod_ready.go:81] duration metric: took 5.54233ms for pod "coredns-76f75df574-pgcdh" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.796291 1130949 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.801226 1130949 pod_ready.go:92] pod "etcd-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.801254 1130949 pod_ready.go:81] duration metric: took 4.956106ms for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.801263 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.814571 1130949 pod_ready.go:92] pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.814599 1130949 pod_ready.go:81] duration metric: took 13.328662ms for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.814613 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.825995 1130949 pod_ready.go:92] pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.826022 1130949 pod_ready.go:81] duration metric: took 11.401096ms for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.826035 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tjbhs" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.188116 1130949 pod_ready.go:92] pod "kube-proxy-tjbhs" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:56.188147 1130949 pod_ready.go:81] duration metric: took 362.103962ms for pod "kube-proxy-tjbhs" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.188161 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.588294 1130949 pod_ready.go:92] pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:56.588334 1130949 pod_ready.go:81] duration metric: took 400.16517ms for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.588347 1130949 pod_ready.go:38] duration metric: took 3.311651338s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
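Editor's note: the pod_ready lines above boil down to fetching each system-critical pod and checking its PodReady condition. The sketch below shows that check with client-go; it is illustrative only. The pod name is one that appears in this log, and the default kubeconfig location is an assumption (the test itself runs the check over SSH against the node's kubeconfig).

// Hypothetical sketch of the "Ready" test behind the pod_ready log lines:
// fetch a pod and look for the PodReady condition with status True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location; adjust for where the cluster credentials live.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-808809", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, podIsReady(pod))
}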
	I0328 01:07:56.588369 1130949 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:07:56.588445 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:07:56.606404 1130949 api_server.go:72] duration metric: took 3.560197315s to wait for apiserver process to appear ...
	I0328 01:07:56.606435 1130949 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:07:56.606460 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:07:56.612218 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 200:
	ok
	I0328 01:07:56.613459 1130949 api_server.go:141] control plane version: v1.29.3
	I0328 01:07:56.613481 1130949 api_server.go:131] duration metric: took 7.039378ms to wait for apiserver health ...
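Editor's note: the healthz probe above issues an HTTPS GET against the apiserver endpoint from this run (https://192.168.72.210:8443/healthz) and expects a 200 with body "ok". A small sketch of an equivalent probe follows; skipping TLS verification is an assumption made here for brevity, whereas the real check trusts the cluster CA.

// Hypothetical sketch of an apiserver /healthz probe like the one logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for brevity: skip CA setup; the real check uses the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.72.210:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
}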
	I0328 01:07:56.613490 1130949 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:07:56.793192 1130949 system_pods.go:59] 9 kube-system pods found
	I0328 01:07:56.793227 1130949 system_pods.go:61] "coredns-76f75df574-2rn6k" [2a77c778-dd83-4e2e-b45a-ca16e3922b45] Running
	I0328 01:07:56.793232 1130949 system_pods.go:61] "coredns-76f75df574-pgcdh" [52452b24-490e-4999-b700-198c6f9b2fa1] Running
	I0328 01:07:56.793236 1130949 system_pods.go:61] "etcd-embed-certs-808809" [cba526ab-c8ca-4b90-abb8-533d1bf6cb4f] Running
	I0328 01:07:56.793239 1130949 system_pods.go:61] "kube-apiserver-embed-certs-808809" [02934ac6-1e07-4cbe-ba2f-4149e59d6044] Running
	I0328 01:07:56.793243 1130949 system_pods.go:61] "kube-controller-manager-embed-certs-808809" [b3350ff9-7abb-407e-939c-fcde61186c17] Running
	I0328 01:07:56.793246 1130949 system_pods.go:61] "kube-proxy-tjbhs" [cdb30ca1-5165-4e24-888a-df79af7987d0] Running
	I0328 01:07:56.793249 1130949 system_pods.go:61] "kube-scheduler-embed-certs-808809" [5b52a22d-6ffb-468b-b7b9-8f89b0dee3b8] Running
	I0328 01:07:56.793255 1130949 system_pods.go:61] "metrics-server-57f55c9bc5-bqbfl" [8434fd7d-838b-4cf2-96a3-e4d613633871] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:07:56.793260 1130949 system_pods.go:61] "storage-provisioner" [20c1951e-7da8-4025-bbcf-2da60f87f3ab] Running
	I0328 01:07:56.793268 1130949 system_pods.go:74] duration metric: took 179.77213ms to wait for pod list to return data ...
	I0328 01:07:56.793275 1130949 default_sa.go:34] waiting for default service account to be created ...
	I0328 01:07:56.988234 1130949 default_sa.go:45] found service account: "default"
	I0328 01:07:56.988274 1130949 default_sa.go:55] duration metric: took 194.984089ms for default service account to be created ...
	I0328 01:07:56.988288 1130949 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 01:07:57.192153 1130949 system_pods.go:86] 9 kube-system pods found
	I0328 01:07:57.192188 1130949 system_pods.go:89] "coredns-76f75df574-2rn6k" [2a77c778-dd83-4e2e-b45a-ca16e3922b45] Running
	I0328 01:07:57.192194 1130949 system_pods.go:89] "coredns-76f75df574-pgcdh" [52452b24-490e-4999-b700-198c6f9b2fa1] Running
	I0328 01:07:57.192200 1130949 system_pods.go:89] "etcd-embed-certs-808809" [cba526ab-c8ca-4b90-abb8-533d1bf6cb4f] Running
	I0328 01:07:57.192205 1130949 system_pods.go:89] "kube-apiserver-embed-certs-808809" [02934ac6-1e07-4cbe-ba2f-4149e59d6044] Running
	I0328 01:07:57.192210 1130949 system_pods.go:89] "kube-controller-manager-embed-certs-808809" [b3350ff9-7abb-407e-939c-fcde61186c17] Running
	I0328 01:07:57.192214 1130949 system_pods.go:89] "kube-proxy-tjbhs" [cdb30ca1-5165-4e24-888a-df79af7987d0] Running
	I0328 01:07:57.192218 1130949 system_pods.go:89] "kube-scheduler-embed-certs-808809" [5b52a22d-6ffb-468b-b7b9-8f89b0dee3b8] Running
	I0328 01:07:57.192225 1130949 system_pods.go:89] "metrics-server-57f55c9bc5-bqbfl" [8434fd7d-838b-4cf2-96a3-e4d613633871] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:07:57.192230 1130949 system_pods.go:89] "storage-provisioner" [20c1951e-7da8-4025-bbcf-2da60f87f3ab] Running
	I0328 01:07:57.192239 1130949 system_pods.go:126] duration metric: took 203.942878ms to wait for k8s-apps to be running ...
	I0328 01:07:57.192249 1130949 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:07:57.192301 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:07:57.209840 1130949 system_svc.go:56] duration metric: took 17.576605ms WaitForService to wait for kubelet
	I0328 01:07:57.209883 1130949 kubeadm.go:576] duration metric: took 4.163683877s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:07:57.209918 1130949 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:07:57.388321 1130949 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:07:57.388347 1130949 node_conditions.go:123] node cpu capacity is 2
	I0328 01:07:57.388357 1130949 node_conditions.go:105] duration metric: took 178.433633ms to run NodePressure ...
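Editor's note: the NodePressure verification above reads the node's capacity (ephemeral storage 17734596Ki, 2 CPUs in this run) and its pressure conditions. A hedged client-go sketch of reading the same fields is shown below; the node name comes from this log and the kubeconfig location is assumed.

// Hypothetical sketch of reading the node capacity and pressure conditions
// reported in the NodePressure lines above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "embed-certs-808809", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("ephemeral-storage=%s cpu=%s\n", storage.String(), cpu.String())
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
			fmt.Printf("%s=%s\n", c.Type, c.Status)
		}
	}
}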
	I0328 01:07:57.388370 1130949 start.go:240] waiting for startup goroutines ...
	I0328 01:07:57.388377 1130949 start.go:245] waiting for cluster config update ...
	I0328 01:07:57.388387 1130949 start.go:254] writing updated cluster config ...
	I0328 01:07:57.388784 1130949 ssh_runner.go:195] Run: rm -f paused
	I0328 01:07:57.446699 1130949 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0328 01:07:57.448951 1130949 out.go:177] * Done! kubectl is now configured to use "embed-certs-808809" cluster and "default" namespace by default
	I0328 01:07:56.373123 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:58.872454 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:04.023273 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:08:04.023535 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:08:01.372711 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:03.877734 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:06.374031 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:07.366164 1130827 pod_ready.go:81] duration metric: took 4m0.000887668s for pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace to be "Ready" ...
	E0328 01:08:07.366245 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0328 01:08:07.366271 1130827 pod_ready.go:38] duration metric: took 4m7.906522585s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:08:07.366301 1130827 kubeadm.go:591] duration metric: took 4m15.27169704s to restartPrimaryControlPlane
	W0328 01:08:07.366368 1130827 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:08:07.366406 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:08:16.281280 1131600 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.245904746s)
	I0328 01:08:16.281365 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:08:16.298463 1131600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:08:16.310406 1131600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:08:16.321387 1131600 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:08:16.321415 1131600 kubeadm.go:156] found existing configuration files:
	
	I0328 01:08:16.321475 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0328 01:08:16.331965 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:08:16.332033 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:08:16.343030 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0328 01:08:16.353193 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:08:16.353254 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:08:16.363865 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0328 01:08:16.374276 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:08:16.374346 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:08:16.385300 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0328 01:08:16.396118 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:08:16.396181 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
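Editor's note: the grep/rm sequence above is the stale-config cleanup: each existing kubeconfig under /etc/kubernetes is kept only if it still references the expected endpoint for this profile (https://control-plane.minikube.internal:8444), otherwise it is removed so kubeadm can regenerate it. A minimal sketch of that substring check follows; it is an illustration of the logged behaviour, not minikube's actual implementation.

// Hypothetical sketch of the stale-config check logged above: keep a kubeconfig
// only if it still references the expected API endpoint, otherwise remove it.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8444" // endpoint from this run
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: drop it so kubeadm regenerates it.
			_ = os.Remove(f)
			fmt.Printf("removed (or absent): %s\n", f)
			continue
		}
		fmt.Printf("kept: %s\n", f)
	}
}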
	I0328 01:08:16.406896 1131600 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:08:16.626615 1131600 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:08:24.024091 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:08:24.024388 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:08:25.420974 1131600 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0328 01:08:25.421059 1131600 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:08:25.421154 1131600 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:08:25.421300 1131600 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:08:25.421547 1131600 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:08:25.421649 1131600 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:08:25.423435 1131600 out.go:204]   - Generating certificates and keys ...
	I0328 01:08:25.423549 1131600 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:08:25.423630 1131600 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:08:25.423749 1131600 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:08:25.423844 1131600 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:08:25.423956 1131600 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:08:25.424058 1131600 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:08:25.424166 1131600 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:08:25.424260 1131600 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:08:25.424375 1131600 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:08:25.424489 1131600 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:08:25.424552 1131600 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:08:25.424642 1131600 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:08:25.424700 1131600 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:08:25.424765 1131600 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:08:25.424832 1131600 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:08:25.424920 1131600 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:08:25.424982 1131600 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:08:25.425106 1131600 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:08:25.425207 1131600 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:08:25.426863 1131600 out.go:204]   - Booting up control plane ...
	I0328 01:08:25.427001 1131600 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:08:25.427108 1131600 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:08:25.427205 1131600 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:08:25.427327 1131600 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:08:25.427431 1131600 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:08:25.427491 1131600 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:08:25.427686 1131600 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:08:25.427784 1131600 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003000 seconds
	I0328 01:08:25.427897 1131600 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 01:08:25.428032 1131600 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 01:08:25.428109 1131600 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 01:08:25.428325 1131600 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-283961 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 01:08:25.428408 1131600 kubeadm.go:309] [bootstrap-token] Using token: g6jusr.8nbqw788gjbu8fwz
	I0328 01:08:25.430595 1131600 out.go:204]   - Configuring RBAC rules ...
	I0328 01:08:25.430734 1131600 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 01:08:25.430837 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 01:08:25.430981 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 01:08:25.431163 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 01:08:25.431357 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 01:08:25.431481 1131600 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 01:08:25.431670 1131600 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 01:08:25.431726 1131600 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 01:08:25.431767 1131600 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 01:08:25.431774 1131600 kubeadm.go:309] 
	I0328 01:08:25.431819 1131600 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 01:08:25.431829 1131600 kubeadm.go:309] 
	I0328 01:08:25.431893 1131600 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 01:08:25.431900 1131600 kubeadm.go:309] 
	I0328 01:08:25.431934 1131600 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 01:08:25.432028 1131600 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 01:08:25.432089 1131600 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 01:08:25.432114 1131600 kubeadm.go:309] 
	I0328 01:08:25.432178 1131600 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 01:08:25.432186 1131600 kubeadm.go:309] 
	I0328 01:08:25.432245 1131600 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 01:08:25.432255 1131600 kubeadm.go:309] 
	I0328 01:08:25.432342 1131600 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 01:08:25.432454 1131600 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 01:08:25.432566 1131600 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 01:08:25.432576 1131600 kubeadm.go:309] 
	I0328 01:08:25.432719 1131600 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 01:08:25.432812 1131600 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 01:08:25.432825 1131600 kubeadm.go:309] 
	I0328 01:08:25.432914 1131600 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token g6jusr.8nbqw788gjbu8fwz \
	I0328 01:08:25.433018 1131600 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 \
	I0328 01:08:25.433052 1131600 kubeadm.go:309] 	--control-plane 
	I0328 01:08:25.433058 1131600 kubeadm.go:309] 
	I0328 01:08:25.433135 1131600 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 01:08:25.433143 1131600 kubeadm.go:309] 
	I0328 01:08:25.433222 1131600 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token g6jusr.8nbqw788gjbu8fwz \
	I0328 01:08:25.433318 1131600 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 
	I0328 01:08:25.433337 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:08:25.433346 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:08:25.434943 1131600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:08:25.436103 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:08:25.483149 1131600 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:08:25.508422 1131600 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 01:08:25.508514 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:25.508518 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-283961 minikube.k8s.io/updated_at=2024_03_28T01_08_25_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=default-k8s-diff-port-283961 minikube.k8s.io/primary=true
	I0328 01:08:25.537955 1131600 ops.go:34] apiserver oom_adj: -16
	I0328 01:08:25.738462 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:26.239473 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:26.739478 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:27.238883 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:27.738830 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:28.239281 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:28.738643 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:29.238703 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:29.739025 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:30.239127 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:30.739473 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:31.239461 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:31.739480 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:32.239525 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:32.738543 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:33.239468 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:33.739475 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:34.238558 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:34.739550 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:35.239400 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:35.738766 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:36.239384 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:36.738797 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:37.238736 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:37.739543 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:37.850963 1131600 kubeadm.go:1107] duration metric: took 12.342521507s to wait for elevateKubeSystemPrivileges
	W0328 01:08:37.851011 1131600 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 01:08:37.851024 1131600 kubeadm.go:393] duration metric: took 5m17.339661641s to StartCluster
	I0328 01:08:37.851048 1131600 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:08:37.851164 1131600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:08:37.853862 1131600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:08:37.854264 1131600 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 01:08:37.856170 1131600 out.go:177] * Verifying Kubernetes components...
	I0328 01:08:37.854341 1131600 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 01:08:37.854447 1131600 config.go:182] Loaded profile config "default-k8s-diff-port-283961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:08:37.857860 1131600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:08:37.857864 1131600 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-283961"
	I0328 01:08:37.857878 1131600 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-283961"
	I0328 01:08:37.857885 1131600 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-283961"
	I0328 01:08:37.857909 1131600 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-283961"
	I0328 01:08:37.857912 1131600 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-283961"
	W0328 01:08:37.857923 1131600 addons.go:243] addon storage-provisioner should already be in state true
	I0328 01:08:37.857928 1131600 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-283961"
	W0328 01:08:37.857941 1131600 addons.go:243] addon metrics-server should already be in state true
	I0328 01:08:37.857970 1131600 host.go:66] Checking if "default-k8s-diff-port-283961" exists ...
	I0328 01:08:37.857983 1131600 host.go:66] Checking if "default-k8s-diff-port-283961" exists ...
	I0328 01:08:37.858330 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.858363 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.858403 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.858436 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.858335 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.858509 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.881197 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32799
	I0328 01:08:37.881230 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35101
	I0328 01:08:37.881244 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0328 01:08:37.881857 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.881882 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.882021 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.882460 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.882482 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.882523 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.882540 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.882585 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.882601 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.882934 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.882992 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.883007 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.883239 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.883592 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.883620 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.883625 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.883644 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.887335 1131600 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-283961"
	W0328 01:08:37.887359 1131600 addons.go:243] addon default-storageclass should already be in state true
	I0328 01:08:37.887390 1131600 host.go:66] Checking if "default-k8s-diff-port-283961" exists ...
	I0328 01:08:37.887745 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.887779 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.901416 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37383
	I0328 01:08:37.901909 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.902530 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.902559 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.902967 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.903211 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.904529 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36633
	I0328 01:08:37.905034 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.905268 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:08:37.907486 1131600 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:08:37.905802 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.909062 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.909180 1131600 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:08:37.909196 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 01:08:37.909218 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:08:37.909555 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.909794 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.911251 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33539
	I0328 01:08:37.911845 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.911995 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:08:37.913838 1131600 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 01:08:37.912457 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.913039 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.913804 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:08:37.915256 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.915268 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:08:37.915288 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 01:08:37.915297 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.915303 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 01:08:37.915321 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:08:37.915492 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:08:37.915674 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:08:37.915894 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:08:37.916689 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.917364 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.917410 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.918302 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.918651 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:08:37.918678 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.918944 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:08:37.919117 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:08:37.919267 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:08:37.919386 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:08:37.935233 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45659
	I0328 01:08:37.935750 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.936283 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.936301 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.936691 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.936872 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.938736 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:08:37.939016 1131600 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 01:08:37.939042 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 01:08:37.939065 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:08:37.941653 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.941967 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:08:37.941991 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.942199 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:08:37.942405 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:08:37.942575 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:08:37.942761 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:08:38.109817 1131600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:08:38.134996 1131600 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-283961" to be "Ready" ...
	I0328 01:08:38.158252 1131600 node_ready.go:49] node "default-k8s-diff-port-283961" has status "Ready":"True"
	I0328 01:08:38.158286 1131600 node_ready.go:38] duration metric: took 23.249221ms for node "default-k8s-diff-port-283961" to be "Ready" ...
	I0328 01:08:38.158305 1131600 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:08:38.170391 1131600 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-gdv5x" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:38.277223 1131600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:08:38.299923 1131600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 01:08:38.300686 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 01:08:38.300707 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 01:08:38.355800 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 01:08:38.355837 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 01:08:38.464742 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:08:38.464769 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 01:08:38.542696 1131600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:08:39.644116 1131600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.344141889s)
	I0328 01:08:39.644184 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644189 1131600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.366934481s)
	I0328 01:08:39.644197 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644210 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644219 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644620 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.644644 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.644654 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644664 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644846 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.644865 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.644890 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644905 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644987 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.645004 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.645154 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.645171 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.708104 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.708143 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.708543 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.708567 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.739487 1131600 pod_ready.go:92] pod "coredns-76f75df574-gdv5x" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.739515 1131600 pod_ready.go:81] duration metric: took 1.569088177s for pod "coredns-76f75df574-gdv5x" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.739526 1131600 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-qzcfp" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.797314 1131600 pod_ready.go:92] pod "coredns-76f75df574-qzcfp" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.797347 1131600 pod_ready.go:81] duration metric: took 57.813218ms for pod "coredns-76f75df574-qzcfp" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.797366 1131600 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.830784 1131600 pod_ready.go:92] pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.830865 1131600 pod_ready.go:81] duration metric: took 33.488753ms for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.830886 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.852459 1131600 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.852489 1131600 pod_ready.go:81] duration metric: took 21.594748ms for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.852501 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.862630 1131600 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.862658 1131600 pod_ready.go:81] duration metric: took 10.149867ms for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.862674 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-js7j2" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.893124 1131600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.350363727s)
	I0328 01:08:39.893191 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.893216 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.893559 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Closing plugin on server side
	I0328 01:08:39.893568 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.893617 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.893634 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.893646 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.893985 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Closing plugin on server side
	I0328 01:08:39.893985 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.894013 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.894031 1131600 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-283961"
	I0328 01:08:39.896978 1131600 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0328 01:08:39.898636 1131600 addons.go:505] duration metric: took 2.044292782s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0328 01:08:40.138962 1131600 pod_ready.go:92] pod "kube-proxy-js7j2" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:40.138994 1131600 pod_ready.go:81] duration metric: took 276.313147ms for pod "kube-proxy-js7j2" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:40.139006 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:40.538892 1131600 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:40.538917 1131600 pod_ready.go:81] duration metric: took 399.903327ms for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:40.538925 1131600 pod_ready.go:38] duration metric: took 2.380606168s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:08:40.538943 1131600 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:08:40.539009 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:08:40.561639 1131600 api_server.go:72] duration metric: took 2.707321816s to wait for apiserver process to appear ...
	I0328 01:08:40.561681 1131600 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:08:40.561709 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:08:40.568521 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 200:
	ok
	I0328 01:08:40.570016 1131600 api_server.go:141] control plane version: v1.29.3
	I0328 01:08:40.570060 1131600 api_server.go:131] duration metric: took 8.369036ms to wait for apiserver health ...
	I0328 01:08:40.570071 1131600 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:08:39.696094 1130827 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.32965227s)
	I0328 01:08:39.696193 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:08:39.717556 1130827 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:08:39.730434 1130827 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:08:39.746521 1130827 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:08:39.746567 1130827 kubeadm.go:156] found existing configuration files:
	
	I0328 01:08:39.746644 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:08:39.758252 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:08:39.758352 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:08:39.771929 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:08:39.785312 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:08:39.785400 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:08:39.800685 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:08:39.814982 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:08:39.815073 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:08:39.828804 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:08:39.841984 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:08:39.842074 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:08:39.854502 1130827 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:08:40.089742 1130827 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:08:40.742900 1131600 system_pods.go:59] 9 kube-system pods found
	I0328 01:08:40.742938 1131600 system_pods.go:61] "coredns-76f75df574-gdv5x" [5b4b835c-ae9d-4eff-ab37-6ccb7e36a748] Running
	I0328 01:08:40.742945 1131600 system_pods.go:61] "coredns-76f75df574-qzcfp" [8e7bfa94-f249-4f7a-be7b-9a615810c956] Running
	I0328 01:08:40.742951 1131600 system_pods.go:61] "etcd-default-k8s-diff-port-283961" [eb26a2e2-882b-4bc0-9ca1-7fe88b1c6c7e] Running
	I0328 01:08:40.742958 1131600 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-283961" [1c31e7a3-507e-43ea-96cf-1d182a1e0875] Running
	I0328 01:08:40.742964 1131600 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-283961" [5bbc6650-b37c-4b10-bc6e-cceb214f8881] Running
	I0328 01:08:40.742968 1131600 system_pods.go:61] "kube-proxy-js7j2" [1f31a6ee-9417-4f4a-ba0b-fdbab6a9169d] Running
	I0328 01:08:40.742972 1131600 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-283961" [3a5691f7-d6d4-45e7-aea7-94fe1cfa541a] Running
	I0328 01:08:40.742980 1131600 system_pods.go:61] "metrics-server-57f55c9bc5-gkv67" [7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:08:40.742986 1131600 system_pods.go:61] "storage-provisioner" [cb80efe2-521f-45d5-84e7-f6dc216b4c6d] Running
	I0328 01:08:40.742998 1131600 system_pods.go:74] duration metric: took 172.918886ms to wait for pod list to return data ...
	I0328 01:08:40.743010 1131600 default_sa.go:34] waiting for default service account to be created ...
	I0328 01:08:40.939208 1131600 default_sa.go:45] found service account: "default"
	I0328 01:08:40.939255 1131600 default_sa.go:55] duration metric: took 196.220048ms for default service account to be created ...
	I0328 01:08:40.939266 1131600 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 01:08:41.144986 1131600 system_pods.go:86] 9 kube-system pods found
	I0328 01:08:41.145023 1131600 system_pods.go:89] "coredns-76f75df574-gdv5x" [5b4b835c-ae9d-4eff-ab37-6ccb7e36a748] Running
	I0328 01:08:41.145030 1131600 system_pods.go:89] "coredns-76f75df574-qzcfp" [8e7bfa94-f249-4f7a-be7b-9a615810c956] Running
	I0328 01:08:41.145034 1131600 system_pods.go:89] "etcd-default-k8s-diff-port-283961" [eb26a2e2-882b-4bc0-9ca1-7fe88b1c6c7e] Running
	I0328 01:08:41.145039 1131600 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-283961" [1c31e7a3-507e-43ea-96cf-1d182a1e0875] Running
	I0328 01:08:41.145043 1131600 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-283961" [5bbc6650-b37c-4b10-bc6e-cceb214f8881] Running
	I0328 01:08:41.145047 1131600 system_pods.go:89] "kube-proxy-js7j2" [1f31a6ee-9417-4f4a-ba0b-fdbab6a9169d] Running
	I0328 01:08:41.145051 1131600 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-283961" [3a5691f7-d6d4-45e7-aea7-94fe1cfa541a] Running
	I0328 01:08:41.145058 1131600 system_pods.go:89] "metrics-server-57f55c9bc5-gkv67" [7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:08:41.145062 1131600 system_pods.go:89] "storage-provisioner" [cb80efe2-521f-45d5-84e7-f6dc216b4c6d] Running
	I0328 01:08:41.145072 1131600 system_pods.go:126] duration metric: took 205.800485ms to wait for k8s-apps to be running ...
	I0328 01:08:41.145083 1131600 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:08:41.145131 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:08:41.163220 1131600 system_svc.go:56] duration metric: took 18.120266ms WaitForService to wait for kubelet
	I0328 01:08:41.163255 1131600 kubeadm.go:576] duration metric: took 3.308947131s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:08:41.163280 1131600 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:08:41.339219 1131600 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:08:41.339247 1131600 node_conditions.go:123] node cpu capacity is 2
	I0328 01:08:41.339292 1131600 node_conditions.go:105] duration metric: took 176.004328ms to run NodePressure ...
	I0328 01:08:41.339306 1131600 start.go:240] waiting for startup goroutines ...
	I0328 01:08:41.339317 1131600 start.go:245] waiting for cluster config update ...
	I0328 01:08:41.339334 1131600 start.go:254] writing updated cluster config ...
	I0328 01:08:41.339656 1131600 ssh_runner.go:195] Run: rm -f paused
	I0328 01:08:41.399111 1131600 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0328 01:08:41.401360 1131600 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-283961" cluster and "default" namespace by default
	I0328 01:08:49.653091 1130827 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-beta.0
	I0328 01:08:49.653205 1130827 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:08:49.653327 1130827 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:08:49.653468 1130827 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:08:49.653576 1130827 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:08:49.653666 1130827 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:08:49.656419 1130827 out.go:204]   - Generating certificates and keys ...
	I0328 01:08:49.656503 1130827 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:08:49.656583 1130827 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:08:49.656669 1130827 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:08:49.656775 1130827 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:08:49.656903 1130827 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:08:49.656973 1130827 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:08:49.657057 1130827 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:08:49.657138 1130827 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:08:49.657246 1130827 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:08:49.657362 1130827 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:08:49.657415 1130827 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:08:49.657510 1130827 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:08:49.657601 1130827 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:08:49.657713 1130827 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:08:49.657811 1130827 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:08:49.657900 1130827 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:08:49.657980 1130827 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:08:49.658074 1130827 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:08:49.658160 1130827 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:08:49.659588 1130827 out.go:204]   - Booting up control plane ...
	I0328 01:08:49.659669 1130827 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:08:49.659771 1130827 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:08:49.659855 1130827 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:08:49.659962 1130827 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:08:49.660075 1130827 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:08:49.660139 1130827 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:08:49.660309 1130827 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0328 01:08:49.660426 1130827 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0328 01:08:49.660518 1130827 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.594495ms
	I0328 01:08:49.660610 1130827 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0328 01:08:49.660691 1130827 kubeadm.go:309] [api-check] The API server is healthy after 5.502996727s
	I0328 01:08:49.660830 1130827 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 01:08:49.660975 1130827 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 01:08:49.661028 1130827 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 01:08:49.661198 1130827 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-248059 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 01:08:49.661283 1130827 kubeadm.go:309] [bootstrap-token] Using token: 4jnfa0.q3dre6ogqbxtw8j0
	I0328 01:08:49.662907 1130827 out.go:204]   - Configuring RBAC rules ...
	I0328 01:08:49.663014 1130827 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 01:08:49.663090 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 01:08:49.663239 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 01:08:49.663379 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 01:08:49.663484 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 01:08:49.663576 1130827 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 01:08:49.663688 1130827 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 01:08:49.663750 1130827 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 01:08:49.663811 1130827 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 01:08:49.663820 1130827 kubeadm.go:309] 
	I0328 01:08:49.663871 1130827 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 01:08:49.663877 1130827 kubeadm.go:309] 
	I0328 01:08:49.663976 1130827 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 01:08:49.663984 1130827 kubeadm.go:309] 
	I0328 01:08:49.664004 1130827 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 01:08:49.664080 1130827 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 01:08:49.664144 1130827 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 01:08:49.664151 1130827 kubeadm.go:309] 
	I0328 01:08:49.664202 1130827 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 01:08:49.664209 1130827 kubeadm.go:309] 
	I0328 01:08:49.664246 1130827 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 01:08:49.664252 1130827 kubeadm.go:309] 
	I0328 01:08:49.664301 1130827 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 01:08:49.664370 1130827 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 01:08:49.664436 1130827 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 01:08:49.664444 1130827 kubeadm.go:309] 
	I0328 01:08:49.664515 1130827 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 01:08:49.664600 1130827 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 01:08:49.664607 1130827 kubeadm.go:309] 
	I0328 01:08:49.664678 1130827 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4jnfa0.q3dre6ogqbxtw8j0 \
	I0328 01:08:49.664764 1130827 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 \
	I0328 01:08:49.664783 1130827 kubeadm.go:309] 	--control-plane 
	I0328 01:08:49.664789 1130827 kubeadm.go:309] 
	I0328 01:08:49.664856 1130827 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 01:08:49.664863 1130827 kubeadm.go:309] 
	I0328 01:08:49.664938 1130827 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4jnfa0.q3dre6ogqbxtw8j0 \
	I0328 01:08:49.665073 1130827 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 
	I0328 01:08:49.665117 1130827 cni.go:84] Creating CNI manager for ""
	I0328 01:08:49.665130 1130827 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:08:49.667556 1130827 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:08:49.668776 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:08:49.680262 1130827 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:08:49.701490 1130827 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 01:08:49.701557 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:49.701606 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-248059 minikube.k8s.io/updated_at=2024_03_28T01_08_49_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=no-preload-248059 minikube.k8s.io/primary=true
	I0328 01:08:49.734009 1130827 ops.go:34] apiserver oom_adj: -16
	I0328 01:08:49.901866 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:50.402635 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:50.902480 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:51.402417 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:51.902253 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:52.402411 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:52.901926 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:53.402394 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:53.902738 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:54.401878 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:54.901920 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:55.401878 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:55.902140 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:56.402863 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:56.901970 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:57.402088 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:57.901869 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:58.402056 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:58.902333 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:59.402753 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:59.902930 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:00.402623 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:00.901863 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:01.402264 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:01.902054 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:02.402212 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:02.503310 1130827 kubeadm.go:1107] duration metric: took 12.80181586s to wait for elevateKubeSystemPrivileges
	W0328 01:09:02.503352 1130827 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 01:09:02.503362 1130827 kubeadm.go:393] duration metric: took 5m10.46697508s to StartCluster
	I0328 01:09:02.503380 1130827 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:09:02.503482 1130827 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:09:02.505909 1130827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:09:02.506302 1130827 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 01:09:02.508103 1130827 out.go:177] * Verifying Kubernetes components...
	I0328 01:09:02.506385 1130827 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 01:09:02.506502 1130827 config.go:182] Loaded profile config "no-preload-248059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0328 01:09:02.509509 1130827 addons.go:69] Setting default-storageclass=true in profile "no-preload-248059"
	I0328 01:09:02.509519 1130827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:09:02.509517 1130827 addons.go:69] Setting metrics-server=true in profile "no-preload-248059"
	I0328 01:09:02.509542 1130827 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-248059"
	I0328 01:09:02.509559 1130827 addons.go:234] Setting addon metrics-server=true in "no-preload-248059"
	W0328 01:09:02.509580 1130827 addons.go:243] addon metrics-server should already be in state true
	I0328 01:09:02.509509 1130827 addons.go:69] Setting storage-provisioner=true in profile "no-preload-248059"
	I0328 01:09:02.509623 1130827 host.go:66] Checking if "no-preload-248059" exists ...
	I0328 01:09:02.509636 1130827 addons.go:234] Setting addon storage-provisioner=true in "no-preload-248059"
	W0328 01:09:02.509690 1130827 addons.go:243] addon storage-provisioner should already be in state true
	I0328 01:09:02.509729 1130827 host.go:66] Checking if "no-preload-248059" exists ...
	I0328 01:09:02.510005 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.510009 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.510049 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.510050 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.510053 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.510085 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.528082 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42381
	I0328 01:09:02.528124 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43929
	I0328 01:09:02.528714 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.528738 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.529081 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34573
	I0328 01:09:02.529378 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.529397 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.529444 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.529464 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.529465 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.529791 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.529849 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.529948 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.529965 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.529950 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.530389 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.530437 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.530472 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.531004 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.531058 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.534108 1130827 addons.go:234] Setting addon default-storageclass=true in "no-preload-248059"
	W0328 01:09:02.534134 1130827 addons.go:243] addon default-storageclass should already be in state true
	I0328 01:09:02.534173 1130827 host.go:66] Checking if "no-preload-248059" exists ...
	I0328 01:09:02.534563 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.534592 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.546812 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0328 01:09:02.547478 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.547999 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.548031 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.548370 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.548616 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.549185 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37367
	I0328 01:09:02.549663 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.550365 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.550390 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.550772 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.550787 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:09:02.550977 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.553075 1130827 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 01:09:02.554750 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 01:09:02.554769 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 01:09:02.552577 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:09:02.554788 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:09:02.553550 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45153
	I0328 01:09:02.556534 1130827 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:09:02.555339 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.558480 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.563734 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:09:02.563773 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.563823 1130827 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:09:02.563846 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 01:09:02.563876 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:09:02.564584 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.564604 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.564633 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:09:02.564933 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:09:02.565025 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.565458 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:09:02.565593 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.565617 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.565745 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:09:02.569766 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.570083 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:09:02.570104 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.570413 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:09:02.570778 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:09:02.570975 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:09:02.571142 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:09:02.589503 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41085
	I0328 01:09:02.590061 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.590641 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.590661 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.591065 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.591310 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.593270 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:09:02.593665 1130827 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 01:09:02.593696 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 01:09:02.593717 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:09:02.596796 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.597270 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:09:02.597298 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.597460 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:09:02.597637 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:09:02.597807 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:09:02.597937 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:09:02.705837 1130827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:09:02.727955 1130827 node_ready.go:35] waiting up to 6m0s for node "no-preload-248059" to be "Ready" ...
	I0328 01:09:02.737291 1130827 node_ready.go:49] node "no-preload-248059" has status "Ready":"True"
	I0328 01:09:02.737325 1130827 node_ready.go:38] duration metric: took 9.337953ms for node "no-preload-248059" to be "Ready" ...
	I0328 01:09:02.737338 1130827 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:09:02.741939 1130827 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.749157 1130827 pod_ready.go:92] pod "etcd-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.749192 1130827 pod_ready.go:81] duration metric: took 7.224004ms for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.749205 1130827 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.755106 1130827 pod_ready.go:92] pod "kube-apiserver-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.755132 1130827 pod_ready.go:81] duration metric: took 5.919446ms for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.755144 1130827 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.761123 1130827 pod_ready.go:92] pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.761171 1130827 pod_ready.go:81] duration metric: took 6.017877ms for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.761187 1130827 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.773958 1130827 pod_ready.go:92] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.773983 1130827 pod_ready.go:81] duration metric: took 12.787671ms for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.773991 1130827 pod_ready.go:38] duration metric: took 36.637128ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:09:02.774008 1130827 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:09:02.774068 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:09:02.794342 1130827 api_server.go:72] duration metric: took 287.989042ms to wait for apiserver process to appear ...
	I0328 01:09:02.794376 1130827 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:09:02.794408 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:09:02.826957 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0328 01:09:02.830377 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 01:09:02.830399 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 01:09:02.837250 1130827 api_server.go:141] control plane version: v1.30.0-beta.0
	I0328 01:09:02.837284 1130827 api_server.go:131] duration metric: took 42.898933ms to wait for apiserver health ...
	I0328 01:09:02.837295 1130827 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:09:02.838515 1130827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:09:02.865482 1130827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 01:09:02.880510 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 01:09:02.880544 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 01:09:02.933895 1130827 system_pods.go:59] 4 kube-system pods found
	I0328 01:09:02.933958 1130827 system_pods.go:61] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:02.933967 1130827 system_pods.go:61] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:02.933973 1130827 system_pods.go:61] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:02.933977 1130827 system_pods.go:61] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:02.933984 1130827 system_pods.go:74] duration metric: took 96.68223ms to wait for pod list to return data ...
	I0328 01:09:02.933994 1130827 default_sa.go:34] waiting for default service account to be created ...
	I0328 01:09:02.939507 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:09:02.939538 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 01:09:02.994042 1130827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:09:03.160934 1130827 default_sa.go:45] found service account: "default"
	I0328 01:09:03.160971 1130827 default_sa.go:55] duration metric: took 226.968222ms for default service account to be created ...
	I0328 01:09:03.160982 1130827 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 01:09:03.396511 1130827 system_pods.go:86] 7 kube-system pods found
	I0328 01:09:03.396549 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending
	I0328 01:09:03.396554 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending
	I0328 01:09:03.396558 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:03.396562 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:03.396567 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:03.396575 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:09:03.396580 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:03.396601 1130827 retry.go:31] will retry after 288.008379ms: missing components: kube-dns, kube-proxy
	I0328 01:09:03.697645 1130827 system_pods.go:86] 7 kube-system pods found
	I0328 01:09:03.697688 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:03.697697 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:03.697704 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:03.697710 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:03.697720 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:03.697726 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:09:03.697730 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:03.697750 1130827 retry.go:31] will retry after 356.016468ms: missing components: kube-dns, kube-proxy
	I0328 01:09:03.962535 1130827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.097008499s)
	I0328 01:09:03.962614 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.962633 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.963093 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.963119 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.963129 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.963139 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.963406 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.963424 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.964335 1130827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.125788348s)
	I0328 01:09:03.964375 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.964384 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.964712 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:03.964740 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.964763 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.964776 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.964785 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.965054 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.965125 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.965142 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:04.002303 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:04.002340 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:04.002744 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:04.002766 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:04.062017 1130827 system_pods.go:86] 8 kube-system pods found
	I0328 01:09:04.062096 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.062111 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.062121 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:04.062132 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:04.062158 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:04.062172 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:09:04.062180 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:04.062192 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:09:04.062220 1130827 retry.go:31] will retry after 477.684804ms: missing components: kube-dns, kube-proxy
	I0328 01:09:04.574661 1130827 system_pods.go:86] 9 kube-system pods found
	I0328 01:09:04.574716 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.574728 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.574740 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:04.574748 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:04.574754 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:04.574761 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Running
	I0328 01:09:04.574768 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:04.574778 1130827 system_pods.go:89] "metrics-server-569cc877fc-frc5k" [d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:09:04.574799 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:09:04.574821 1130827 retry.go:31] will retry after 460.13955ms: missing components: kube-dns
	I0328 01:09:04.692708 1130827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.69861394s)
	I0328 01:09:04.692782 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:04.692798 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:04.693323 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:04.693366 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:04.693376 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:04.693384 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:04.693320 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:04.693818 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:04.693865 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:04.693879 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:04.693895 1130827 addons.go:470] Verifying addon metrics-server=true in "no-preload-248059"
	I0328 01:09:04.696310 1130827 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0328 01:09:04.025791 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:09:04.026055 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:09:04.026065 1131323 kubeadm.go:309] 
	I0328 01:09:04.026124 1131323 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0328 01:09:04.026172 1131323 kubeadm.go:309] 		timed out waiting for the condition
	I0328 01:09:04.026181 1131323 kubeadm.go:309] 
	I0328 01:09:04.026221 1131323 kubeadm.go:309] 	This error is likely caused by:
	I0328 01:09:04.026279 1131323 kubeadm.go:309] 		- The kubelet is not running
	I0328 01:09:04.026401 1131323 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0328 01:09:04.026411 1131323 kubeadm.go:309] 
	I0328 01:09:04.026529 1131323 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0328 01:09:04.026586 1131323 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0328 01:09:04.026632 1131323 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0328 01:09:04.026640 1131323 kubeadm.go:309] 
	I0328 01:09:04.026758 1131323 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0328 01:09:04.026884 1131323 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0328 01:09:04.026902 1131323 kubeadm.go:309] 
	I0328 01:09:04.027061 1131323 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0328 01:09:04.027222 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0328 01:09:04.027335 1131323 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0328 01:09:04.027429 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0328 01:09:04.027537 1131323 kubeadm.go:309] 
	I0328 01:09:04.029027 1131323 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:09:04.029164 1131323 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0328 01:09:04.029284 1131323 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0328 01:09:04.029477 1131323 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0328 01:09:04.029545 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:09:04.543275 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:09:04.562572 1131323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:09:04.577013 1131323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:09:04.577040 1131323 kubeadm.go:156] found existing configuration files:
	
	I0328 01:09:04.577102 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:09:04.590795 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:09:04.590885 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:09:04.604227 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:09:04.616720 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:09:04.616818 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:09:04.630095 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:09:04.643166 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:09:04.643259 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:09:04.658084 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:09:04.671786 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:09:04.671874 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:09:04.685852 1131323 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:09:04.779013 1131323 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0328 01:09:04.779113 1131323 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:09:04.964178 1131323 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:09:04.964317 1131323 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:09:04.964463 1131323 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:09:05.181712 1131323 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:09:05.183644 1131323 out.go:204]   - Generating certificates and keys ...
	I0328 01:09:05.183759 1131323 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:09:05.183851 1131323 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:09:05.183962 1131323 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:09:05.184042 1131323 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:09:05.184156 1131323 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:09:05.184244 1131323 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:09:05.184337 1131323 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:09:05.184424 1131323 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:09:05.184535 1131323 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:09:05.184633 1131323 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:09:05.184683 1131323 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:09:05.184758 1131323 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:09:04.698039 1130827 addons.go:505] duration metric: took 2.191652421s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0328 01:09:05.044303 1130827 system_pods.go:86] 9 kube-system pods found
	I0328 01:09:05.044340 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:05.044348 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:05.044354 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:05.044360 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:05.044366 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:05.044369 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Running
	I0328 01:09:05.044373 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:05.044378 1130827 system_pods.go:89] "metrics-server-569cc877fc-frc5k" [d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:09:05.044387 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:09:05.044406 1130827 retry.go:31] will retry after 486.01075ms: missing components: kube-dns
	I0328 01:09:05.539158 1130827 system_pods.go:86] 9 kube-system pods found
	I0328 01:09:05.539204 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Running
	I0328 01:09:05.539213 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Running
	I0328 01:09:05.539219 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:05.539226 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:05.539232 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:05.539238 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Running
	I0328 01:09:05.539244 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:05.539255 1130827 system_pods.go:89] "metrics-server-569cc877fc-frc5k" [d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:09:05.539260 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Running
	I0328 01:09:05.539274 1130827 system_pods.go:126] duration metric: took 2.37828469s to wait for k8s-apps to be running ...
	I0328 01:09:05.539292 1130827 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:09:05.539362 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:09:05.560593 1130827 system_svc.go:56] duration metric: took 21.288819ms WaitForService to wait for kubelet
	I0328 01:09:05.560628 1130827 kubeadm.go:576] duration metric: took 3.054281955s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:09:05.560657 1130827 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:09:05.564453 1130827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:09:05.564489 1130827 node_conditions.go:123] node cpu capacity is 2
	I0328 01:09:05.564502 1130827 node_conditions.go:105] duration metric: took 3.837449ms to run NodePressure ...
	I0328 01:09:05.564517 1130827 start.go:240] waiting for startup goroutines ...
	I0328 01:09:05.564527 1130827 start.go:245] waiting for cluster config update ...
	I0328 01:09:05.564542 1130827 start.go:254] writing updated cluster config ...
	I0328 01:09:05.564843 1130827 ssh_runner.go:195] Run: rm -f paused
	I0328 01:09:05.623218 1130827 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-beta.0 (minor skew: 1)
	I0328 01:09:05.625408 1130827 out.go:177] * Done! kubectl is now configured to use "no-preload-248059" cluster and "default" namespace by default
	I0328 01:09:05.587190 1131323 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:09:05.923219 1131323 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:09:06.087945 1131323 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:09:06.245638 1131323 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:09:06.266195 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:09:06.267461 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:09:06.267551 1131323 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:09:06.434155 1131323 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:09:06.436300 1131323 out.go:204]   - Booting up control plane ...
	I0328 01:09:06.436447 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:09:06.446573 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:09:06.447461 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:09:06.448313 1131323 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:09:06.450917 1131323 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:09:46.453199 1131323 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0328 01:09:46.453386 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:09:46.453643 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:09:51.454402 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:09:51.454665 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:10:01.455189 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:10:01.455417 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:10:21.456491 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:10:21.456726 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:11:01.456972 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:11:01.457256 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:11:01.457269 1131323 kubeadm.go:309] 
	I0328 01:11:01.457310 1131323 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0328 01:11:01.457404 1131323 kubeadm.go:309] 		timed out waiting for the condition
	I0328 01:11:01.457441 1131323 kubeadm.go:309] 
	I0328 01:11:01.457492 1131323 kubeadm.go:309] 	This error is likely caused by:
	I0328 01:11:01.457550 1131323 kubeadm.go:309] 		- The kubelet is not running
	I0328 01:11:01.457696 1131323 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0328 01:11:01.457708 1131323 kubeadm.go:309] 
	I0328 01:11:01.457856 1131323 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0328 01:11:01.457906 1131323 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0328 01:11:01.457935 1131323 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0328 01:11:01.457943 1131323 kubeadm.go:309] 
	I0328 01:11:01.458033 1131323 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0328 01:11:01.458139 1131323 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0328 01:11:01.458155 1131323 kubeadm.go:309] 
	I0328 01:11:01.458331 1131323 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0328 01:11:01.458483 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0328 01:11:01.458594 1131323 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0328 01:11:01.458707 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0328 01:11:01.458718 1131323 kubeadm.go:309] 
	I0328 01:11:01.459597 1131323 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:11:01.459737 1131323 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0328 01:11:01.459822 1131323 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0328 01:11:01.459962 1131323 kubeadm.go:393] duration metric: took 7m59.227261729s to StartCluster
	I0328 01:11:01.460023 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:11:01.460167 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:11:01.522644 1131323 cri.go:89] found id: ""
	I0328 01:11:01.522687 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.522700 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:11:01.522710 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:11:01.522782 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:11:01.567898 1131323 cri.go:89] found id: ""
	I0328 01:11:01.567928 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.567937 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:11:01.567945 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:11:01.568005 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:11:01.604782 1131323 cri.go:89] found id: ""
	I0328 01:11:01.604810 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.604819 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:11:01.604825 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:11:01.604935 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:11:01.642875 1131323 cri.go:89] found id: ""
	I0328 01:11:01.642908 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.642920 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:11:01.642929 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:11:01.642993 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:11:01.682186 1131323 cri.go:89] found id: ""
	I0328 01:11:01.682216 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.682223 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:11:01.682241 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:11:01.682312 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:11:01.720654 1131323 cri.go:89] found id: ""
	I0328 01:11:01.720689 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.720697 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:11:01.720704 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:11:01.720759 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:11:01.757340 1131323 cri.go:89] found id: ""
	I0328 01:11:01.757372 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.757383 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:11:01.757392 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:11:01.757462 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:11:01.797426 1131323 cri.go:89] found id: ""
	I0328 01:11:01.797462 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.797473 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:11:01.797488 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:11:01.797506 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:11:01.859582 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:11:01.859623 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:11:01.876027 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:11:01.876073 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:11:01.966513 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:11:01.966539 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:11:01.966557 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:11:02.084853 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:11:02.084894 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0328 01:11:02.127221 1131323 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0328 01:11:02.127288 1131323 out.go:239] * 
	W0328 01:11:02.127417 1131323 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0328 01:11:02.127456 1131323 out.go:239] * 
	W0328 01:11:02.128313 1131323 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 01:11:02.131916 1131323 out.go:177] 
	W0328 01:11:02.133288 1131323 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0328 01:11:02.133351 1131323 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0328 01:11:02.133381 1131323 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0328 01:11:02.134991 1131323 out.go:177] 
	
	
	==> CRI-O <==
	Mar 28 01:20:07 old-k8s-version-986088 crio[655]: time="2024-03-28 01:20:07.541651185Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711588807541629097,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b4927c5f-363d-49a0-8a6d-6940767f5bea name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:20:07 old-k8s-version-986088 crio[655]: time="2024-03-28 01:20:07.542517208Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=52ca3f61-75b5-44c3-a780-3038fdb608d8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:20:07 old-k8s-version-986088 crio[655]: time="2024-03-28 01:20:07.542602505Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=52ca3f61-75b5-44c3-a780-3038fdb608d8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:20:07 old-k8s-version-986088 crio[655]: time="2024-03-28 01:20:07.542643702Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=52ca3f61-75b5-44c3-a780-3038fdb608d8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:20:07 old-k8s-version-986088 crio[655]: time="2024-03-28 01:20:07.576545058Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9ea639e7-8451-47f3-b5b8-ec2cdf96244f name=/runtime.v1.RuntimeService/Version
	Mar 28 01:20:07 old-k8s-version-986088 crio[655]: time="2024-03-28 01:20:07.576646773Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9ea639e7-8451-47f3-b5b8-ec2cdf96244f name=/runtime.v1.RuntimeService/Version
	Mar 28 01:20:07 old-k8s-version-986088 crio[655]: time="2024-03-28 01:20:07.577579445Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7cc3d83a-01d0-473d-a6d5-56cf88aaa231 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:20:07 old-k8s-version-986088 crio[655]: time="2024-03-28 01:20:07.578009130Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711588807577986206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7cc3d83a-01d0-473d-a6d5-56cf88aaa231 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:20:07 old-k8s-version-986088 crio[655]: time="2024-03-28 01:20:07.578619149Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d0c6d79-79f7-479c-9398-877687aae715 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:20:07 old-k8s-version-986088 crio[655]: time="2024-03-28 01:20:07.578705362Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d0c6d79-79f7-479c-9398-877687aae715 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:20:07 old-k8s-version-986088 crio[655]: time="2024-03-28 01:20:07.578740316Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5d0c6d79-79f7-479c-9398-877687aae715 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:20:07 old-k8s-version-986088 crio[655]: time="2024-03-28 01:20:07.614312944Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=562b1fbb-d03f-4f5a-99e4-5a121047693e name=/runtime.v1.RuntimeService/Version
	Mar 28 01:20:07 old-k8s-version-986088 crio[655]: time="2024-03-28 01:20:07.614474970Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=562b1fbb-d03f-4f5a-99e4-5a121047693e name=/runtime.v1.RuntimeService/Version
	Mar 28 01:20:07 old-k8s-version-986088 crio[655]: time="2024-03-28 01:20:07.615845742Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3af5e646-a72f-471f-baf0-7c494eaeae95 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:20:07 old-k8s-version-986088 crio[655]: time="2024-03-28 01:20:07.616248022Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711588807616225349,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3af5e646-a72f-471f-baf0-7c494eaeae95 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:20:07 old-k8s-version-986088 crio[655]: time="2024-03-28 01:20:07.616917614Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62b8f1db-efaf-4398-bbe4-3184b052ac9f name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:20:07 old-k8s-version-986088 crio[655]: time="2024-03-28 01:20:07.616993019Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62b8f1db-efaf-4398-bbe4-3184b052ac9f name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:20:07 old-k8s-version-986088 crio[655]: time="2024-03-28 01:20:07.617028345Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=62b8f1db-efaf-4398-bbe4-3184b052ac9f name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:20:07 old-k8s-version-986088 crio[655]: time="2024-03-28 01:20:07.652210110Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5d11ae32-6a16-471b-9499-a93e50fd7641 name=/runtime.v1.RuntimeService/Version
	Mar 28 01:20:07 old-k8s-version-986088 crio[655]: time="2024-03-28 01:20:07.652329629Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5d11ae32-6a16-471b-9499-a93e50fd7641 name=/runtime.v1.RuntimeService/Version
	Mar 28 01:20:07 old-k8s-version-986088 crio[655]: time="2024-03-28 01:20:07.653534594Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7d065bb1-4463-4bbb-8942-77f2c9592c30 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:20:07 old-k8s-version-986088 crio[655]: time="2024-03-28 01:20:07.653934999Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711588807653908141,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7d065bb1-4463-4bbb-8942-77f2c9592c30 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:20:07 old-k8s-version-986088 crio[655]: time="2024-03-28 01:20:07.654602293Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=442d9cc6-eb8e-42e2-bec0-466975b8510f name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:20:07 old-k8s-version-986088 crio[655]: time="2024-03-28 01:20:07.654678643Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=442d9cc6-eb8e-42e2-bec0-466975b8510f name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:20:07 old-k8s-version-986088 crio[655]: time="2024-03-28 01:20:07.654726950Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=442d9cc6-eb8e-42e2-bec0-466975b8510f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar28 01:02] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054089] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043023] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.677467] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.716356] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.626498] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.938962] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +0.065252] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.078257] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.191570] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.159223] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.285028] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[Mar28 01:03] systemd-fstab-generator[845]: Ignoring "noauto" option for root device
	[  +0.069643] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.129611] systemd-fstab-generator[969]: Ignoring "noauto" option for root device
	[ +11.468422] kauditd_printk_skb: 46 callbacks suppressed
	[Mar28 01:07] systemd-fstab-generator[4979]: Ignoring "noauto" option for root device
	[Mar28 01:09] systemd-fstab-generator[5264]: Ignoring "noauto" option for root device
	[  +0.093089] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:20:07 up 17 min,  0 users,  load average: 0.05, 0.07, 0.06
	Linux old-k8s-version-986088 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 28 01:20:03 old-k8s-version-986088 kubelet[6440]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Mar 28 01:20:03 old-k8s-version-986088 kubelet[6440]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000032f60, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000ad5ad0, 0x24, 0x0, ...)
	Mar 28 01:20:03 old-k8s-version-986088 kubelet[6440]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Mar 28 01:20:03 old-k8s-version-986088 kubelet[6440]: net.(*Dialer).DialContext(0xc000bdf860, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000ad5ad0, 0x24, 0x0, 0x0, 0x0, ...)
	Mar 28 01:20:03 old-k8s-version-986088 kubelet[6440]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Mar 28 01:20:03 old-k8s-version-986088 kubelet[6440]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000bec600, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000ad5ad0, 0x24, 0x1000000000060, 0x7efc00e11820, 0x118, ...)
	Mar 28 01:20:03 old-k8s-version-986088 kubelet[6440]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Mar 28 01:20:03 old-k8s-version-986088 kubelet[6440]: net/http.(*Transport).dial(0xc000bfa000, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000ad5ad0, 0x24, 0x0, 0x0, 0x0, ...)
	Mar 28 01:20:03 old-k8s-version-986088 kubelet[6440]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Mar 28 01:20:03 old-k8s-version-986088 kubelet[6440]: net/http.(*Transport).dialConn(0xc000bfa000, 0x4f7fe00, 0xc000052030, 0x0, 0xc000952000, 0x5, 0xc000ad5ad0, 0x24, 0x0, 0xc0007ce000, ...)
	Mar 28 01:20:03 old-k8s-version-986088 kubelet[6440]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Mar 28 01:20:03 old-k8s-version-986088 kubelet[6440]: net/http.(*Transport).dialConnFor(0xc000bfa000, 0xc000c5e370)
	Mar 28 01:20:03 old-k8s-version-986088 kubelet[6440]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Mar 28 01:20:03 old-k8s-version-986088 kubelet[6440]: created by net/http.(*Transport).queueForDial
	Mar 28 01:20:03 old-k8s-version-986088 kubelet[6440]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Mar 28 01:20:03 old-k8s-version-986088 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 28 01:20:03 old-k8s-version-986088 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 28 01:20:03 old-k8s-version-986088 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Mar 28 01:20:03 old-k8s-version-986088 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 28 01:20:03 old-k8s-version-986088 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 28 01:20:03 old-k8s-version-986088 kubelet[6449]: I0328 01:20:03.724985    6449 server.go:416] Version: v1.20.0
	Mar 28 01:20:03 old-k8s-version-986088 kubelet[6449]: I0328 01:20:03.725222    6449 server.go:837] Client rotation is on, will bootstrap in background
	Mar 28 01:20:03 old-k8s-version-986088 kubelet[6449]: I0328 01:20:03.727173    6449 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 28 01:20:03 old-k8s-version-986088 kubelet[6449]: I0328 01:20:03.728311    6449 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Mar 28 01:20:03 old-k8s-version-986088 kubelet[6449]: W0328 01:20:03.728583    6449 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-986088 -n old-k8s-version-986088
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-986088 -n old-k8s-version-986088: exit status 2 (256.599622ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-986088" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.49s)
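The kubeadm output captured above points at the kubelet never becoming healthy on 127.0.0.1:10248, and the captured minikube suggestion is to check the kubelet unit and retry with an explicit cgroup driver. A minimal triage sketch along those lines, not part of the test run: the profile name, binary path, and every command come from the log above, and whether the cgroup-driver override actually resolves this run is an assumption.
	# Inspect kubelet state on the node (echoes the kubeadm hints captured above)
	out/minikube-linux-amd64 -p old-k8s-version-986088 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-986088 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	# List any control-plane containers CRI-O managed to start
	out/minikube-linux-amd64 -p old-k8s-version-986088 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"
	# Retry the start with the kubelet cgroup driver pinned to systemd, per the logged suggestion
	out/minikube-linux-amd64 start -p old-k8s-version-986088 --extra-config=kubelet.cgroup-driver=systemd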

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (421.57s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-808809 -n embed-certs-808809
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-28 01:23:59.746065614 +0000 UTC m=+6670.997544551
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-808809 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-808809 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.718µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-808809 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
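The image assertion above could not be verified because the describe call hit the context deadline. A hedged manual check of which image the scraper deployment actually carries, assuming the embed-certs-808809 context is reachable outside the test harness; the deployment and namespace names are taken from the commands above.
	# Print only the container image(s) of the dashboard-metrics-scraper deployment
	kubectl --context embed-certs-808809 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'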
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-808809 -n embed-certs-808809
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-808809 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-808809 logs -n 25: (2.053267209s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| stop    | -p embed-certs-808809                                  | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p newest-cni-013642             | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p newest-cni-013642                  | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p newest-cni-013642 --memory=2200 --alsologtostderr   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:56 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |                |                     |                     |
	| image   | newest-cni-013642 image list                           | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	|         | --format=json                                          |                              |         |                |                     |                     |
	| pause   | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| unpause | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| delete  | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	| delete  | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	| start   | -p                                                     | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:57 UTC |
	|         | default-k8s-diff-port-283961                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-986088        | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-248059                  | no-preload-248059            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-283961  | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC | 28 Mar 24 00:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | default-k8s-diff-port-283961                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| start   | -p no-preload-248059 --memory=2200                     | no-preload-248059            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC | 28 Mar 24 01:09 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-808809                 | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-808809                                  | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC | 28 Mar 24 01:07 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-986088                              | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC | 28 Mar 24 00:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-986088             | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC | 28 Mar 24 00:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-986088                              | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-283961       | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 01:00 UTC | 28 Mar 24 01:08 UTC |
	|         | default-k8s-diff-port-283961                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| delete  | -p old-k8s-version-986088                              | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 01:22 UTC | 28 Mar 24 01:22 UTC |
	| delete  | -p no-preload-248059                                   | no-preload-248059            | jenkins | v1.33.0-beta.0 | 28 Mar 24 01:22 UTC | 28 Mar 24 01:22 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 01:00:05
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 01:00:05.675380 1131600 out.go:291] Setting OutFile to fd 1 ...
	I0328 01:00:05.675675 1131600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 01:00:05.675710 1131600 out.go:304] Setting ErrFile to fd 2...
	I0328 01:00:05.675718 1131600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 01:00:05.676017 1131600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 01:00:05.676919 1131600 out.go:298] Setting JSON to false
	I0328 01:00:05.678046 1131600 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":31303,"bootTime":1711556303,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0328 01:00:05.678129 1131600 start.go:139] virtualization: kvm guest
	I0328 01:00:05.681128 1131600 out.go:177] * [default-k8s-diff-port-283961] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0328 01:00:05.683139 1131600 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 01:00:05.683129 1131600 notify.go:220] Checking for updates...
	I0328 01:00:05.685082 1131600 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 01:00:05.686765 1131600 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:00:05.688389 1131600 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	I0328 01:00:05.690187 1131600 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0328 01:00:05.691887 1131600 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 01:00:05.693775 1131600 config.go:182] Loaded profile config "default-k8s-diff-port-283961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:00:05.694270 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:00:05.694323 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:00:05.709757 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35333
	I0328 01:00:05.710275 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:00:05.710875 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:00:05.710900 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:00:05.711323 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:00:05.711531 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:00:05.711893 1131600 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 01:00:05.712342 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:00:05.712392 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:00:05.727583 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44249
	I0328 01:00:05.728107 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:00:05.728595 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:00:05.728625 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:00:05.728945 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:00:05.729170 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:00:05.763895 1131600 out.go:177] * Using the kvm2 driver based on existing profile
	I0328 01:00:05.765397 1131600 start.go:297] selected driver: kvm2
	I0328 01:00:05.765431 1131600 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:00:05.765564 1131600 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 01:00:05.766282 1131600 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 01:00:05.766391 1131600 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18485-1069254/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0328 01:00:05.783130 1131600 install.go:137] /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0328 01:00:05.783602 1131600 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:00:05.783724 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:00:05.783745 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:00:05.783795 1131600 start.go:340] cluster config:
	{Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:00:05.783949 1131600 iso.go:125] acquiring lock: {Name:mk3da1fa7d63a581c817f327d86a827697458cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 01:00:05.785871 1131600 out.go:177] * Starting "default-k8s-diff-port-283961" primary control-plane node in "default-k8s-diff-port-283961" cluster
	I0328 01:00:02.570474 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:05.787210 1131600 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 01:00:05.787259 1131600 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0328 01:00:05.787272 1131600 cache.go:56] Caching tarball of preloaded images
	I0328 01:00:05.787364 1131600 preload.go:173] Found /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0328 01:00:05.787376 1131600 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0328 01:00:05.787509 1131600 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/config.json ...
	I0328 01:00:05.787742 1131600 start.go:360] acquireMachinesLock for default-k8s-diff-port-283961: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 01:00:08.650481 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:11.722571 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:17.802536 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:20.874568 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:26.954473 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:30.026674 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:36.106489 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:39.178555 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:45.258539 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:48.330581 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:54.410577 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:57.482545 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:03.562558 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:06.634602 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:12.714559 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:15.786597 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:21.866544 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:24.938619 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:31.018631 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:34.090562 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:40.170864 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:43.242565 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:49.322492 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:52.394572 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:58.474562 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:01.546621 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:07.626510 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:10.698534 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:13.703348 1130949 start.go:364] duration metric: took 4m25.677777198s to acquireMachinesLock for "embed-certs-808809"
	I0328 01:02:13.703416 1130949 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:02:13.703429 1130949 fix.go:54] fixHost starting: 
	I0328 01:02:13.703888 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:02:13.703923 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:02:13.719480 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44835
	I0328 01:02:13.719968 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:02:13.720450 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:02:13.720475 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:02:13.720774 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:02:13.721011 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:13.721182 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:02:13.722796 1130949 fix.go:112] recreateIfNeeded on embed-certs-808809: state=Stopped err=<nil>
	I0328 01:02:13.722828 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	W0328 01:02:13.722972 1130949 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:02:13.724895 1130949 out.go:177] * Restarting existing kvm2 VM for "embed-certs-808809" ...
	I0328 01:02:13.700647 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:02:13.700689 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:02:13.701054 1130827 buildroot.go:166] provisioning hostname "no-preload-248059"
	I0328 01:02:13.701085 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:02:13.701344 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:02:13.703200 1130827 machine.go:97] duration metric: took 4m37.399616994s to provisionDockerMachine
	I0328 01:02:13.703243 1130827 fix.go:56] duration metric: took 4m37.42352766s for fixHost
	I0328 01:02:13.703249 1130827 start.go:83] releasing machines lock for "no-preload-248059", held for 4m37.423563163s
	W0328 01:02:13.703274 1130827 start.go:713] error starting host: provision: host is not running
	W0328 01:02:13.703400 1130827 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0328 01:02:13.703411 1130827 start.go:728] Will try again in 5 seconds ...
	I0328 01:02:13.726437 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Start
	I0328 01:02:13.726574 1130949 main.go:141] libmachine: (embed-certs-808809) Ensuring networks are active...
	I0328 01:02:13.727407 1130949 main.go:141] libmachine: (embed-certs-808809) Ensuring network default is active
	I0328 01:02:13.727667 1130949 main.go:141] libmachine: (embed-certs-808809) Ensuring network mk-embed-certs-808809 is active
	I0328 01:02:13.728050 1130949 main.go:141] libmachine: (embed-certs-808809) Getting domain xml...
	I0328 01:02:13.728836 1130949 main.go:141] libmachine: (embed-certs-808809) Creating domain...
	I0328 01:02:14.931757 1130949 main.go:141] libmachine: (embed-certs-808809) Waiting to get IP...
	I0328 01:02:14.932921 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:14.933298 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:14.933396 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:14.933294 1131950 retry.go:31] will retry after 279.257708ms: waiting for machine to come up
	I0328 01:02:15.213830 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:15.214439 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:15.214472 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:15.214415 1131950 retry.go:31] will retry after 387.406107ms: waiting for machine to come up
	I0328 01:02:15.603078 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:15.603464 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:15.603497 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:15.603431 1131950 retry.go:31] will retry after 466.553599ms: waiting for machine to come up
	I0328 01:02:16.072165 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:16.072702 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:16.072732 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:16.072643 1131950 retry.go:31] will retry after 375.428381ms: waiting for machine to come up
	I0328 01:02:16.449155 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:16.449614 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:16.449652 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:16.449553 1131950 retry.go:31] will retry after 466.238903ms: waiting for machine to come up
	I0328 01:02:16.917246 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:16.917697 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:16.917723 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:16.917633 1131950 retry.go:31] will retry after 772.819544ms: waiting for machine to come up
	I0328 01:02:17.691645 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:17.692121 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:17.692151 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:17.692071 1131950 retry.go:31] will retry after 1.19065976s: waiting for machine to come up
	I0328 01:02:18.704949 1130827 start.go:360] acquireMachinesLock for no-preload-248059: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 01:02:18.884525 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:18.885019 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:18.885044 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:18.884980 1131950 retry.go:31] will retry after 1.434726863s: waiting for machine to come up
	I0328 01:02:20.321473 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:20.322009 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:20.322035 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:20.321951 1131950 retry.go:31] will retry after 1.275277555s: waiting for machine to come up
	I0328 01:02:21.599454 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:21.600049 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:21.600074 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:21.599982 1131950 retry.go:31] will retry after 1.852516502s: waiting for machine to come up
	I0328 01:02:23.455282 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:23.455760 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:23.455830 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:23.455746 1131950 retry.go:31] will retry after 2.056736141s: waiting for machine to come up
	I0328 01:02:25.514112 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:25.514538 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:25.514569 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:25.514492 1131950 retry.go:31] will retry after 2.711520437s: waiting for machine to come up
	I0328 01:02:32.751719 1131323 start.go:364] duration metric: took 3m27.302408957s to acquireMachinesLock for "old-k8s-version-986088"
	I0328 01:02:32.751823 1131323 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:02:32.751833 1131323 fix.go:54] fixHost starting: 
	I0328 01:02:32.752289 1131323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:02:32.752326 1131323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:02:32.770119 1131323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45807
	I0328 01:02:32.770723 1131323 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:02:32.771352 1131323 main.go:141] libmachine: Using API Version  1
	I0328 01:02:32.771380 1131323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:02:32.771790 1131323 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:02:32.772020 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:32.772206 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetState
	I0328 01:02:32.773947 1131323 fix.go:112] recreateIfNeeded on old-k8s-version-986088: state=Stopped err=<nil>
	I0328 01:02:32.773980 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	W0328 01:02:32.774166 1131323 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:02:32.776416 1131323 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-986088" ...
	I0328 01:02:28.229576 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:28.229970 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:28.230000 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:28.229920 1131950 retry.go:31] will retry after 3.231405371s: waiting for machine to come up
	I0328 01:02:31.463477 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.463884 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has current primary IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.463902 1130949 main.go:141] libmachine: (embed-certs-808809) Found IP for machine: 192.168.72.210
	I0328 01:02:31.463915 1130949 main.go:141] libmachine: (embed-certs-808809) Reserving static IP address...
	I0328 01:02:31.464394 1130949 main.go:141] libmachine: (embed-certs-808809) Reserved static IP address: 192.168.72.210
	I0328 01:02:31.464413 1130949 main.go:141] libmachine: (embed-certs-808809) Waiting for SSH to be available...
	I0328 01:02:31.464439 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "embed-certs-808809", mac: "52:54:00:60:d4:d2", ip: "192.168.72.210"} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.464464 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | skip adding static IP to network mk-embed-certs-808809 - found existing host DHCP lease matching {name: "embed-certs-808809", mac: "52:54:00:60:d4:d2", ip: "192.168.72.210"}
	I0328 01:02:31.464480 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Getting to WaitForSSH function...
	I0328 01:02:31.466488 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.466876 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.466916 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.467054 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Using SSH client type: external
	I0328 01:02:31.467085 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa (-rw-------)
	I0328 01:02:31.467124 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:02:31.467138 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | About to run SSH command:
	I0328 01:02:31.467155 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | exit 0
	I0328 01:02:31.590708 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | SSH cmd err, output: <nil>: 
	I0328 01:02:31.591111 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetConfigRaw
	I0328 01:02:31.591959 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:31.594592 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.595075 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.595114 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.595364 1130949 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/config.json ...
	I0328 01:02:31.595634 1130949 machine.go:94] provisionDockerMachine start ...
	I0328 01:02:31.595656 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:31.595901 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.598184 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.598529 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.598556 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.598681 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:31.598851 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.599012 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.599163 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:31.599333 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:31.599604 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:31.599619 1130949 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:02:31.703241 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:02:31.703272 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetMachineName
	I0328 01:02:31.703575 1130949 buildroot.go:166] provisioning hostname "embed-certs-808809"
	I0328 01:02:31.703602 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetMachineName
	I0328 01:02:31.703779 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.706495 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.706777 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.706799 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.706978 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:31.707146 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.707334 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.707580 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:31.707765 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:31.707985 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:31.708004 1130949 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-808809 && echo "embed-certs-808809" | sudo tee /etc/hostname
	I0328 01:02:31.821578 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-808809
	
	I0328 01:02:31.821608 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.824412 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.824791 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.824825 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.825030 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:31.825253 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.825432 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.825589 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:31.825758 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:31.825950 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:31.825976 1130949 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-808809' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-808809/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-808809' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:02:31.937655 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:02:31.937701 1130949 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:02:31.937728 1130949 buildroot.go:174] setting up certificates
	I0328 01:02:31.937742 1130949 provision.go:84] configureAuth start
	I0328 01:02:31.937754 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetMachineName
	I0328 01:02:31.938093 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:31.940874 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.941328 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.941360 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.941580 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.944250 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.944580 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.944610 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.944828 1130949 provision.go:143] copyHostCerts
	I0328 01:02:31.944910 1130949 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:02:31.944926 1130949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:02:31.945006 1130949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:02:31.945151 1130949 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:02:31.945162 1130949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:02:31.945205 1130949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:02:31.945285 1130949 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:02:31.945294 1130949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:02:31.945330 1130949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:02:31.945400 1130949 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.embed-certs-808809 san=[127.0.0.1 192.168.72.210 embed-certs-808809 localhost minikube]
	I0328 01:02:32.070925 1130949 provision.go:177] copyRemoteCerts
	I0328 01:02:32.071007 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:02:32.071067 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.073876 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.074295 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.074339 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.074541 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.074754 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.074931 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.075091 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.158945 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:02:32.184903 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0328 01:02:32.210411 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:02:32.235788 1130949 provision.go:87] duration metric: took 298.03126ms to configureAuth
	I0328 01:02:32.235827 1130949 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:02:32.236116 1130949 config.go:182] Loaded profile config "embed-certs-808809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:02:32.236336 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.239186 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.239520 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.239555 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.239782 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.240036 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.240257 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.240431 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.240633 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:32.240836 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:32.240862 1130949 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:02:32.513263 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:02:32.513298 1130949 machine.go:97] duration metric: took 917.647337ms to provisionDockerMachine
	I0328 01:02:32.513314 1130949 start.go:293] postStartSetup for "embed-certs-808809" (driver="kvm2")
	I0328 01:02:32.513326 1130949 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:02:32.513365 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.513727 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:02:32.513770 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.516906 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.517382 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.517425 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.517603 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.517831 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.517989 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.518115 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.600013 1130949 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:02:32.604953 1130949 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:02:32.604983 1130949 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:02:32.605057 1130949 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:02:32.605148 1130949 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:02:32.605265 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:02:32.617685 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:02:32.646415 1130949 start.go:296] duration metric: took 133.084551ms for postStartSetup
	I0328 01:02:32.646462 1130949 fix.go:56] duration metric: took 18.943034019s for fixHost
	I0328 01:02:32.646490 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.649346 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.649686 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.649717 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.649864 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.650191 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.650444 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.650637 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.650844 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:32.651036 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:32.651069 1130949 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 01:02:32.751522 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587752.718800758
	
	I0328 01:02:32.751547 1130949 fix.go:216] guest clock: 1711587752.718800758
	I0328 01:02:32.751556 1130949 fix.go:229] Guest: 2024-03-28 01:02:32.718800758 +0000 UTC Remote: 2024-03-28 01:02:32.646466137 +0000 UTC m=+284.780134501 (delta=72.334621ms)
	I0328 01:02:32.751598 1130949 fix.go:200] guest clock delta is within tolerance: 72.334621ms
	I0328 01:02:32.751610 1130949 start.go:83] releasing machines lock for "embed-certs-808809", held for 19.048217918s
	I0328 01:02:32.751638 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.751953 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:32.754795 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.755205 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.755240 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.755454 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.756111 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.756320 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.756412 1130949 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:02:32.756475 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.756612 1130949 ssh_runner.go:195] Run: cat /version.json
	I0328 01:02:32.756646 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.759337 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.759468 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.759788 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.759808 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.759845 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.759866 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.760009 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.760018 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.760214 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.760222 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.760364 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.760532 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.760639 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.760698 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.840137 1130949 ssh_runner.go:195] Run: systemctl --version
	I0328 01:02:32.874039 1130949 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:02:33.020534 1130949 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:02:33.027141 1130949 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:02:33.027213 1130949 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:02:33.043738 1130949 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:02:33.043767 1130949 start.go:494] detecting cgroup driver to use...
	I0328 01:02:33.043840 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:02:33.064332 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:02:33.081926 1130949 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:02:33.082016 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:02:33.097179 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:02:33.113157 1130949 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:02:33.233183 1130949 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:02:33.374061 1130949 docker.go:233] disabling docker service ...
	I0328 01:02:33.374145 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:02:33.389813 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:02:33.403439 1130949 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:02:33.546146 1130949 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:02:33.706968 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:02:33.722279 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:02:33.742578 1130949 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 01:02:33.742652 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.754966 1130949 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:02:33.755027 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.767170 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.779960 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.792448 1130949 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:02:33.804912 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.818038 1130949 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.838794 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
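The series of sed invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause_image is pinned to registry.k8s.io/pause:3.9, cgroup_manager is set to cgroupfs with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. A hedged Go sketch of the same kind of line-oriented rewrite, shown only for the pause image key (path and regex taken from the command in the log; this is an illustration, not the minikube implementation):

package main

import (
	"log"
	"os"
	"regexp"
)

// setPauseImage rewrites the pause_image key in a CRI-O drop-in, the
// equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
func setPauseImage(path, image string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "`+image+`"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.9"); err != nil {
		log.Fatal(err)
	}
}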
	I0328 01:02:33.852157 1130949 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:02:33.862921 1130949 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:02:33.862981 1130949 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:02:33.880973 1130949 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:02:33.892698 1130949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:02:34.029903 1130949 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 01:02:34.170977 1130949 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:02:34.171074 1130949 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:02:34.176652 1130949 start.go:562] Will wait 60s for crictl version
	I0328 01:02:34.176736 1130949 ssh_runner.go:195] Run: which crictl
	I0328 01:02:34.180993 1130949 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:02:34.224564 1130949 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
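After restarting CRI-O, the run waits up to 60s for /var/run/crio/crio.sock to appear and then for crictl to report the version block above. A small sketch of that stat-until-deadline wait (path and timeout taken from the log; illustrative only):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls a UNIX socket path until it exists or the deadline
// passes, mirroring "Will wait 60s for socket path /var/run/crio/crio.sock".
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio.sock is ready")
}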
	I0328 01:02:34.224675 1130949 ssh_runner.go:195] Run: crio --version
	I0328 01:02:34.254457 1130949 ssh_runner.go:195] Run: crio --version
	I0328 01:02:34.287281 1130949 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0328 01:02:32.778280 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .Start
	I0328 01:02:32.778470 1131323 main.go:141] libmachine: (old-k8s-version-986088) Ensuring networks are active...
	I0328 01:02:32.779179 1131323 main.go:141] libmachine: (old-k8s-version-986088) Ensuring network default is active
	I0328 01:02:32.779577 1131323 main.go:141] libmachine: (old-k8s-version-986088) Ensuring network mk-old-k8s-version-986088 is active
	I0328 01:02:32.779982 1131323 main.go:141] libmachine: (old-k8s-version-986088) Getting domain xml...
	I0328 01:02:32.780732 1131323 main.go:141] libmachine: (old-k8s-version-986088) Creating domain...
	I0328 01:02:34.066287 1131323 main.go:141] libmachine: (old-k8s-version-986088) Waiting to get IP...
	I0328 01:02:34.067193 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.067618 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.067684 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.067586 1132067 retry.go:31] will retry after 291.270379ms: waiting for machine to come up
	I0328 01:02:34.360203 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.360690 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.360721 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.360638 1132067 retry.go:31] will retry after 234.968456ms: waiting for machine to come up
	I0328 01:02:34.597291 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.597818 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.597849 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.597750 1132067 retry.go:31] will retry after 382.522593ms: waiting for machine to come up
	I0328 01:02:34.982502 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.983176 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.983205 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.983133 1132067 retry.go:31] will retry after 436.332635ms: waiting for machine to come up
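The retry.go lines above show the kvm2 driver polling for the guest's DHCP lease and sleeping a growing, jittered interval between attempts (291ms, 235ms, 383ms, 436ms, ...). A minimal sketch of that retry-with-backoff shape, with a hypothetical lookupIP callback standing in for the libvirt lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP keeps calling lookupIP until it returns an address or the
// overall deadline expires, backing off a little more on each attempt.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for {
		ip, err := lookupIP()
		if err == nil && ip != "" {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", errors.New("timed out waiting for machine to come up")
		}
		// grow the delay and add jitter, roughly like the intervals in the log
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay += delay / 2
	}
}

func main() {
	ip, err := waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 2*time.Second)
	fmt.Println(ip, err)
}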
	I0328 01:02:34.288748 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:34.292122 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:34.292516 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:34.292556 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:34.292869 1130949 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0328 01:02:34.298738 1130949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
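The two commands above make the host.minikube.internal mapping idempotent: any existing line for that name is filtered out of /etc/hosts and a fresh "192.168.72.1 host.minikube.internal" entry is appended. A sketch of that filter-and-append step in Go (the real run does it with grep/echo over SSH; this is only an illustration):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in the host name and
// appends a fresh "ip<TAB>name" mapping, like the grep -v / echo pipeline.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) || strings.HasSuffix(line, " "+name) {
			continue // drop any stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.72.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}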
	I0328 01:02:34.313529 1130949 kubeadm.go:877] updating cluster {Name:embed-certs-808809 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-808809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:02:34.313698 1130949 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 01:02:34.313762 1130949 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:02:34.356518 1130949 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0328 01:02:34.356614 1130949 ssh_runner.go:195] Run: which lz4
	I0328 01:02:34.361492 1130949 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0328 01:02:34.366053 1130949 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 01:02:34.366090 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0328 01:02:36.024197 1130949 crio.go:462] duration metric: took 1.662731937s to copy over tarball
	I0328 01:02:36.024287 1130949 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
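Because no preloaded images were found in the container runtime, the run checks whether /preloaded.tar.lz4 already exists on the guest (the stat fails), copies the ~403MB preload tarball over SSH, and unpacks it into /var with lz4-compressed tar. A sketch of the check-then-extract half on a local filesystem (the scp step is elided; the tar flags follow the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreloadIfPresent unpacks the preload tarball into dest when the
// tarball exists, mirroring the stat + "tar --xattrs ... -I lz4" sequence.
func extractPreloadIfPresent(tarball, dest string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload not on disk yet (would be copied first): %w", err)
	}
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreloadIfPresent("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}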
	I0328 01:02:35.421623 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:35.422164 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:35.422198 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:35.422135 1132067 retry.go:31] will retry after 700.861268ms: waiting for machine to come up
	I0328 01:02:36.124589 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:36.125001 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:36.125031 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:36.124948 1132067 retry.go:31] will retry after 932.342478ms: waiting for machine to come up
	I0328 01:02:37.058954 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:37.059390 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:37.059424 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:37.059332 1132067 retry.go:31] will retry after 1.163248691s: waiting for machine to come up
	I0328 01:02:38.224574 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:38.225019 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:38.225053 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:38.224959 1132067 retry.go:31] will retry after 1.13372539s: waiting for machine to come up
	I0328 01:02:39.360393 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:39.360953 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:39.360984 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:39.360906 1132067 retry.go:31] will retry after 1.793272671s: waiting for machine to come up
	I0328 01:02:38.420741 1130949 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.396415089s)
	I0328 01:02:38.420788 1130949 crio.go:469] duration metric: took 2.39655808s to extract the tarball
	I0328 01:02:38.420797 1130949 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0328 01:02:38.459869 1130949 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:02:38.505999 1130949 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 01:02:38.506030 1130949 cache_images.go:84] Images are preloaded, skipping loading
	I0328 01:02:38.506039 1130949 kubeadm.go:928] updating node { 192.168.72.210 8443 v1.29.3 crio true true} ...
	I0328 01:02:38.506185 1130949 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-808809 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-808809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:02:38.506301 1130949 ssh_runner.go:195] Run: crio config
	I0328 01:02:38.551608 1130949 cni.go:84] Creating CNI manager for ""
	I0328 01:02:38.551633 1130949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:02:38.551646 1130949 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:02:38.551673 1130949 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.210 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-808809 NodeName:embed-certs-808809 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 01:02:38.551813 1130949 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-808809"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 01:02:38.551881 1130949 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 01:02:38.562640 1130949 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:02:38.562732 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:02:38.572870 1130949 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0328 01:02:38.590866 1130949 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:02:38.608302 1130949 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0328 01:02:38.626925 1130949 ssh_runner.go:195] Run: grep 192.168.72.210	control-plane.minikube.internal$ /etc/hosts
	I0328 01:02:38.631111 1130949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.210	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:02:38.644528 1130949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:02:38.785485 1130949 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:02:38.804087 1130949 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809 for IP: 192.168.72.210
	I0328 01:02:38.804113 1130949 certs.go:194] generating shared ca certs ...
	I0328 01:02:38.804132 1130949 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:02:38.804285 1130949 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:02:38.804326 1130949 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:02:38.804363 1130949 certs.go:256] generating profile certs ...
	I0328 01:02:38.804505 1130949 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/client.key
	I0328 01:02:38.804588 1130949 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/apiserver.key.bdc16448
	I0328 01:02:38.804638 1130949 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/proxy-client.key
	I0328 01:02:38.804798 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:02:38.804829 1130949 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:02:38.804836 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:02:38.804860 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:02:38.804882 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:02:38.804902 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:02:38.804943 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:02:38.805829 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:02:38.864847 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:02:38.899197 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:02:38.926734 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:02:38.958277 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0328 01:02:38.997201 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:02:39.023136 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:02:39.048459 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:02:39.074052 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:02:39.099326 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:02:39.124775 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:02:39.149638 1130949 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:02:39.169169 1130949 ssh_runner.go:195] Run: openssl version
	I0328 01:02:39.175948 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:02:39.188255 1130949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:02:39.194296 1130949 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:02:39.194374 1130949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:02:39.201138 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:02:39.213554 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:02:39.226474 1130949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:02:39.232074 1130949 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:02:39.232149 1130949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:02:39.238733 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:02:39.250983 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:02:39.263746 1130949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:02:39.268967 1130949 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:02:39.269038 1130949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:02:39.275589 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
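Each CA bundle copied above is made visible to OpenSSL-style clients by hashing its subject with openssl x509 -hash and symlinking /etc/ssl/certs/<hash>.0 back to the PEM file, which is exactly what the ln -fs commands for 51391683.0, 3ec20f2e.0 and b5213941.0 do. A sketch of that hash-and-link step, shelling out to the same openssl invocation seen in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of a PEM certificate and
// creates the /etc/ssl/certs/<hash>.0 symlink that TLS clients look up.
func linkCertByHash(pemPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // replace any stale link, like ln -fs
	return link, os.Symlink(pemPath, link)
}

func main() {
	link, err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked", link)
}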
	I0328 01:02:39.287731 1130949 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:02:39.292985 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:02:39.300366 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:02:39.307241 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:02:39.314522 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:02:39.321070 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:02:39.327777 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
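The six openssl x509 -checkend 86400 runs above verify that none of the control-plane certificates expires within the next 24 hours; a non-zero exit would trigger regeneration. The same check expressed with Go's crypto/x509, as an illustration:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the equivalent of: openssl x509 -noout -in <path> -checkend <seconds>.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}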
	I0328 01:02:39.334174 1130949 kubeadm.go:391] StartCluster: {Name:embed-certs-808809 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.3 ClusterName:embed-certs-808809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:02:39.334310 1130949 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:02:39.334367 1130949 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:02:39.376035 1130949 cri.go:89] found id: ""
	I0328 01:02:39.376145 1130949 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:02:39.387349 1130949 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:02:39.387377 1130949 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:02:39.387385 1130949 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:02:39.387469 1130949 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:02:39.397918 1130949 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:02:39.399122 1130949 kubeconfig.go:125] found "embed-certs-808809" server: "https://192.168.72.210:8443"
	I0328 01:02:39.401219 1130949 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:02:39.411475 1130949 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.210
	I0328 01:02:39.411562 1130949 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:02:39.411583 1130949 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:02:39.411650 1130949 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:02:39.449529 1130949 cri.go:89] found id: ""
	I0328 01:02:39.449638 1130949 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:02:39.468553 1130949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:02:39.479489 1130949 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:02:39.479522 1130949 kubeadm.go:156] found existing configuration files:
	
	I0328 01:02:39.479589 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:02:39.489619 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:02:39.489689 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:02:39.499726 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:02:39.509362 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:02:39.509447 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:02:39.519262 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:02:39.528858 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:02:39.528920 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:02:39.538784 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:02:39.548517 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:02:39.548593 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:02:39.559931 1130949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:02:39.574178 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:39.706243 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.342144 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.559108 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.636713 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.743171 1130949 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:02:40.743269 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:02:41.243401 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:02:41.743363 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:02:41.776504 1130949 api_server.go:72] duration metric: took 1.033329844s to wait for apiserver process to appear ...
	I0328 01:02:41.776547 1130949 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:02:41.776574 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:41.777140 1130949 api_server.go:269] stopped: https://192.168.72.210:8443/healthz: Get "https://192.168.72.210:8443/healthz": dial tcp 192.168.72.210:8443: connect: connection refused
	I0328 01:02:42.276690 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
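Once kubeadm has restarted the static pods, the run polls https://192.168.72.210:8443/healthz roughly every 500ms: first the connection is refused, then the endpoint answers 403 for anonymous requests, then 500 while post-start hooks finish, and finally 200 (all visible below). A sketch of that polling loop (TLS verification is skipped here purely for illustration; minikube authenticates with the cluster's client certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes, tolerating the interim 403/500 responses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   3 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.72.210:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}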
	I0328 01:02:41.156898 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:41.157309 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:41.157336 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:41.157263 1132067 retry.go:31] will retry after 1.863775673s: waiting for machine to come up
	I0328 01:02:43.023074 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:43.023470 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:43.023507 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:43.023419 1132067 retry.go:31] will retry after 2.73600503s: waiting for machine to come up
	I0328 01:02:44.743286 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:02:44.743383 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:02:44.743412 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:44.822370 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:02:44.822416 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:02:44.822436 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:44.847406 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:02:44.847462 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:02:45.276899 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:45.281884 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:02:45.281919 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:02:45.777495 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:45.783673 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:02:45.783704 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:02:46.277372 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:46.282281 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 200:
	ok
	I0328 01:02:46.291242 1130949 api_server.go:141] control plane version: v1.29.3
	I0328 01:02:46.291287 1130949 api_server.go:131] duration metric: took 4.514730698s to wait for apiserver health ...
	I0328 01:02:46.291301 1130949 cni.go:84] Creating CNI manager for ""
	I0328 01:02:46.291310 1130949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:02:46.293461 1130949 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:02:46.294971 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:02:46.312955 1130949 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:02:46.345653 1130949 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:02:46.355470 1130949 system_pods.go:59] 8 kube-system pods found
	I0328 01:02:46.355506 1130949 system_pods.go:61] "coredns-76f75df574-pr5d8" [90a6f3d5-6f33-4c41-804b-4b20c518aa23] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:02:46.355512 1130949 system_pods.go:61] "etcd-embed-certs-808809" [93b6b8ee-f83f-4848-b2c5-912ec07acd52] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 01:02:46.355519 1130949 system_pods.go:61] "kube-apiserver-embed-certs-808809" [22eb788f-4647-4a07-b5bf-ecdd54c28fcd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 01:02:46.355530 1130949 system_pods.go:61] "kube-controller-manager-embed-certs-808809" [83fecd9f-c0de-4afe-b5b5-7c04bd3adc20] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 01:02:46.355545 1130949 system_pods.go:61] "kube-proxy-qwzpg" [57a814c6-54c8-4fa7-b7d7-bcdd4bbc91d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:02:46.355553 1130949 system_pods.go:61] "kube-scheduler-embed-certs-808809" [0b229d84-43fb-45ee-8d49-39204812d490] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 01:02:46.355568 1130949 system_pods.go:61] "metrics-server-57f55c9bc5-swsxp" [4b20e133-3054-4806-9b7f-44d8c8c35a4d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:02:46.355580 1130949 system_pods.go:61] "storage-provisioner" [59303061-19e3-4aed-8753-804988a2a44e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:02:46.355590 1130949 system_pods.go:74] duration metric: took 9.908316ms to wait for pod list to return data ...
	I0328 01:02:46.355603 1130949 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:02:46.358936 1130949 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:02:46.358987 1130949 node_conditions.go:123] node cpu capacity is 2
	I0328 01:02:46.359006 1130949 node_conditions.go:105] duration metric: took 3.394695ms to run NodePressure ...
	I0328 01:02:46.359054 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:46.686479 1130949 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 01:02:46.692502 1130949 kubeadm.go:733] kubelet initialised
	I0328 01:02:46.692526 1130949 kubeadm.go:734] duration metric: took 6.022393ms waiting for restarted kubelet to initialise ...
	I0328 01:02:46.692534 1130949 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:02:46.699146 1130949 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-pr5d8" in "kube-system" namespace to be "Ready" ...
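The pod_ready.go wait here is a poll on the pod's Ready condition: the coredns pod is fetched repeatedly for up to 4m0s and the status of its PodReady condition is inspected (the later 'has status "Ready":"False"' lines are that check not yet passing). A hedged client-go sketch of the same loop; the kubeconfig path and timings are illustrative, not minikube's code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its PodReady condition is True or the
// timeout expires, like the wait on coredns-76f75df574-pr5d8 above.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s never became Ready", ns, name)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(cs, "kube-system", "coredns-76f75df574-pr5d8", 4*time.Minute))
}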
	I0328 01:02:45.762440 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:45.762891 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:45.762915 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:45.762845 1132067 retry.go:31] will retry after 2.201941476s: waiting for machine to come up
	I0328 01:02:47.966601 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:47.967196 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:47.967237 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:47.967144 1132067 retry.go:31] will retry after 4.122216816s: waiting for machine to come up
	I0328 01:02:48.709890 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:51.207697 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:53.391471 1131600 start.go:364] duration metric: took 2m47.603687739s to acquireMachinesLock for "default-k8s-diff-port-283961"
	I0328 01:02:53.391553 1131600 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:02:53.391565 1131600 fix.go:54] fixHost starting: 
	I0328 01:02:53.391980 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:02:53.392031 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:02:53.409035 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34073
	I0328 01:02:53.409556 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:02:53.410105 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:02:53.410136 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:02:53.410492 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:02:53.410734 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:02:53.410903 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:02:53.412710 1131600 fix.go:112] recreateIfNeeded on default-k8s-diff-port-283961: state=Stopped err=<nil>
	I0328 01:02:53.412739 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	W0328 01:02:53.412927 1131600 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:02:53.414773 1131600 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-283961" ...
	I0328 01:02:52.091210 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.091759 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has current primary IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.091794 1131323 main.go:141] libmachine: (old-k8s-version-986088) Found IP for machine: 192.168.50.174
	I0328 01:02:52.091841 1131323 main.go:141] libmachine: (old-k8s-version-986088) Reserving static IP address...
	I0328 01:02:52.092295 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "old-k8s-version-986088", mac: "52:54:00:f6:94:40", ip: "192.168.50.174"} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.092321 1131323 main.go:141] libmachine: (old-k8s-version-986088) Reserved static IP address: 192.168.50.174
	I0328 01:02:52.092343 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | skip adding static IP to network mk-old-k8s-version-986088 - found existing host DHCP lease matching {name: "old-k8s-version-986088", mac: "52:54:00:f6:94:40", ip: "192.168.50.174"}
	I0328 01:02:52.092356 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | Getting to WaitForSSH function...
	I0328 01:02:52.092373 1131323 main.go:141] libmachine: (old-k8s-version-986088) Waiting for SSH to be available...
	I0328 01:02:52.094682 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.095012 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.095033 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.095158 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | Using SSH client type: external
	I0328 01:02:52.095180 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa (-rw-------)
	I0328 01:02:52.095208 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:02:52.095218 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | About to run SSH command:
	I0328 01:02:52.095232 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | exit 0
	I0328 01:02:52.218494 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | SSH cmd err, output: <nil>: 
	I0328 01:02:52.218983 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetConfigRaw
	I0328 01:02:52.219663 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:52.222349 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.222791 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.222823 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.223191 1131323 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/config.json ...
	I0328 01:02:52.223388 1131323 machine.go:94] provisionDockerMachine start ...
	I0328 01:02:52.223409 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:52.223605 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.225686 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.225999 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.226038 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.226131 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.226341 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.226507 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.226633 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.226802 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.227078 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.227095 1131323 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:02:52.327218 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:02:52.327249 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 01:02:52.327515 1131323 buildroot.go:166] provisioning hostname "old-k8s-version-986088"
	I0328 01:02:52.327542 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 01:02:52.327754 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.330253 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.330661 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.330691 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.330827 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.331048 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.331258 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.331406 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.331593 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.331772 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.331783 1131323 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-986088 && echo "old-k8s-version-986088" | sudo tee /etc/hostname
	I0328 01:02:52.445910 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-986088
	
	I0328 01:02:52.445943 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.449023 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.449320 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.449358 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.449595 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.449810 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.449970 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.450116 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.450310 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.450572 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.450640 1131323 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-986088' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-986088/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-986088' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:02:52.567493 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:02:52.567529 1131323 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:02:52.567559 1131323 buildroot.go:174] setting up certificates
	I0328 01:02:52.567573 1131323 provision.go:84] configureAuth start
	I0328 01:02:52.567587 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 01:02:52.567944 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:52.570860 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.571363 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.571400 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.571547 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.574052 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.574483 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.574517 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.574619 1131323 provision.go:143] copyHostCerts
	I0328 01:02:52.574698 1131323 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:02:52.574710 1131323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:02:52.574778 1131323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:02:52.574894 1131323 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:02:52.574908 1131323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:02:52.574985 1131323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:02:52.575086 1131323 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:02:52.575095 1131323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:02:52.575117 1131323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:02:52.575194 1131323 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-986088 san=[127.0.0.1 192.168.50.174 localhost minikube old-k8s-version-986088]
	I0328 01:02:52.688709 1131323 provision.go:177] copyRemoteCerts
	I0328 01:02:52.688776 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:02:52.688809 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.691529 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.691977 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.692024 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.692188 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.692425 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.692620 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.692774 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:52.777200 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0328 01:02:52.808740 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:02:52.836646 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:02:52.862627 1131323 provision.go:87] duration metric: took 295.032419ms to configureAuth
	I0328 01:02:52.862668 1131323 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:02:52.862908 1131323 config.go:182] Loaded profile config "old-k8s-version-986088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0328 01:02:52.863019 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.865838 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.866585 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.866630 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.867271 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.867521 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.867687 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.867826 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.867961 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.868176 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.868194 1131323 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:02:53.154903 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:02:53.154936 1131323 machine.go:97] duration metric: took 931.534047ms to provisionDockerMachine
	I0328 01:02:53.154949 1131323 start.go:293] postStartSetup for "old-k8s-version-986088" (driver="kvm2")
	I0328 01:02:53.154961 1131323 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:02:53.154997 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.155353 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:02:53.155386 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.158072 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.158448 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.158482 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.158612 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.158825 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.158974 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.159102 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:53.243411 1131323 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:02:53.247745 1131323 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:02:53.247769 1131323 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:02:53.247830 1131323 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:02:53.247903 1131323 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:02:53.247990 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:02:53.258574 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:02:53.284249 1131323 start.go:296] duration metric: took 129.2844ms for postStartSetup
	I0328 01:02:53.284300 1131323 fix.go:56] duration metric: took 20.532468979s for fixHost
	I0328 01:02:53.284324 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.287097 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.287505 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.287534 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.287642 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.287874 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.288039 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.288225 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.288439 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:53.288601 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:53.288612 1131323 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 01:02:53.391262 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587773.373998758
	
	I0328 01:02:53.391292 1131323 fix.go:216] guest clock: 1711587773.373998758
	I0328 01:02:53.391299 1131323 fix.go:229] Guest: 2024-03-28 01:02:53.373998758 +0000 UTC Remote: 2024-03-28 01:02:53.284304642 +0000 UTC m=+227.998260980 (delta=89.694116ms)
	I0328 01:02:53.391341 1131323 fix.go:200] guest clock delta is within tolerance: 89.694116ms
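The guest/host clock comparison above amounts to subtracting the two timestamps and checking the delta against a tolerance. A small sketch using the values from the log (the 1s tolerance is an assumption; the threshold fix.go actually uses is not shown here):

// clockdelta_sketch.go - illustrative only; values copied from the log above.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest time as reported by `date +%s.%N` over SSH; host time recorded just
	// before the command returned. Float conversion loses a little precision,
	// which is fine for a sketch.
	guest := time.Unix(0, int64(1711587773.373998758*float64(time.Second)))
	host := time.Date(2024, 3, 28, 1, 2, 53, 284304642, time.UTC)

	delta := guest.Sub(host)
	const tolerance = time.Second // assumed threshold, not minikube's constant
	if delta < tolerance && delta > -tolerance {
		fmt.Printf("guest clock delta is within tolerance: %s\n", delta)
	} else {
		fmt.Printf("guest clock skewed by %s; would adjust\n", delta)
	}
}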
	I0328 01:02:53.391346 1131323 start.go:83] releasing machines lock for "old-k8s-version-986088", held for 20.639550927s
	I0328 01:02:53.391377 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.391728 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:53.394421 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.394780 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.394811 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.394932 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.395449 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.395729 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.395828 1131323 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:02:53.395883 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.395985 1131323 ssh_runner.go:195] Run: cat /version.json
	I0328 01:02:53.396014 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.398819 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399010 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399281 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.399320 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399451 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.399550 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.399620 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399640 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.399880 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.399902 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.400065 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.400081 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:53.400245 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.400445 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:53.514453 1131323 ssh_runner.go:195] Run: systemctl --version
	I0328 01:02:53.521123 1131323 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:02:53.678366 1131323 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:02:53.685402 1131323 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:02:53.685473 1131323 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:02:53.702781 1131323 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:02:53.702816 1131323 start.go:494] detecting cgroup driver to use...
	I0328 01:02:53.702900 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:02:53.720343 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:02:53.736749 1131323 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:02:53.736824 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:02:53.761087 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:02:53.779008 1131323 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:02:53.895064 1131323 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:02:54.060741 1131323 docker.go:233] disabling docker service ...
	I0328 01:02:54.060825 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:02:54.079139 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:02:54.093523 1131323 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:02:54.247544 1131323 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:02:54.396392 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:02:54.422612 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:02:54.443759 1131323 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0328 01:02:54.443817 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.459794 1131323 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:02:54.459875 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.472784 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.484963 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.496654 1131323 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:02:54.508382 1131323 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:02:54.518607 1131323 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:02:54.518687 1131323 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:02:54.532356 1131323 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:02:54.544424 1131323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:02:54.685782 1131323 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 01:02:54.847233 1131323 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:02:54.847314 1131323 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:02:54.853148 1131323 start.go:562] Will wait 60s for crictl version
	I0328 01:02:54.853248 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:02:54.857536 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:02:54.901937 1131323 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:02:54.902082 1131323 ssh_runner.go:195] Run: crio --version
	I0328 01:02:54.935571 1131323 ssh_runner.go:195] Run: crio --version
	I0328 01:02:54.971452 1131323 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
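The cri-o reconfiguration a few lines above is four in-place sed edits (pause image, cgroupfs cgroup manager, conmon cgroup) followed by a daemon-reload and a restart of crio. A local, non-SSH sketch of the same sequence; the sed expressions are copied from the log, while running them outside the minikube guest is not advisable and the wrapper is illustrative only:

// crioconfig_sketch.go - rough local equivalent of the cri-o reconfiguration above.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
	}
	return nil
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := [][]string{
		{"sudo", "sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|`, conf},
		{"sudo", "sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, conf},
		{"sudo", "sed", "-i", `/conmon_cgroup = .*/d`, conf},
		{"sudo", "sed", "-i", `/cgroup_manager = .*/a conmon_cgroup = "pod"`, conf},
		{"sudo", "systemctl", "daemon-reload"},
		{"sudo", "systemctl", "restart", "crio"},
	}
	for _, s := range steps {
		if err := run(s[0], s[1:]...); err != nil {
			fmt.Println(err)
			return
		}
	}
}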
	I0328 01:02:54.972964 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:54.976523 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:54.976985 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:54.977017 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:54.977369 1131323 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0328 01:02:54.982326 1131323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:02:54.996239 1131323 kubeadm.go:877] updating cluster {Name:old-k8s-version-986088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:02:54.996371 1131323 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0328 01:02:54.996433 1131323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:02:55.045404 1131323 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0328 01:02:55.045483 1131323 ssh_runner.go:195] Run: which lz4
	I0328 01:02:55.050226 1131323 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0328 01:02:55.055182 1131323 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 01:02:55.055221 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
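Because the stat probe above finds no preload in the guest, the ~473 MB tarball is copied over and later unpacked into /var. A sketch of that check-copy-extract flow over plain ssh/scp; the key path, tarball path, and tar command are taken from the log, while the ssh/scp invocations themselves are assumptions (minikube uses its own ssh_runner):

// preload_sketch.go - illustrative only; not minikube's ssh_runner flow.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	remote := "docker@192.168.50.174"
	key := "/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa"
	tarball := "/home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"

	// 1. Does the guest already have the preload? (mirrors the stat probe above)
	if err := exec.Command("ssh", "-i", key, remote, "stat /preloaded.tar.lz4").Run(); err != nil {
		// 2. Copy it over, then 3. unpack it into /var as the log later does.
		if err := exec.Command("scp", "-i", key, tarball, remote+":/preloaded.tar.lz4").Run(); err != nil {
			fmt.Println("copy failed:", err)
			return
		}
		cmd := "sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"
		if err := exec.Command("ssh", "-i", key, remote, cmd).Run(); err != nil {
			fmt.Println("extract failed:", err)
		}
	}
}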
	I0328 01:02:53.416101 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Start
	I0328 01:02:53.416332 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Ensuring networks are active...
	I0328 01:02:53.417021 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Ensuring network default is active
	I0328 01:02:53.417446 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Ensuring network mk-default-k8s-diff-port-283961 is active
	I0328 01:02:53.417857 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Getting domain xml...
	I0328 01:02:53.418555 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Creating domain...
	I0328 01:02:54.777201 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting to get IP...
	I0328 01:02:54.778055 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:54.778563 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:54.778705 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:54.778537 1132240 retry.go:31] will retry after 259.031702ms: waiting for machine to come up
	I0328 01:02:55.039365 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.039926 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.039963 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:55.039860 1132240 retry.go:31] will retry after 254.124553ms: waiting for machine to come up
	I0328 01:02:55.295658 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.296218 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.296265 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:55.296174 1132240 retry.go:31] will retry after 349.637234ms: waiting for machine to come up
	I0328 01:02:55.647590 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.648356 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.648392 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:55.648298 1132240 retry.go:31] will retry after 446.471208ms: waiting for machine to come up
	I0328 01:02:53.707811 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:55.708380 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:57.213059 1130949 pod_ready.go:92] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:57.213097 1130949 pod_ready.go:81] duration metric: took 10.513921238s for pod "coredns-76f75df574-pr5d8" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.213113 1130949 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.222308 1130949 pod_ready.go:92] pod "etcd-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:57.222344 1130949 pod_ready.go:81] duration metric: took 9.214056ms for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.222357 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.231530 1130949 pod_ready.go:92] pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:57.231558 1130949 pod_ready.go:81] duration metric: took 9.192864ms for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.231568 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:56.994163 1131323 crio.go:462] duration metric: took 1.943992561s to copy over tarball
	I0328 01:02:56.994252 1131323 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 01:03:00.215115 1131323 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.220825311s)
	I0328 01:03:00.215159 1131323 crio.go:469] duration metric: took 3.22095583s to extract the tarball
	I0328 01:03:00.215171 1131323 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0328 01:03:00.259151 1131323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:00.298446 1131323 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0328 01:03:00.298492 1131323 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0328 01:03:00.298585 1131323 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:00.298601 1131323 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.298613 1131323 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.298644 1131323 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.298662 1131323 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.298698 1131323 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0328 01:03:00.298593 1131323 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.298585 1131323 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.300347 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.300424 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.300470 1131323 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.300474 1131323 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.300637 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.300652 1131323 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0328 01:03:00.300723 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.300793 1131323 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:02:56.095939 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.096463 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.096501 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:56.096412 1132240 retry.go:31] will retry after 490.029649ms: waiting for machine to come up
	I0328 01:02:56.588298 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.588835 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.588868 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:56.588796 1132240 retry.go:31] will retry after 831.356628ms: waiting for machine to come up
	I0328 01:02:57.421917 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:57.422410 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:57.422443 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:57.422353 1132240 retry.go:31] will retry after 1.164764985s: waiting for machine to come up
	I0328 01:02:58.588827 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:58.589183 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:58.589225 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:58.589119 1132240 retry.go:31] will retry after 1.307248783s: waiting for machine to come up
	I0328 01:02:59.897607 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:59.897976 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:59.898008 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:59.897926 1132240 retry.go:31] will retry after 1.560958271s: waiting for machine to come up
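The repeated "will retry after ..." lines for default-k8s-diff-port-283961 show the driver polling for the VM's DHCP lease with a growing, jittered delay. A minimal sketch of that loop; the growth factor, jitter, and timeout are assumptions, since retry.go's exact policy is not shown in the log:

// retrybackoff_sketch.go - illustrative only; not minikube's retry.go.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or maxWait elapses, sleeping a
// randomized, growing interval between attempts (compare the 259ms -> 1.56s
// progression in the log above).
func retryWithBackoff(fn func() error, maxWait time.Duration) error {
	start := time.Now()
	base := 250 * time.Millisecond
	for {
		if err := fn(); err == nil {
			return nil
		}
		if time.Since(start) > maxWait {
			return errors.New("timed out waiting for machine to come up")
		}
		sleep := base + time.Duration(rand.Int63n(int64(base))) // add jitter
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		base = base * 3 / 2 // grow the base interval each attempt
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 5 { // pretend the DHCP lease shows up on the fifth poll
			return errors.New("unable to find current IP address")
		}
		return nil
	}, 2*time.Minute)
	fmt.Println("done:", err)
}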
	I0328 01:02:58.241179 1130949 pod_ready.go:92] pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:58.241216 1130949 pod_ready.go:81] duration metric: took 1.00963904s for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.241245 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qwzpg" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.249787 1130949 pod_ready.go:92] pod "kube-proxy-qwzpg" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:58.249826 1130949 pod_ready.go:81] duration metric: took 8.571225ms for pod "kube-proxy-qwzpg" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.249840 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.405101 1130949 pod_ready.go:92] pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:58.405130 1130949 pod_ready.go:81] duration metric: took 155.281142ms for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.405141 1130949 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:00.412202 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:02.412688 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:00.499788 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0328 01:03:00.539135 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.541462 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.544184 1131323 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0328 01:03:00.544227 1131323 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0328 01:03:00.544261 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.555720 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.560189 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.562639 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.574105 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.681717 1131323 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0328 01:03:00.681742 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0328 01:03:00.681765 1131323 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.681803 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.682033 1131323 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0328 01:03:00.682076 1131323 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.682115 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.732868 1131323 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0328 01:03:00.732922 1131323 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.732988 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.742680 1131323 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0328 01:03:00.742730 1131323 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0328 01:03:00.742762 1131323 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.742777 1131323 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0328 01:03:00.742805 1131323 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.742770 1131323 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.742817 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.742851 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.742865 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.770435 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.770472 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0328 01:03:00.770567 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.770588 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.770727 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.770760 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.770728 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.882338 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0328 01:03:00.896602 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0328 01:03:00.918814 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0328 01:03:00.918869 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0328 01:03:00.918919 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0328 01:03:00.918968 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0328 01:03:01.186124 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:01.334547 1131323 cache_images.go:92] duration metric: took 1.036031169s to LoadCachedImages
	W0328 01:03:01.334676 1131323 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0328 01:03:01.334694 1131323 kubeadm.go:928] updating node { 192.168.50.174 8443 v1.20.0 crio true true} ...
	I0328 01:03:01.334827 1131323 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-986088 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
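The unit fragment above is the kubelet drop-in minikube renders for this profile; the scp at 01:03:01.415 further down writes it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A minimal way to inspect the effective unit on the node, assuming shell access to the VM (e.g. via minikube ssh), would be:

    # show the merged kubelet unit, including the drop-in above
    sudo systemctl cat kubelet
    # or read the rendered drop-in directly
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf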
	I0328 01:03:01.334926 1131323 ssh_runner.go:195] Run: crio config
	I0328 01:03:01.391004 1131323 cni.go:84] Creating CNI manager for ""
	I0328 01:03:01.391034 1131323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:01.391054 1131323 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:03:01.391081 1131323 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.174 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-986088 NodeName:old-k8s-version-986088 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0328 01:03:01.391265 1131323 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-986088"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
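	The "# disable disk resource management by default" block above sets every evictionHard threshold to 0% and imageGCHighThresholdPercent to 100, effectively turning off disk-pressure eviction and image GC inside the small test VM. The rendered file is shipped to the node as /var/tmp/minikube/kubeadm.yaml.new (scp at 01:03:01.456 below); a quick sanity check on the node, as a sketch rather than part of the test, would be:

    # confirm the eviction overrides made it into the file minikube copied to the node
    sudo grep -A4 'evictionHard' /var/tmp/minikube/kubeadm.yaml.new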
	
	I0328 01:03:01.391347 1131323 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0328 01:03:01.403684 1131323 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:03:01.403779 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:03:01.415168 1131323 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0328 01:03:01.434329 1131323 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:03:01.456280 1131323 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0328 01:03:01.476625 1131323 ssh_runner.go:195] Run: grep 192.168.50.174	control-plane.minikube.internal$ /etc/hosts
	I0328 01:03:01.480867 1131323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
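The bash one-liner above updates /etc/hosts idempotently: it filters out any stale control-plane.minikube.internal entry, appends the current mapping, and then sudo-copies the temp file back into place (a plain shell redirection would not run with root privileges). To verify the result on the node, roughly:

    grep 'control-plane.minikube.internal' /etc/hosts
    # resolve the name through the libc resolver, which consults /etc/hosts
    getent hosts control-plane.minikube.internal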
	I0328 01:03:01.493833 1131323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:01.642273 1131323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:03:01.661857 1131323 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088 for IP: 192.168.50.174
	I0328 01:03:01.661887 1131323 certs.go:194] generating shared ca certs ...
	I0328 01:03:01.661909 1131323 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:01.662115 1131323 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:03:01.662174 1131323 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:03:01.662188 1131323 certs.go:256] generating profile certs ...
	I0328 01:03:01.662324 1131323 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/client.key
	I0328 01:03:01.662399 1131323 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.key.b88fbc7e
	I0328 01:03:01.662447 1131323 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.key
	I0328 01:03:01.662600 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:03:01.662656 1131323 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:03:01.662672 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:03:01.662703 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:03:01.662738 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:03:01.662774 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:03:01.662826 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:01.663831 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:03:01.697171 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:03:01.742118 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:03:01.783263 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:03:01.831682 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0328 01:03:01.878051 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:03:01.915626 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:03:01.942247 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:03:01.969054 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:03:01.998651 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:03:02.024881 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:03:02.051284 1131323 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:03:02.070414 1131323 ssh_runner.go:195] Run: openssl version
	I0328 01:03:02.076635 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:03:02.089288 1131323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:03:02.094260 1131323 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:03:02.094322 1131323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:03:02.100846 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:03:02.114474 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:03:02.126467 1131323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:02.131240 1131323 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:02.131293 1131323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:02.137496 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 01:03:02.150863 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:03:02.163536 1131323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:03:02.168767 1131323 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:03:02.168850 1131323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:03:02.175218 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
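The <hash>.0 link names above follow OpenSSL's subject-hash convention: the file name is the output of `openssl x509 -hash` for the certificate, which is how TLS clients locate CAs under /etc/ssl/certs. Reproducing the minikubeCA link from the two commands above by hand would look like:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0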
	I0328 01:03:02.188272 1131323 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:03:02.193348 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:03:02.199969 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:03:02.206424 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:03:02.213530 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:03:02.220136 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:03:02.226502 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
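Each `-checkend 86400` call above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means it will not expire inside that window, non-zero means it would and needs regenerating. For example:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" \
      || echo "expires within 24h"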
	I0328 01:03:02.232708 1131323 kubeadm.go:391] StartCluster: {Name:old-k8s-version-986088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:03:02.232831 1131323 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:03:02.232890 1131323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:02.280062 1131323 cri.go:89] found id: ""
	I0328 01:03:02.280160 1131323 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:03:02.291968 1131323 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:03:02.292003 1131323 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:03:02.292011 1131323 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:03:02.292072 1131323 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:03:02.304006 1131323 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:03:02.305105 1131323 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-986088" does not appear in /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:03:02.305785 1131323 kubeconfig.go:62] /home/jenkins/minikube-integration/18485-1069254/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-986088" cluster setting kubeconfig missing "old-k8s-version-986088" context setting]
	I0328 01:03:02.306728 1131323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:02.308610 1131323 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:03:02.320212 1131323 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.174
	I0328 01:03:02.320265 1131323 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:03:02.320283 1131323 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:03:02.320356 1131323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:02.366411 1131323 cri.go:89] found id: ""
	I0328 01:03:02.366500 1131323 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:03:02.388351 1131323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:03:02.402621 1131323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:03:02.402652 1131323 kubeadm.go:156] found existing configuration files:
	
	I0328 01:03:02.402718 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:03:02.415559 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:03:02.415633 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:03:02.426666 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:03:02.439497 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:03:02.439558 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:03:02.451040 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:03:02.461780 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:03:02.461876 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:03:02.473295 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:03:02.484762 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:03:02.484841 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:03:02.496304 1131323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:03:02.507634 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:02.641980 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:03.598106 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:03.840026 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:03.970336 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
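The five `kubeadm init phase` calls above (certs, kubeconfig, kubelet-start, control-plane, local etcd) regenerate the pieces of an existing cluster without running a full `kubeadm init`, which is how the restart path preserves cluster state. The control-plane and etcd phases drop static pod manifests under the staticPodPath from the config; listing them on the node is a quick way to see what was produced:

    sudo ls -l /etc/kubernetes/manifests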
	I0328 01:03:04.067774 1131323 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:03:04.067911 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:04.568260 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:05.068794 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:01.460535 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:01.461008 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:01.461039 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:01.460962 1132240 retry.go:31] will retry after 1.839531745s: waiting for machine to come up
	I0328 01:03:03.302965 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:03.303445 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:03.303479 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:03.303387 1132240 retry.go:31] will retry after 2.461748315s: waiting for machine to come up
	I0328 01:03:04.413898 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:06.913608 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:05.568716 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:06.068362 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:06.568235 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:07.068696 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:07.567976 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:08.068032 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:08.568586 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:09.068046 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:09.568699 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:10.067967 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
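The repeated `pgrep -xnf kube-apiserver.*minikube.*` runs are minikube polling roughly every 500ms for the apiserver process: -f matches the pattern against the full command line, -x requires an exact (whole-line) match, and -n returns only the newest matching PID. Run by hand on the node it would look like:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo "apiserver process found"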
	I0328 01:03:05.767795 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:05.768329 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:05.768360 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:05.768279 1132240 retry.go:31] will retry after 2.321291255s: waiting for machine to come up
	I0328 01:03:08.092644 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:08.093094 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:08.093131 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:08.093046 1132240 retry.go:31] will retry after 4.151205276s: waiting for machine to come up
	I0328 01:03:09.413199 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:11.912234 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:13.671756 1130827 start.go:364] duration metric: took 54.966750689s to acquireMachinesLock for "no-preload-248059"
	I0328 01:03:13.671815 1130827 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:03:13.671823 1130827 fix.go:54] fixHost starting: 
	I0328 01:03:13.672255 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:03:13.672292 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:03:13.689811 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42017
	I0328 01:03:13.690364 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:03:13.690817 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:03:13.690843 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:03:13.691213 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:03:13.691395 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:13.691523 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:03:13.693093 1130827 fix.go:112] recreateIfNeeded on no-preload-248059: state=Stopped err=<nil>
	I0328 01:03:13.693123 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	W0328 01:03:13.693280 1130827 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:03:13.695158 1130827 out.go:177] * Restarting existing kvm2 VM for "no-preload-248059" ...
	I0328 01:03:10.568240 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:11.068028 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:11.568146 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:12.068467 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:12.568820 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:13.068031 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:13.568977 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:14.068050 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:14.567938 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:15.068711 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:12.248769 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.249440 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Found IP for machine: 192.168.39.224
	I0328 01:03:12.249467 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Reserving static IP address...
	I0328 01:03:12.249498 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has current primary IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.249832 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-283961", mac: "52:54:00:c4:df:6f", ip: "192.168.39.224"} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.249872 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | skip adding static IP to network mk-default-k8s-diff-port-283961 - found existing host DHCP lease matching {name: "default-k8s-diff-port-283961", mac: "52:54:00:c4:df:6f", ip: "192.168.39.224"}
	I0328 01:03:12.249888 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Reserved static IP address: 192.168.39.224
	I0328 01:03:12.249908 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for SSH to be available...
	I0328 01:03:12.249921 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Getting to WaitForSSH function...
	I0328 01:03:12.252053 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.252487 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.252521 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.252646 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Using SSH client type: external
	I0328 01:03:12.252677 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa (-rw-------)
	I0328 01:03:12.252709 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.224 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:03:12.252731 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | About to run SSH command:
	I0328 01:03:12.252750 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | exit 0
	I0328 01:03:12.378419 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | SSH cmd err, output: <nil>: 
	I0328 01:03:12.378866 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetConfigRaw
	I0328 01:03:12.379659 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:12.382631 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.382997 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.383023 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.383276 1131600 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/config.json ...
	I0328 01:03:12.383534 1131600 machine.go:94] provisionDockerMachine start ...
	I0328 01:03:12.383567 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:12.383805 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.386472 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.386839 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.386870 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.387035 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.387240 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.387399 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.387577 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.387729 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:12.387931 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:12.387943 1131600 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:03:12.499608 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:03:12.499644 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetMachineName
	I0328 01:03:12.499930 1131600 buildroot.go:166] provisioning hostname "default-k8s-diff-port-283961"
	I0328 01:03:12.499962 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetMachineName
	I0328 01:03:12.500154 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.502737 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.503089 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.503120 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.503295 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.503516 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.503725 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.503892 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.504093 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:12.504271 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:12.504285 1131600 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-283961 && echo "default-k8s-diff-port-283961" | sudo tee /etc/hostname
	I0328 01:03:12.625590 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-283961
	
	I0328 01:03:12.625624 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.628570 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.628883 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.628968 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.629143 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.629397 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.629627 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.629825 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.630008 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:12.630191 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:12.630210 1131600 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-283961' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-283961/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-283961' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:03:12.744240 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:03:12.744280 1131600 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:03:12.744327 1131600 buildroot.go:174] setting up certificates
	I0328 01:03:12.744342 1131600 provision.go:84] configureAuth start
	I0328 01:03:12.744361 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetMachineName
	I0328 01:03:12.744722 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:12.747139 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.747448 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.747478 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.747582 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.749705 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.749964 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.749995 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.750125 1131600 provision.go:143] copyHostCerts
	I0328 01:03:12.750203 1131600 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:03:12.750217 1131600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:03:12.750323 1131600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:03:12.750435 1131600 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:03:12.750446 1131600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:03:12.750479 1131600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:03:12.750557 1131600 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:03:12.750567 1131600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:03:12.750599 1131600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:03:12.750670 1131600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-283961 san=[127.0.0.1 192.168.39.224 default-k8s-diff-port-283961 localhost minikube]
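The server certificate generated above embeds the SAN list shown (loopback, the VM IP, the profile name, localhost, minikube). To double-check which names actually ended up in server.pem, one could run on the host, for example:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'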
	I0328 01:03:12.963182 1131600 provision.go:177] copyRemoteCerts
	I0328 01:03:12.963265 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:03:12.963313 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.965946 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.966177 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.966207 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.966347 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.966573 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.966773 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.966934 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.057477 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:03:13.083706 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0328 01:03:13.109167 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:03:13.136835 1131600 provision.go:87] duration metric: took 392.475069ms to configureAuth
	I0328 01:03:13.136867 1131600 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:03:13.137048 1131600 config.go:182] Loaded profile config "default-k8s-diff-port-283961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:03:13.137131 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.139508 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.139761 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.139792 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.139959 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.140170 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.140343 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.140502 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.140685 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:13.140873 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:13.140897 1131600 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:03:13.422372 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:03:13.422405 1131600 machine.go:97] duration metric: took 1.038857021s to provisionDockerMachine
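The command a few lines up writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts crio; the assumption (not shown in this log) is that the ISO's crio.service sources that file as an EnvironmentFile so the --insecure-registry flag reaches the daemon. Checking that wiring on the node would be, roughly:

    systemctl cat crio | grep -n 'EnvironmentFile\|crio.minikube'
    cat /etc/sysconfig/crio.minikube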
	I0328 01:03:13.422418 1131600 start.go:293] postStartSetup for "default-k8s-diff-port-283961" (driver="kvm2")
	I0328 01:03:13.422428 1131600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:03:13.422456 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.422788 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:03:13.422819 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.425539 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.425865 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.425894 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.426023 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.426225 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.426407 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.426577 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.511874 1131600 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:03:13.516643 1131600 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:03:13.516673 1131600 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:03:13.516749 1131600 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:03:13.516846 1131600 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:03:13.516969 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:03:13.529004 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:13.557244 1131600 start.go:296] duration metric: took 134.810243ms for postStartSetup
	I0328 01:03:13.557289 1131600 fix.go:56] duration metric: took 20.165726422s for fixHost
	I0328 01:03:13.557313 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.560216 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.560585 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.560623 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.560803 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.561050 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.561188 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.561303 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.561552 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:13.561742 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:13.561757 1131600 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0328 01:03:13.671545 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587793.617322674
	
	I0328 01:03:13.671570 1131600 fix.go:216] guest clock: 1711587793.617322674
	I0328 01:03:13.671578 1131600 fix.go:229] Guest: 2024-03-28 01:03:13.617322674 +0000 UTC Remote: 2024-03-28 01:03:13.55729386 +0000 UTC m=+187.934897846 (delta=60.028814ms)
	I0328 01:03:13.671632 1131600 fix.go:200] guest clock delta is within tolerance: 60.028814ms
	I0328 01:03:13.671642 1131600 start.go:83] releasing machines lock for "default-k8s-diff-port-283961", held for 20.280118311s
	I0328 01:03:13.671673 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.671976 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:13.674978 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.675384 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.675436 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.675562 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.676167 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.676337 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.676436 1131600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:03:13.676501 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.676557 1131600 ssh_runner.go:195] Run: cat /version.json
	I0328 01:03:13.676578 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.679418 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679452 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679758 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.679785 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679813 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.679832 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679986 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.680089 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.680190 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.680255 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.680345 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.680410 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.680517 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.680608 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.759826 1131600 ssh_runner.go:195] Run: systemctl --version
	I0328 01:03:13.796647 1131600 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:03:13.947036 1131600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:03:13.954165 1131600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:03:13.954265 1131600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:03:13.973503 1131600 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:03:13.973538 1131600 start.go:494] detecting cgroup driver to use...
	I0328 01:03:13.973629 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:03:13.997675 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:03:14.015349 1131600 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:03:14.015421 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:03:14.031099 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:03:14.046446 1131600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:03:14.186993 1131600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:03:14.351164 1131600 docker.go:233] disabling docker service ...
	I0328 01:03:14.351232 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:03:14.370629 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:03:14.387837 1131600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:03:14.544060 1131600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:03:14.707699 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:03:14.725658 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:03:14.746063 1131600 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 01:03:14.746141 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.759244 1131600 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:03:14.759317 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.773015 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.786810 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.807101 1131600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:03:14.821013 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.834181 1131600 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.861163 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.874274 1131600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:03:14.885890 1131600 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:03:14.885968 1131600 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:03:14.903142 1131600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:03:14.916364 1131600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:15.073343 1131600 ssh_runner.go:195] Run: sudo systemctl restart crio
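	The sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup_manager, conmon_cgroup, default_sysctls) and then restarts CRI-O. Purely as an illustration of that kind of line-level substitution, here is a small stand-alone Go sketch; it is not minikube's ssh_runner code, and the sample drop-in contents are invented:

// Illustrative sketch only: apply the same substitutions the log performs
// with sed, but on a CRI-O drop-in held in memory.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.runtime]
cgroup_manager = "systemd"
pause_image = "registry.k8s.io/pause:3.8"
`
	// Force the cgroupfs cgroup manager, mirroring
	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Pin the pause image that the log configures.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	fmt.Print(conf)
}

	After a change like this the runtime has to be reloaded, which is what the daemon-reload and "systemctl restart crio" lines above do.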
	I0328 01:03:15.218406 1131600 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:03:15.218500 1131600 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:03:15.226299 1131600 start.go:562] Will wait 60s for crictl version
	I0328 01:03:15.226373 1131600 ssh_runner.go:195] Run: which crictl
	I0328 01:03:15.232051 1131600 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:03:15.278793 1131600 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:03:15.278903 1131600 ssh_runner.go:195] Run: crio --version
	I0328 01:03:15.313408 1131600 ssh_runner.go:195] Run: crio --version
	I0328 01:03:15.351613 1131600 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0328 01:03:15.353013 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:15.355924 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:15.356306 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:15.356341 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:15.356555 1131600 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0328 01:03:15.361194 1131600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:15.380926 1131600 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:03:15.381043 1131600 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 01:03:15.381099 1131600 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:15.423322 1131600 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0328 01:03:15.423409 1131600 ssh_runner.go:195] Run: which lz4
	I0328 01:03:15.428123 1131600 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0328 01:03:15.433023 1131600 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 01:03:15.433065 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0328 01:03:13.696314 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Start
	I0328 01:03:13.696506 1130827 main.go:141] libmachine: (no-preload-248059) Ensuring networks are active...
	I0328 01:03:13.697344 1130827 main.go:141] libmachine: (no-preload-248059) Ensuring network default is active
	I0328 01:03:13.697668 1130827 main.go:141] libmachine: (no-preload-248059) Ensuring network mk-no-preload-248059 is active
	I0328 01:03:13.698009 1130827 main.go:141] libmachine: (no-preload-248059) Getting domain xml...
	I0328 01:03:13.698805 1130827 main.go:141] libmachine: (no-preload-248059) Creating domain...
	I0328 01:03:14.955922 1130827 main.go:141] libmachine: (no-preload-248059) Waiting to get IP...
	I0328 01:03:14.957088 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:14.957534 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:14.957660 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:14.957533 1132389 retry.go:31] will retry after 222.894093ms: waiting for machine to come up
	I0328 01:03:15.182078 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:15.182541 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:15.182580 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:15.182528 1132389 retry.go:31] will retry after 263.74163ms: waiting for machine to come up
	I0328 01:03:15.448081 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:15.448653 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:15.448684 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:15.448586 1132389 retry.go:31] will retry after 444.066222ms: waiting for machine to come up
	I0328 01:03:15.894141 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:15.894695 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:15.894732 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:15.894650 1132389 retry.go:31] will retry after 469.421771ms: waiting for machine to come up
	I0328 01:03:14.413443 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:16.418789 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:15.568507 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:16.068210 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:16.568761 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:17.067929 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:17.568403 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:18.068454 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:18.568086 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:19.068049 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:19.569020 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:20.068068 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:17.139682 1131600 crio.go:462] duration metric: took 1.71160157s to copy over tarball
	I0328 01:03:17.139764 1131600 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 01:03:19.581198 1131600 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.441406061s)
	I0328 01:03:19.581229 1131600 crio.go:469] duration metric: took 2.441510253s to extract the tarball
	I0328 01:03:19.581241 1131600 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0328 01:03:19.620964 1131600 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:19.666765 1131600 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 01:03:19.666791 1131600 cache_images.go:84] Images are preloaded, skipping loading
	I0328 01:03:19.666802 1131600 kubeadm.go:928] updating node { 192.168.39.224 8444 v1.29.3 crio true true} ...
	I0328 01:03:19.666921 1131600 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-283961 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.224
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:03:19.666987 1131600 ssh_runner.go:195] Run: crio config
	I0328 01:03:19.716082 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:03:19.716106 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:19.716115 1131600 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:03:19.716139 1131600 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.224 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-283961 NodeName:default-k8s-diff-port-283961 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.224"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.224 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 01:03:19.716323 1131600 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.224
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-283961"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.224
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.224"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
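
	The kubeadm.yaml rendered above is a multi-document manifest: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by "---". A tiny illustrative Go snippet (not minikube code; the embedded manifest is a trimmed stand-in for the real file) that splits such a file and reports each document's kind:

// Split a multi-document kubeadm manifest and print the kind of each document.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

func main() {
	manifest := `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`
	kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
	for i, doc := range strings.Split(manifest, "\n---\n") {
		if m := kindRe.FindStringSubmatch(doc); m != nil {
			fmt.Printf("document %d: %s\n", i+1, m[1])
		}
	}
}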
	
	I0328 01:03:19.716399 1131600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 01:03:19.727826 1131600 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:03:19.727913 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:03:19.738525 1131600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0328 01:03:19.756732 1131600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:03:19.776665 1131600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0328 01:03:19.795756 1131600 ssh_runner.go:195] Run: grep 192.168.39.224	control-plane.minikube.internal$ /etc/hosts
	I0328 01:03:19.800097 1131600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.224	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:19.813019 1131600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:19.946740 1131600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:03:19.964216 1131600 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961 for IP: 192.168.39.224
	I0328 01:03:19.964244 1131600 certs.go:194] generating shared ca certs ...
	I0328 01:03:19.964262 1131600 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:19.964448 1131600 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:03:19.964524 1131600 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:03:19.964538 1131600 certs.go:256] generating profile certs ...
	I0328 01:03:19.964648 1131600 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/client.key
	I0328 01:03:19.964735 1131600 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/apiserver.key.22bfb146
	I0328 01:03:19.964810 1131600 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/proxy-client.key
	I0328 01:03:19.964956 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:03:19.965008 1131600 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:03:19.965021 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:03:19.965058 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:03:19.965091 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:03:19.965113 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:03:19.965154 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:19.966026 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:03:19.998578 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:03:20.042666 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:03:20.075405 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:03:20.117888 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0328 01:03:20.145160 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:03:20.178207 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:03:20.208610 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:03:20.235356 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:03:20.262434 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:03:20.291315 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:03:20.318034 1131600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:03:20.337627 1131600 ssh_runner.go:195] Run: openssl version
	I0328 01:03:20.344242 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:03:20.360732 1131600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:20.365858 1131600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:20.365926 1131600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:20.372120 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 01:03:20.384554 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:03:20.401731 1131600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:03:20.406945 1131600 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:03:20.407024 1131600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:03:20.414661 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:03:20.427573 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:03:20.439807 1131600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:03:20.445064 1131600 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:03:20.445138 1131600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:03:20.451754 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:03:20.464988 1131600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:03:20.470461 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:03:20.477200 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:03:20.484238 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:03:20.491125 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:03:20.497888 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:03:20.504680 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
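	The openssl x509 -checkend 86400 runs above ask whether each control-plane certificate expires within the next 24 hours. A minimal stdlib-only Go sketch of the same check (a hypothetical helper, not minikube's certs code; the path in main is only an example lifted from the log):

// expiresWithin reports whether the PEM certificate at path expires within d.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}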
	I0328 01:03:20.511372 1131600 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:03:20.511477 1131600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:03:20.511542 1131600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:20.552247 1131600 cri.go:89] found id: ""
	I0328 01:03:20.552345 1131600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:03:20.564906 1131600 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:03:20.564937 1131600 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:03:20.564944 1131600 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:03:20.565002 1131600 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:03:20.576394 1131600 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:03:20.593699 1131600 kubeconfig.go:125] found "default-k8s-diff-port-283961" server: "https://192.168.39.224:8444"
	I0328 01:03:20.595978 1131600 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:03:20.609519 1131600 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.224
	I0328 01:03:20.609565 1131600 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:03:20.609583 1131600 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:03:20.609651 1131600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:20.651892 1131600 cri.go:89] found id: ""
	I0328 01:03:20.651967 1131600 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:03:20.671895 1131600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:03:16.365505 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:16.366404 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:16.366435 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:16.366360 1132389 retry.go:31] will retry after 488.383898ms: waiting for machine to come up
	I0328 01:03:16.856125 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:16.856727 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:16.856761 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:16.856626 1132389 retry.go:31] will retry after 617.77144ms: waiting for machine to come up
	I0328 01:03:17.476749 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:17.477351 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:17.477386 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:17.477282 1132389 retry.go:31] will retry after 835.951988ms: waiting for machine to come up
	I0328 01:03:18.315387 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:18.315894 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:18.315925 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:18.315848 1132389 retry.go:31] will retry after 1.405695765s: waiting for machine to come up
	I0328 01:03:19.723053 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:19.723559 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:19.723591 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:19.723473 1132389 retry.go:31] will retry after 1.555358462s: waiting for machine to come up
	I0328 01:03:18.913403 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:21.599662 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:20.568464 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:21.068983 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:21.568470 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:22.068772 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:22.568940 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:23.068907 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:23.568272 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.068055 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.568056 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:25.068006 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:20.685320 1131600 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:03:21.187521 1131600 kubeadm.go:156] found existing configuration files:
	
	I0328 01:03:21.187587 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0328 01:03:21.200463 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:03:21.200533 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:03:21.212763 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0328 01:03:21.224344 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:03:21.224419 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:03:21.235869 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0328 01:03:21.245970 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:03:21.246045 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:03:21.258589 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0328 01:03:21.270651 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:03:21.270724 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:03:21.283074 1131600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:03:21.295811 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:21.668224 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.046357 1131600 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.378083996s)
	I0328 01:03:23.046401 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.271959 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.353976 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.501611 1131600 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:03:23.501734 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.002619 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.502614 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.547383 1131600 api_server.go:72] duration metric: took 1.045771287s to wait for apiserver process to appear ...
	I0328 01:03:24.547419 1131600 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:03:24.547447 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:24.548081 1131600 api_server.go:269] stopped: https://192.168.39.224:8444/healthz: Get "https://192.168.39.224:8444/healthz": dial tcp 192.168.39.224:8444: connect: connection refused
	I0328 01:03:25.047885 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
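	From here the log polls https://192.168.39.224:8444/healthz until the restarted apiserver answers (connection refused at first, then 403 for the anonymous user, then 500 while post-start hooks finish). A rough stand-alone Go sketch of such a polling loop, with assumed timeout and backoff values and TLS verification skipped only to keep it self-contained (a real client would trust the cluster CA):

// Poll an apiserver healthz endpoint until it returns 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.39.224:8444/healthz" // address taken from the log above
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz not reachable yet:", err)
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}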
	I0328 01:03:21.279945 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:21.590947 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:21.590967 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:21.280358 1132389 retry.go:31] will retry after 1.905587467s: waiting for machine to come up
	I0328 01:03:23.187571 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:23.188214 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:23.188248 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:23.188159 1132389 retry.go:31] will retry after 2.68043246s: waiting for machine to come up
	I0328 01:03:25.871414 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:25.871997 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:25.872030 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:25.871956 1132389 retry.go:31] will retry after 2.689404788s: waiting for machine to come up
	I0328 01:03:23.913816 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:26.413616 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:27.352533 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:03:27.352570 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:03:27.352589 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:27.453408 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:27.453448 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:27.547781 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:27.552703 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:27.552738 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:28.048135 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:28.053291 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:28.053322 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:28.548374 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:28.553141 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:28.553178 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:29.047609 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:29.053027 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 200:
	ok
	I0328 01:03:29.060710 1131600 api_server.go:141] control plane version: v1.29.3
	I0328 01:03:29.060747 1131600 api_server.go:131] duration metric: took 4.513320481s to wait for apiserver health ...
	I0328 01:03:29.060757 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:03:29.060764 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:29.062763 1131600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:03:25.568927 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:26.068371 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:26.568107 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:27.068037 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:27.567985 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:28.068036 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:28.568843 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:29.068483 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:29.568942 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:30.068849 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:29.064492 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:03:29.089164 1131600 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:03:29.115071 1131600 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:03:29.126819 1131600 system_pods.go:59] 8 kube-system pods found
	I0328 01:03:29.126871 1131600 system_pods.go:61] "coredns-76f75df574-79cdj" [48ffe344-a386-4904-a73e-56e3ce0a8bef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:03:29.126885 1131600 system_pods.go:61] "etcd-default-k8s-diff-port-283961" [1d8fc768-e39c-4c96-bd65-2ae76fc9c6ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 01:03:29.126898 1131600 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-283961" [7c5c9f85-f16f-4248-8d2d-73c1ed2b0128] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 01:03:29.126912 1131600 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-283961" [2e943e7b-5506-4797-9e77-4a33e06056fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 01:03:29.126931 1131600 system_pods.go:61] "kube-proxy-d776v" [c1c86f61-b074-4a51-89e6-17c7b1076748] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:03:29.126944 1131600 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-283961" [8a840579-4145-4b68-ab3f-b1ebd3d63e81] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 01:03:29.126956 1131600 system_pods.go:61] "metrics-server-57f55c9bc5-w4ww4" [6d60f9e6-8ac7-4fad-91dc-61520586666c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:03:29.126968 1131600 system_pods.go:61] "storage-provisioner" [2b5e2e68-7e7c-46ec-bcec-ff9b01cbb8d9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:03:29.126979 1131600 system_pods.go:74] duration metric: took 11.875076ms to wait for pod list to return data ...
	I0328 01:03:29.126992 1131600 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:03:29.130927 1131600 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:03:29.130971 1131600 node_conditions.go:123] node cpu capacity is 2
	I0328 01:03:29.130986 1131600 node_conditions.go:105] duration metric: took 3.984383ms to run NodePressure ...
	I0328 01:03:29.131011 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:29.421513 1131600 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 01:03:29.426043 1131600 kubeadm.go:733] kubelet initialised
	I0328 01:03:29.426104 1131600 kubeadm.go:734] duration metric: took 4.524275ms waiting for restarted kubelet to initialise ...
	I0328 01:03:29.426114 1131600 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:03:29.432378 1131600 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-79cdj" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:28.563249 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:28.563778 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:28.563808 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:28.563718 1132389 retry.go:31] will retry after 2.919225956s: waiting for machine to come up
	I0328 01:03:28.913653 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:30.914379 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:31.484584 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.485027 1130827 main.go:141] libmachine: (no-preload-248059) Found IP for machine: 192.168.61.107
	I0328 01:03:31.485048 1130827 main.go:141] libmachine: (no-preload-248059) Reserving static IP address...
	I0328 01:03:31.485065 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has current primary IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.485584 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "no-preload-248059", mac: "52:54:00:58:33:e2", ip: "192.168.61.107"} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.485617 1130827 main.go:141] libmachine: (no-preload-248059) Reserved static IP address: 192.168.61.107
	I0328 01:03:31.485638 1130827 main.go:141] libmachine: (no-preload-248059) DBG | skip adding static IP to network mk-no-preload-248059 - found existing host DHCP lease matching {name: "no-preload-248059", mac: "52:54:00:58:33:e2", ip: "192.168.61.107"}
	I0328 01:03:31.485651 1130827 main.go:141] libmachine: (no-preload-248059) Waiting for SSH to be available...
	I0328 01:03:31.485671 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Getting to WaitForSSH function...
	I0328 01:03:31.487909 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.488293 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.488322 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.488469 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Using SSH client type: external
	I0328 01:03:31.488506 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa (-rw-------)
	I0328 01:03:31.488531 1130827 main.go:141] libmachine: (no-preload-248059) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:03:31.488541 1130827 main.go:141] libmachine: (no-preload-248059) DBG | About to run SSH command:
	I0328 01:03:31.488555 1130827 main.go:141] libmachine: (no-preload-248059) DBG | exit 0
	I0328 01:03:31.618358 1130827 main.go:141] libmachine: (no-preload-248059) DBG | SSH cmd err, output: <nil>: 
	I0328 01:03:31.618786 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetConfigRaw
	I0328 01:03:31.619494 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:31.622183 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.622555 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.622584 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.622889 1130827 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/config.json ...
	I0328 01:03:31.623120 1130827 machine.go:94] provisionDockerMachine start ...
	I0328 01:03:31.623147 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:31.623400 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:31.626078 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.626432 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.626458 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.626663 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:31.626864 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.627031 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.627179 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:31.627380 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:31.627595 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:31.627611 1130827 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:03:31.739662 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:03:31.739699 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:03:31.740049 1130827 buildroot.go:166] provisioning hostname "no-preload-248059"
	I0328 01:03:31.740086 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:03:31.740421 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:31.743410 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.743776 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.743811 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.744001 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:31.744212 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.744394 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.744515 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:31.744669 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:31.744846 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:31.744860 1130827 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-248059 && echo "no-preload-248059" | sudo tee /etc/hostname
	I0328 01:03:31.869330 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-248059
	
	I0328 01:03:31.869368 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:31.872451 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.872817 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.872868 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.873159 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:31.873405 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.873632 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.873803 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:31.873982 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:31.874220 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:31.874268 1130827 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-248059' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-248059/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-248059' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:03:31.997509 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:03:31.997543 1130827 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:03:31.997565 1130827 buildroot.go:174] setting up certificates
	I0328 01:03:31.997573 1130827 provision.go:84] configureAuth start
	I0328 01:03:31.997583 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:03:31.997870 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:32.000739 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.001127 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.001162 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.001306 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.003571 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.003958 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.003988 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.004162 1130827 provision.go:143] copyHostCerts
	I0328 01:03:32.004246 1130827 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:03:32.004261 1130827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:03:32.004329 1130827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:03:32.004442 1130827 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:03:32.004454 1130827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:03:32.004486 1130827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:03:32.004562 1130827 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:03:32.004572 1130827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:03:32.004602 1130827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:03:32.004667 1130827 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.no-preload-248059 san=[127.0.0.1 192.168.61.107 localhost minikube no-preload-248059]
	I0328 01:03:32.206585 1130827 provision.go:177] copyRemoteCerts
	I0328 01:03:32.206657 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:03:32.206691 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.210170 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.210636 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.210676 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.210979 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.211187 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.211364 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.211564 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:32.305858 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:03:32.337654 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0328 01:03:32.368942 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0328 01:03:32.401639 1130827 provision.go:87] duration metric: took 404.051415ms to configureAuth
	I0328 01:03:32.401669 1130827 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:03:32.401936 1130827 config.go:182] Loaded profile config "no-preload-248059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0328 01:03:32.402025 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.404890 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.405352 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.405387 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.405588 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.405858 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.406091 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.406303 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.406510 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:32.406731 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:32.406759 1130827 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:03:32.697738 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:03:32.697768 1130827 machine.go:97] duration metric: took 1.074632092s to provisionDockerMachine
	I0328 01:03:32.697781 1130827 start.go:293] postStartSetup for "no-preload-248059" (driver="kvm2")
	I0328 01:03:32.697795 1130827 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:03:32.697812 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.698263 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:03:32.698298 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.701020 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.701421 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.701450 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.701609 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.701837 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.702010 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.702188 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:32.790670 1130827 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:03:32.795098 1130827 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:03:32.795131 1130827 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:03:32.795222 1130827 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:03:32.795297 1130827 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:03:32.795402 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:03:32.806276 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:32.832753 1130827 start.go:296] duration metric: took 134.954685ms for postStartSetup
	I0328 01:03:32.832801 1130827 fix.go:56] duration metric: took 19.16097847s for fixHost
	I0328 01:03:32.832825 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.835830 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.836199 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.836237 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.836472 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.836707 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.836949 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.837104 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.837339 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:32.837551 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:32.837563 1130827 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 01:03:32.947440 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587812.922631180
	
	I0328 01:03:32.947477 1130827 fix.go:216] guest clock: 1711587812.922631180
	I0328 01:03:32.947486 1130827 fix.go:229] Guest: 2024-03-28 01:03:32.92263118 +0000 UTC Remote: 2024-03-28 01:03:32.832804811 +0000 UTC m=+356.715929719 (delta=89.826369ms)
	I0328 01:03:32.947507 1130827 fix.go:200] guest clock delta is within tolerance: 89.826369ms
	I0328 01:03:32.947512 1130827 start.go:83] releasing machines lock for "no-preload-248059", held for 19.275724068s
	I0328 01:03:32.947531 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.947805 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:32.950439 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.950814 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.950844 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.950992 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.951517 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.951707 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.951809 1130827 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:03:32.951852 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.951938 1130827 ssh_runner.go:195] Run: cat /version.json
	I0328 01:03:32.951964 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.954721 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955058 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955135 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.955165 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955314 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.955473 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.955512 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.955538 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955622 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.955698 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.955809 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:32.955859 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.956001 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.956134 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:33.079381 1130827 ssh_runner.go:195] Run: systemctl --version
	I0328 01:03:33.086184 1130827 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:03:33.241799 1130827 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:03:33.248779 1130827 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:03:33.248893 1130827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:03:33.267944 1130827 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:03:33.267977 1130827 start.go:494] detecting cgroup driver to use...
	I0328 01:03:33.268082 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:03:33.286132 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:03:33.301676 1130827 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:03:33.301762 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:03:33.317202 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:03:33.333162 1130827 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:03:33.458738 1130827 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:03:33.608509 1130827 docker.go:233] disabling docker service ...
	I0328 01:03:33.608623 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:03:33.626616 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:03:33.641798 1130827 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:03:33.808865 1130827 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:03:33.962636 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:03:33.978138 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:03:34.002323 1130827 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 01:03:34.002404 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.014483 1130827 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:03:34.014589 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.028647 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.041601 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.054993 1130827 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:03:34.066671 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.079389 1130827 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.099660 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.112379 1130827 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:03:34.123050 1130827 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:03:34.123109 1130827 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:03:34.137132 1130827 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:03:34.147092 1130827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:34.282367 1130827 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 01:03:34.436510 1130827 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:03:34.436599 1130827 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:03:34.443019 1130827 start.go:562] Will wait 60s for crictl version
	I0328 01:03:34.443092 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:34.447740 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:03:34.488366 1130827 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:03:34.488469 1130827 ssh_runner.go:195] Run: crio --version
	I0328 01:03:34.520940 1130827 ssh_runner.go:195] Run: crio --version
	I0328 01:03:34.557953 1130827 out.go:177] * Preparing Kubernetes v1.30.0-beta.0 on CRI-O 1.29.1 ...
	I0328 01:03:30.568918 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:31.068097 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:31.568306 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:32.068345 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:32.568773 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:33.068072 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:33.568377 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:34.068141 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:34.568574 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:35.067986 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:31.439199 1131600 pod_ready.go:102] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:33.439575 1131600 pod_ready.go:102] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:34.559624 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:34.563089 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:34.563549 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:34.563583 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:34.563943 1130827 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0328 01:03:34.570153 1130827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:34.584566 1130827 kubeadm.go:877] updating cluster {Name:no-preload-248059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-248059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:03:34.584723 1130827 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0328 01:03:34.584786 1130827 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:34.620182 1130827 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-beta.0". assuming images are not preloaded.
	I0328 01:03:34.620215 1130827 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-beta.0 registry.k8s.io/kube-controller-manager:v1.30.0-beta.0 registry.k8s.io/kube-scheduler:v1.30.0-beta.0 registry.k8s.io/kube-proxy:v1.30.0-beta.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0328 01:03:34.620297 1130827 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:34.620312 1130827 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:34.620333 1130827 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:34.620301 1130827 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.620374 1130827 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:34.620401 1130827 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0328 01:03:34.620481 1130827 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:34.620319 1130827 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:34.621997 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.622009 1130827 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:34.622009 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:34.622052 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:34.621997 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:34.622115 1130827 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:34.621996 1130827 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:34.622438 1130827 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0328 01:03:34.832761 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.849045 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0328 01:03:34.868049 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:34.883941 1130827 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-beta.0" does not exist at hash "3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8" in container runtime
	I0328 01:03:34.883988 1130827 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.884047 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:34.884972 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:34.887551 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:34.899677 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:34.904772 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:35.045850 1130827 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0328 01:03:35.045906 1130827 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:35.045944 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.045959 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:35.064862 1130827 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" does not exist at hash "c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa" in container runtime
	I0328 01:03:35.064908 1130827 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:35.064959 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.066700 1130827 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" does not exist at hash "f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841" in container runtime
	I0328 01:03:35.066753 1130827 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:35.066820 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.097425 1130827 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" does not exist at hash "746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac" in container runtime
	I0328 01:03:35.097479 1130827 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:35.097546 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.097619 1130827 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0328 01:03:35.097667 1130827 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:35.097715 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.126977 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:35.126980 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.127020 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:35.127084 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:35.127090 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.127082 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:35.127161 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:35.264395 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:35.264499 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0 (exists)
	I0328 01:03:35.264534 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:35.264543 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.264506 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0328 01:03:35.264590 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.264631 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:35.264652 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0328 01:03:35.264516 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:35.264584 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0328 01:03:35.264717 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:35.264728 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:35.264768 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0328 01:03:35.269734 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0328 01:03:35.277344 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0 (exists)
	I0328 01:03:35.277580 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0328 01:03:35.279792 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0 (exists)
	I0328 01:03:35.280423 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0 (exists)
	I0328 01:03:35.535980 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
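The cache_images sequence above follows one pattern per image: inspect it in the container runtime, and if it is missing, remove any stale tag with crictl, copy the cached tarball to /var/lib/minikube/images if needed, and load it with podman. A self-contained sketch of that check-then-load loop, assuming plain os/exec in place of minikube's ssh_runner (names and paths are examples taken from the log):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a shell command on the node; in minikube this goes through
// ssh_runner over SSH, plain os/exec is used here for illustration only.
func run(cmdline string) error {
	return exec.Command("/bin/bash", "-c", cmdline).Run()
}

// ensureImage mirrors the pattern visible in the log: if the runtime does not
// already have the image, drop any stale reference and load the cached
// tarball with podman.
func ensureImage(name, tarball string) error {
	if run("sudo podman image inspect --format {{.Id}} "+name) == nil {
		return nil // image already present in the container runtime
	}
	_ = run("sudo /usr/bin/crictl rmi " + name) // remove stale tags; errors ignored
	return run("sudo podman load -i " + tarball)
}

func main() {
	err := ensureImage("registry.k8s.io/etcd:3.5.12-0", "/var/lib/minikube/images/etcd_3.5.12-0")
	fmt.Println(err)
}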
	I0328 01:03:33.413451 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:35.414017 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:37.913609 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:35.568345 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:36.068227 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:36.568528 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:37.068834 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:37.568407 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:38.068142 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:38.568732 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:39.068094 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:39.568799 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:40.068973 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:35.940767 1131600 pod_ready.go:102] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:37.440919 1131600 pod_ready.go:92] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:37.440949 1131600 pod_ready.go:81] duration metric: took 8.008542386s for pod "coredns-76f75df574-79cdj" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:37.440963 1131600 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:39.452822 1131600 pod_ready.go:102] pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:40.467937 1131600 pod_ready.go:92] pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.467973 1131600 pod_ready.go:81] duration metric: took 3.027001179s for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.467987 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.491342 1131600 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.491373 1131600 pod_ready.go:81] duration metric: took 23.375914ms for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.491387 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.511379 1131600 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.511414 1131600 pod_ready.go:81] duration metric: took 20.018124ms for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.511430 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d776v" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.526689 1131600 pod_ready.go:92] pod "kube-proxy-d776v" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.526724 1131600 pod_ready.go:81] duration metric: took 15.28424ms for pod "kube-proxy-d776v" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.526738 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:37.431690 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0: (2.167073369s)
	I0328 01:03:37.431729 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 from cache
	I0328 01:03:37.431755 1130827 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0328 01:03:37.431764 1130827 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.895749302s)
	I0328 01:03:37.431805 1130827 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0328 01:03:37.431811 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0328 01:03:37.431837 1130827 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:37.431870 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:39.913936 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:42.412656 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:40.568441 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:41.068790 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:41.568919 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:42.068166 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:42.568012 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:43.068027 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:43.568916 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:44.067940 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:44.568074 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:45.068786 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:42.535179 1131600 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:44.034128 1131600 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:44.034164 1131600 pod_ready.go:81] duration metric: took 3.507415677s for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:44.034175 1131600 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace to be "Ready" ...
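The pod_ready.go lines interleaved through this section poll each control-plane pod until its Ready condition turns True (status 102 while waiting, 92 once Ready). A rough client-go sketch of that wait, under the assumption of a standard kubeconfig; the function name and polling interval are illustrative, not minikube's actual helper:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout
// expires, roughly what the pod_ready.go log lines above report.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // pod reports Ready:"True"
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(cs, "kube-system", "metrics-server-57f55c9bc5-w4ww4", 4*time.Minute))
}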
	I0328 01:03:41.523268 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.091420228s)
	I0328 01:03:41.523305 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0328 01:03:41.523330 1130827 ssh_runner.go:235] Completed: which crictl: (4.091431875s)
	I0328 01:03:41.523345 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:41.523412 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:41.523445 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:41.567312 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0328 01:03:41.567455 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0328 01:03:44.336954 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0: (2.813479223s)
	I0328 01:03:44.336991 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 from cache
	I0328 01:03:44.336994 1130827 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.769509386s)
	I0328 01:03:44.337020 1130827 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0328 01:03:44.337035 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0328 01:03:44.337080 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0328 01:03:44.414767 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:46.415110 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:45.568662 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:46.068299 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:46.568793 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:47.068929 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:47.568250 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:48.068910 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:48.568138 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:49.068128 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:49.568153 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:50.068075 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:46.042489 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:48.541049 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:50.547355 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:46.297705 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.960592772s)
	I0328 01:03:46.297744 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0328 01:03:46.297776 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:46.297828 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:47.769522 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0: (1.471661236s)
	I0328 01:03:47.769569 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 from cache
	I0328 01:03:47.769602 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:47.769656 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:50.231843 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0: (2.462162757s)
	I0328 01:03:50.231876 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 from cache
	I0328 01:03:50.231902 1130827 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0328 01:03:50.231956 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0328 01:03:48.913184 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:51.412474 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:50.568929 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:51.068812 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:51.568899 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:52.068890 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:52.568751 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:53.068406 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:53.568466 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.068039 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.568745 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:55.068690 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:53.041197 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:55.041977 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:51.188382 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0328 01:03:51.188441 1130827 cache_images.go:123] Successfully loaded all cached images
	I0328 01:03:51.188448 1130827 cache_images.go:92] duration metric: took 16.568214969s to LoadCachedImages
	I0328 01:03:51.188464 1130827 kubeadm.go:928] updating node { 192.168.61.107 8443 v1.30.0-beta.0 crio true true} ...
	I0328 01:03:51.188628 1130827 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-248059 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-248059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:03:51.188710 1130827 ssh_runner.go:195] Run: crio config
	I0328 01:03:51.237071 1130827 cni.go:84] Creating CNI manager for ""
	I0328 01:03:51.237099 1130827 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:51.237109 1130827 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:03:51.237131 1130827 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.107 APIServerPort:8443 KubernetesVersion:v1.30.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-248059 NodeName:no-preload-248059 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 01:03:51.237263 1130827 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-248059"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
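The kubeadm config printed above is generated from the cluster parameters listed in the earlier kubeadm.go:181 line (advertise address, pod CIDR, service CIDR, Kubernetes version, and so on). A minimal text/template sketch of how such a document could be rendered; the struct fields and the trimmed template are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// A cut-down ClusterConfiguration template rendered from a few parameters.
var tmpl = template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: {{.ControlPlaneEndpoint}}:{{.APIServerPort}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`))

func main() {
	params := struct {
		KubernetesVersion    string
		ControlPlaneEndpoint string
		APIServerPort        int
		PodSubnet            string
		ServiceCIDR          string
	}{"v1.30.0-beta.0", "control-plane.minikube.internal", 8443, "10.244.0.0/16", "10.96.0.0/12"}
	_ = tmpl.Execute(os.Stdout, params)
}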
	
	I0328 01:03:51.237330 1130827 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-beta.0
	I0328 01:03:51.248044 1130827 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:03:51.248113 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:03:51.257854 1130827 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0328 01:03:51.276307 1130827 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0328 01:03:51.294698 1130827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0328 01:03:51.313297 1130827 ssh_runner.go:195] Run: grep 192.168.61.107	control-plane.minikube.internal$ /etc/hosts
	I0328 01:03:51.317668 1130827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:51.330478 1130827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:51.457500 1130827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:03:51.484463 1130827 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059 for IP: 192.168.61.107
	I0328 01:03:51.484493 1130827 certs.go:194] generating shared ca certs ...
	I0328 01:03:51.484518 1130827 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:51.484718 1130827 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:03:51.484768 1130827 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:03:51.484781 1130827 certs.go:256] generating profile certs ...
	I0328 01:03:51.484910 1130827 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/client.key
	I0328 01:03:51.484989 1130827 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/apiserver.key.85d037b2
	I0328 01:03:51.485040 1130827 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/proxy-client.key
	I0328 01:03:51.485196 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:03:51.485243 1130827 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:03:51.485257 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:03:51.485292 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:03:51.485327 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:03:51.485357 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:03:51.485416 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:51.486614 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:03:51.537554 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:03:51.587256 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:03:51.620264 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:03:51.652100 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0328 01:03:51.694388 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:03:51.720913 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:03:51.747141 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0328 01:03:51.776370 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:03:51.803168 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:03:51.831138 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:03:51.857272 1130827 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:03:51.876070 1130827 ssh_runner.go:195] Run: openssl version
	I0328 01:03:51.882197 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:03:51.893560 1130827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:03:51.898293 1130827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:03:51.898361 1130827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:03:51.904549 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:03:51.918175 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:03:51.930387 1130827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:03:51.935610 1130827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:03:51.935691 1130827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:03:51.942127 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:03:51.954252 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:03:51.966727 1130827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:51.971742 1130827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:51.971810 1130827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:51.978082 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
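Each certificate above is installed the same way: copy it under /usr/share/ca-certificates, compute its OpenSSL subject-name hash, and symlink it as <hash>.0 in /etc/ssl/certs so OpenSSL-based clients find it during chain verification. A short sketch of those two steps, assuming openssl and sudo are available locally (minikube drives the same commands over SSH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCACert computes the subject-name hash of a PEM certificate and
// symlinks it as <hash>.0 under /etc/ssl/certs, mirroring the openssl x509
// -hash and ln -fs steps in the log above. Illustrative sketch only.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
}

func main() {
	fmt.Println(installCACert("/usr/share/ca-certificates/minikubeCA.pem"))
}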
	I0328 01:03:51.992233 1130827 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:03:51.997556 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:03:52.004178 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:03:52.010666 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:03:52.017076 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:03:52.023334 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:03:52.029980 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
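The -checkend 86400 runs above are how the restart path decides whether the existing control-plane certificates can be reused: openssl exits 0 only if the certificate is still valid 24 hours from now. A one-function sketch of that check (the file path in main is one of the certs named in the log):

package main

import (
	"fmt"
	"os/exec"
)

// certValidFor24h returns true if the certificate will still be valid in
// 86400 seconds, i.e. openssl x509 -checkend 86400 exits 0. Sketch only.
func certValidFor24h(path string) bool {
	err := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run()
	return err == nil
}

func main() {
	fmt.Println(certValidFor24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
}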
	I0328 01:03:52.036395 1130827 kubeadm.go:391] StartCluster: {Name:no-preload-248059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0-beta.0 ClusterName:no-preload-248059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0
m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:03:52.036483 1130827 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:03:52.036539 1130827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:52.080486 1130827 cri.go:89] found id: ""
	I0328 01:03:52.080580 1130827 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:03:52.094552 1130827 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:03:52.094583 1130827 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:03:52.094599 1130827 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:03:52.094650 1130827 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:03:52.107008 1130827 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:03:52.108200 1130827 kubeconfig.go:125] found "no-preload-248059" server: "https://192.168.61.107:8443"
	I0328 01:03:52.110536 1130827 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:03:52.122998 1130827 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.107
	I0328 01:03:52.123044 1130827 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:03:52.123090 1130827 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:03:52.123170 1130827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:52.165568 1130827 cri.go:89] found id: ""
	I0328 01:03:52.165666 1130827 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:03:52.183930 1130827 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:03:52.195188 1130827 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:03:52.195215 1130827 kubeadm.go:156] found existing configuration files:
	
	I0328 01:03:52.195271 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:03:52.205872 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:03:52.205932 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:03:52.216481 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:03:52.226719 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:03:52.226787 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:03:52.238852 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:03:52.250272 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:03:52.250341 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:03:52.262474 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:03:52.273981 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:03:52.274059 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:03:52.286028 1130827 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:03:52.297016 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:52.406981 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.521529 1130827 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.114505514s)
	I0328 01:03:53.521569 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.735728 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.808590 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.931165 1130827 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:03:53.931281 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.432358 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.931653 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.948811 1130827 api_server.go:72] duration metric: took 1.017647613s to wait for apiserver process to appear ...
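The repeated pgrep runs in this section (both from this profile and from process 1131323) are the "wait for the apiserver process to appear" step: the same pgrep command is retried on a roughly 500ms cadence until it exits 0. A simplified Go sketch of that loop, assuming local sudo/pgrep rather than minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess retries the pgrep command seen in the log until a
// kube-apiserver process started for this cluster exists or the timeout
// passes. Illustrative sketch of the api_server.go process wait.
func waitForProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
		if err := cmd.Run(); err == nil {
			return nil // pgrep exits 0 once a matching process is found
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	fmt.Println(waitForProcess(4 * time.Minute))
}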
	I0328 01:03:54.948843 1130827 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:03:54.948871 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:54.949490 1130827 api_server.go:269] stopped: https://192.168.61.107:8443/healthz: Get "https://192.168.61.107:8443/healthz": dial tcp 192.168.61.107:8443: connect: connection refused
	I0328 01:03:55.449050 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:53.413775 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:55.914095 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:57.515811 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:03:57.515852 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:03:57.515872 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:57.564527 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:03:57.564560 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:03:57.949780 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:57.955515 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0328 01:03:57.955565 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0328 01:03:58.449103 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:58.456345 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0328 01:03:58.456384 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0328 01:03:58.949575 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:58.954466 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0328 01:03:58.961213 1130827 api_server.go:141] control plane version: v1.30.0-beta.0
	I0328 01:03:58.961244 1130827 api_server.go:131] duration metric: took 4.012391589s to wait for apiserver health ...
	I0328 01:03:58.961256 1130827 cni.go:84] Creating CNI manager for ""
	I0328 01:03:58.961265 1130827 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:58.963147 1130827 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:03:55.568378 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:56.068253 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:56.568989 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:57.068709 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:57.569038 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:58.068236 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:58.568386 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:59.068971 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:59.568858 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:00.067964 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:57.043266 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:59.541626 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:58.964446 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:03:58.979425 1130827 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:03:59.042826 1130827 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:03:59.060388 1130827 system_pods.go:59] 8 kube-system pods found
	I0328 01:03:59.060429 1130827 system_pods.go:61] "coredns-7db6d8ff4d-86n4s" [71402ca8-dfa7-4caf-a422-6de9f24bf9dc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:03:59.060439 1130827 system_pods.go:61] "etcd-no-preload-248059" [954b6886-b84f-4d94-bbce-7e520142eb4b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 01:03:59.060451 1130827 system_pods.go:61] "kube-apiserver-no-preload-248059" [2d3caabe-27c2-44e7-8f52-76e03f262e2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 01:03:59.060462 1130827 system_pods.go:61] "kube-controller-manager-no-preload-248059" [30b9f4aa-c9a7-4d91-8e4d-35ad32f40425] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 01:03:59.060472 1130827 system_pods.go:61] "kube-proxy-b9qpb" [7ab4cca8-0ba2-4177-84cd-c6ac045930fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:03:59.060481 1130827 system_pods.go:61] "kube-scheduler-no-preload-248059" [4d9e45e3-d990-40d4-a4be-8384c39eb9b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 01:03:59.060493 1130827 system_pods.go:61] "metrics-server-569cc877fc-cvnrj" [063a47ac-9ceb-4521-9dde-aca02ec5e0d8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:03:59.060508 1130827 system_pods.go:61] "storage-provisioner" [0a0eb2d3-a426-4b76-8009-1a0a0e0312bb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:03:59.060518 1130827 system_pods.go:74] duration metric: took 17.666067ms to wait for pod list to return data ...
	I0328 01:03:59.060533 1130827 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:03:59.065018 1130827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:03:59.065054 1130827 node_conditions.go:123] node cpu capacity is 2
	I0328 01:03:59.065071 1130827 node_conditions.go:105] duration metric: took 4.531253ms to run NodePressure ...
	I0328 01:03:59.065097 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:59.454609 1130827 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 01:03:59.459707 1130827 kubeadm.go:733] kubelet initialised
	I0328 01:03:59.459730 1130827 kubeadm.go:734] duration metric: took 5.09757ms waiting for restarted kubelet to initialise ...
	I0328 01:03:59.459739 1130827 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:03:59.465352 1130827 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.471020 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.471054 1130827 pod_ready.go:81] duration metric: took 5.676291ms for pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.471067 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.471075 1130827 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.476393 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "etcd-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.476421 1130827 pod_ready.go:81] duration metric: took 5.333391ms for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.476430 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "etcd-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.476436 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.485889 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "kube-apiserver-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.485924 1130827 pod_ready.go:81] duration metric: took 9.481204ms for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.485937 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "kube-apiserver-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.485957 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.491064 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.491095 1130827 pod_ready.go:81] duration metric: took 5.125981ms for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.491107 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.491116 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-b9qpb" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.858724 1130827 pod_ready.go:92] pod "kube-proxy-b9qpb" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:59.858753 1130827 pod_ready.go:81] duration metric: took 367.628034ms for pod "kube-proxy-b9qpb" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.858764 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:58.413911 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:00.913297 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:02.913414 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:00.568622 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:01.067943 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:01.567964 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:02.068537 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:02.568772 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:03.068458 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:03.568943 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:04.068085 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:04.068176 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:04.112601 1131323 cri.go:89] found id: ""
	I0328 01:04:04.112631 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.112642 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:04.112650 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:04.112726 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:04.151837 1131323 cri.go:89] found id: ""
	I0328 01:04:04.151873 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.151885 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:04.151894 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:04.151965 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:04.193411 1131323 cri.go:89] found id: ""
	I0328 01:04:04.193451 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.193463 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:04.193473 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:04.193545 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:04.239623 1131323 cri.go:89] found id: ""
	I0328 01:04:04.239652 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.239662 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:04.239673 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:04.239732 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:04.279561 1131323 cri.go:89] found id: ""
	I0328 01:04:04.279600 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.279615 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:04.279627 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:04.279708 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:04.318680 1131323 cri.go:89] found id: ""
	I0328 01:04:04.318710 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.318722 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:04.318731 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:04.318797 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:04.356486 1131323 cri.go:89] found id: ""
	I0328 01:04:04.356514 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.356523 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:04.356530 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:04.356586 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:04.394281 1131323 cri.go:89] found id: ""
	I0328 01:04:04.394319 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.394334 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:04.394348 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:04.394364 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:04.458688 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:04.458729 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:04.501399 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:04.501440 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:04.556183 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:04.556225 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:04.571392 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:04.571427 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:04.709967 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:02.041555 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:04.541464 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:01.866183 1130827 pod_ready.go:102] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:03.868706 1130827 pod_ready.go:102] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:04.915667 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:07.412548 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:07.210550 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:07.224274 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:07.224345 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:07.262604 1131323 cri.go:89] found id: ""
	I0328 01:04:07.262640 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.262665 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:07.262674 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:07.262763 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:07.296868 1131323 cri.go:89] found id: ""
	I0328 01:04:07.296907 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.296918 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:07.296926 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:07.296992 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:07.333110 1131323 cri.go:89] found id: ""
	I0328 01:04:07.333149 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.333162 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:07.333171 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:07.333240 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:07.371138 1131323 cri.go:89] found id: ""
	I0328 01:04:07.371168 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.371186 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:07.371195 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:07.371259 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:07.412197 1131323 cri.go:89] found id: ""
	I0328 01:04:07.412230 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.412242 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:07.412251 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:07.412331 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:07.457021 1131323 cri.go:89] found id: ""
	I0328 01:04:07.457052 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.457070 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:07.457080 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:07.457153 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:07.517996 1131323 cri.go:89] found id: ""
	I0328 01:04:07.518026 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.518034 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:07.518040 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:07.518111 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:07.556829 1131323 cri.go:89] found id: ""
	I0328 01:04:07.556856 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.556865 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:07.556875 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:07.556890 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:07.572234 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:07.572270 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:07.648615 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:07.648641 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:07.648658 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:07.719617 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:07.719665 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:07.764053 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:07.764097 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:10.319480 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:06.542160 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:08.550725 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:06.366150 1130827 pod_ready.go:102] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:07.365200 1130827 pod_ready.go:92] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:04:07.365233 1130827 pod_ready.go:81] duration metric: took 7.506461201s for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:04:07.365256 1130827 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace to be "Ready" ...
	I0328 01:04:09.373694 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:09.413378 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:11.913400 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:10.334347 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:10.335893 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:10.375231 1131323 cri.go:89] found id: ""
	I0328 01:04:10.375263 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.375274 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:10.375281 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:10.375353 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:10.413652 1131323 cri.go:89] found id: ""
	I0328 01:04:10.413706 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.413726 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:10.413736 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:10.413805 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:10.449546 1131323 cri.go:89] found id: ""
	I0328 01:04:10.449588 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.449597 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:10.449604 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:10.449658 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:10.487518 1131323 cri.go:89] found id: ""
	I0328 01:04:10.487556 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.487570 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:10.487579 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:10.487663 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:10.525088 1131323 cri.go:89] found id: ""
	I0328 01:04:10.525124 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.525137 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:10.525146 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:10.525213 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:10.567177 1131323 cri.go:89] found id: ""
	I0328 01:04:10.567209 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.567221 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:10.567231 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:10.567302 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:10.609440 1131323 cri.go:89] found id: ""
	I0328 01:04:10.609474 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.609485 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:10.609492 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:10.609549 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:10.652466 1131323 cri.go:89] found id: ""
	I0328 01:04:10.652502 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.652516 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:10.652529 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:10.652546 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:10.737406 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:10.737451 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:10.786955 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:10.786991 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:10.843072 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:10.843114 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:10.857209 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:10.857244 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:10.950885 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:13.451542 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:13.465833 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:13.465924 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:13.503353 1131323 cri.go:89] found id: ""
	I0328 01:04:13.503386 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.503398 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:13.503407 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:13.503474 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:13.543175 1131323 cri.go:89] found id: ""
	I0328 01:04:13.543208 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.543220 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:13.543229 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:13.543287 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:13.580796 1131323 cri.go:89] found id: ""
	I0328 01:04:13.580829 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.580840 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:13.580848 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:13.580900 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:13.619483 1131323 cri.go:89] found id: ""
	I0328 01:04:13.619516 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.619529 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:13.619539 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:13.619596 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:13.654651 1131323 cri.go:89] found id: ""
	I0328 01:04:13.654683 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.654697 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:13.654705 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:13.654774 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:13.691763 1131323 cri.go:89] found id: ""
	I0328 01:04:13.691794 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.691805 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:13.691813 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:13.691881 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:13.730580 1131323 cri.go:89] found id: ""
	I0328 01:04:13.730614 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.730627 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:13.730635 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:13.730694 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:13.767802 1131323 cri.go:89] found id: ""
	I0328 01:04:13.767834 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.767848 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:13.767860 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:13.767876 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:13.815612 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:13.815653 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:13.870945 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:13.870991 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:13.891456 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:13.891506 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:14.022124 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:14.022163 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:14.022187 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:11.041196 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:13.044490 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:15.541942 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:11.873574 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:13.875251 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:14.412081 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:16.412837 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:16.604087 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:16.618872 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:16.618971 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:16.665628 1131323 cri.go:89] found id: ""
	I0328 01:04:16.665661 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.665675 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:16.665683 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:16.665780 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:16.703727 1131323 cri.go:89] found id: ""
	I0328 01:04:16.703758 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.703768 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:16.703775 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:16.703835 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:16.741425 1131323 cri.go:89] found id: ""
	I0328 01:04:16.741455 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.741464 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:16.741470 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:16.741524 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:16.782333 1131323 cri.go:89] found id: ""
	I0328 01:04:16.782373 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.782387 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:16.782398 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:16.782469 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:16.820321 1131323 cri.go:89] found id: ""
	I0328 01:04:16.820355 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.820364 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:16.820372 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:16.820429 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:16.861091 1131323 cri.go:89] found id: ""
	I0328 01:04:16.861130 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.861144 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:16.861154 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:16.861226 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:16.901347 1131323 cri.go:89] found id: ""
	I0328 01:04:16.901394 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.901408 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:16.901418 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:16.901491 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:16.944027 1131323 cri.go:89] found id: ""
	I0328 01:04:16.944067 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.944080 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:16.944093 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:16.944110 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:16.959104 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:16.959151 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:17.035432 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:17.035464 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:17.035480 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:17.116236 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:17.116276 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:17.159321 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:17.159370 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:19.711326 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:19.726016 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:19.726094 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:19.776639 1131323 cri.go:89] found id: ""
	I0328 01:04:19.776676 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.776690 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:19.776700 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:19.776782 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:19.817849 1131323 cri.go:89] found id: ""
	I0328 01:04:19.817887 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.817897 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:19.817904 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:19.817981 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:19.855055 1131323 cri.go:89] found id: ""
	I0328 01:04:19.855089 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.855102 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:19.855110 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:19.855177 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:19.895296 1131323 cri.go:89] found id: ""
	I0328 01:04:19.895332 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.895346 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:19.895354 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:19.895414 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:19.930936 1131323 cri.go:89] found id: ""
	I0328 01:04:19.930968 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.930980 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:19.930989 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:19.931067 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:19.968573 1131323 cri.go:89] found id: ""
	I0328 01:04:19.968610 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.968623 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:19.968632 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:19.968693 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:20.006130 1131323 cri.go:89] found id: ""
	I0328 01:04:20.006180 1131323 logs.go:276] 0 containers: []
	W0328 01:04:20.006195 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:20.006203 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:20.006304 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:20.043646 1131323 cri.go:89] found id: ""
	I0328 01:04:20.043678 1131323 logs.go:276] 0 containers: []
	W0328 01:04:20.043689 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:20.043701 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:20.043717 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:20.058728 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:20.058761 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:20.136392 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:20.136417 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:20.136431 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:20.214971 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:20.215015 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:20.255002 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:20.255047 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:18.041868 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:20.542175 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:16.372600 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:18.373203 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:20.374228 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:18.913596 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:20.913978 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:22.914777 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:22.810078 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:22.824083 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:22.824169 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:22.862037 1131323 cri.go:89] found id: ""
	I0328 01:04:22.862066 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.862074 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:22.862081 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:22.862141 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:22.901625 1131323 cri.go:89] found id: ""
	I0328 01:04:22.901658 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.901670 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:22.901679 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:22.901752 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:22.938858 1131323 cri.go:89] found id: ""
	I0328 01:04:22.938891 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.938903 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:22.938912 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:22.938983 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:22.978781 1131323 cri.go:89] found id: ""
	I0328 01:04:22.978818 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.978829 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:22.978837 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:22.978910 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:23.016844 1131323 cri.go:89] found id: ""
	I0328 01:04:23.016882 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.016895 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:23.016904 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:23.016975 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:23.058456 1131323 cri.go:89] found id: ""
	I0328 01:04:23.058508 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.058522 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:23.058531 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:23.058604 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:23.099368 1131323 cri.go:89] found id: ""
	I0328 01:04:23.099399 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.099408 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:23.099420 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:23.099492 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:23.135593 1131323 cri.go:89] found id: ""
	I0328 01:04:23.135634 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.135653 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:23.135665 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:23.135679 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:23.191215 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:23.191260 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:23.206849 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:23.206884 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:23.289566 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:23.289596 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:23.289618 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:23.365429 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:23.365480 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:23.042312 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.541788 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:22.872233 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.373908 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.413591 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:27.912983 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.914883 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:25.929336 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:25.929415 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:25.969452 1131323 cri.go:89] found id: ""
	I0328 01:04:25.969485 1131323 logs.go:276] 0 containers: []
	W0328 01:04:25.969497 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:25.969506 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:25.969573 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:26.008978 1131323 cri.go:89] found id: ""
	I0328 01:04:26.009006 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.009015 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:26.009022 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:26.009075 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:26.051110 1131323 cri.go:89] found id: ""
	I0328 01:04:26.051138 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.051146 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:26.051153 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:26.051213 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:26.088231 1131323 cri.go:89] found id: ""
	I0328 01:04:26.088262 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.088271 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:26.088277 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:26.088342 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:26.125741 1131323 cri.go:89] found id: ""
	I0328 01:04:26.125782 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.125794 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:26.125800 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:26.125867 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:26.163367 1131323 cri.go:89] found id: ""
	I0328 01:04:26.163406 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.163417 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:26.163426 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:26.163503 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:26.202302 1131323 cri.go:89] found id: ""
	I0328 01:04:26.202340 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.202355 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:26.202364 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:26.202422 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:26.240880 1131323 cri.go:89] found id: ""
	I0328 01:04:26.240911 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.240921 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:26.240931 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:26.240943 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:26.283151 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:26.283180 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:26.341313 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:26.341350 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:26.356762 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:26.356791 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:26.428033 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:26.428054 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:26.428066 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:29.006332 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:29.020634 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:29.020745 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:29.060812 1131323 cri.go:89] found id: ""
	I0328 01:04:29.060843 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.060852 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:29.060859 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:29.060916 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:29.100110 1131323 cri.go:89] found id: ""
	I0328 01:04:29.100139 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.100149 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:29.100155 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:29.100212 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:29.140345 1131323 cri.go:89] found id: ""
	I0328 01:04:29.140384 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.140396 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:29.140404 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:29.140479 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:29.182415 1131323 cri.go:89] found id: ""
	I0328 01:04:29.182449 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.182459 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:29.182465 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:29.182533 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:29.225177 1131323 cri.go:89] found id: ""
	I0328 01:04:29.225214 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.225225 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:29.225233 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:29.225310 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:29.265437 1131323 cri.go:89] found id: ""
	I0328 01:04:29.265471 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.265485 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:29.265493 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:29.265556 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:29.301578 1131323 cri.go:89] found id: ""
	I0328 01:04:29.301617 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.301630 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:29.301639 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:29.301719 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:29.340816 1131323 cri.go:89] found id: ""
	I0328 01:04:29.340847 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.340856 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:29.340867 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:29.340880 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:29.384658 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:29.384687 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:29.439243 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:29.439285 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:29.456179 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:29.456211 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:29.534878 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:29.534906 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:29.534927 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
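The cycle that just finished repeats throughout the rest of this log: minikube asks crictl for each expected control-plane container by name, finds none (the apiserver is not serving, which is also why the describe-nodes fallback is refused on localhost:8443), and falls back to gathering kubelet, dmesg, and CRI-O journals. The following is a minimal, self-contained Go sketch of that probe-and-gather loop, shown only as an illustration of the commands visible in the lines above; it is not minikube's actual logs.go/cri.go, and it assumes crictl, journalctl, and sudo are available on the node.

```go
// Sketch of the probe-and-gather cycle seen in the log above.
// Not minikube's implementation; commands mirror the ones logged by ssh_runner.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors `sudo crictl ps -a --quiet --name=<name>`:
// it returns matching container IDs, or an empty slice when none exist.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
		}
	}

	// Fallback log gathering, matching the commands in the log lines above.
	gathers := [][]string{
		{"sudo", "journalctl", "-u", "kubelet", "-n", "400"},
		{"sudo", "journalctl", "-u", "crio", "-n", "400"},
		{"bash", "-c", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	}
	for _, args := range gathers {
		out, _ := exec.Command(args[0], args[1:]...).CombinedOutput()
		fmt.Printf("--- %s ---\n%s\n", strings.Join(args, " "), out)
	}
}
```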
	I0328 01:04:28.041463 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:30.042506 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:27.872489 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:30.371109 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:29.913856 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:32.415699 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:32.115798 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:32.130464 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:32.130560 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:32.168846 1131323 cri.go:89] found id: ""
	I0328 01:04:32.168877 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.168887 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:32.168894 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:32.168952 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:32.208590 1131323 cri.go:89] found id: ""
	I0328 01:04:32.208622 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.208632 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:32.208638 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:32.208694 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:32.247323 1131323 cri.go:89] found id: ""
	I0328 01:04:32.247362 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.247375 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:32.247384 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:32.247507 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:32.285260 1131323 cri.go:89] found id: ""
	I0328 01:04:32.285293 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.285312 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:32.285319 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:32.285395 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:32.326678 1131323 cri.go:89] found id: ""
	I0328 01:04:32.326712 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.326725 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:32.326740 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:32.326823 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:32.363375 1131323 cri.go:89] found id: ""
	I0328 01:04:32.363403 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.363412 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:32.363419 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:32.363473 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:32.401410 1131323 cri.go:89] found id: ""
	I0328 01:04:32.401449 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.401462 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:32.401470 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:32.401558 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:32.438645 1131323 cri.go:89] found id: ""
	I0328 01:04:32.438680 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.438691 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:32.438703 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:32.438718 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:32.488743 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:32.488786 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:32.503908 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:32.503944 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:32.577307 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:32.577333 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:32.577350 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:32.657787 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:32.657832 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:35.201151 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:35.215313 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:35.215383 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:35.253467 1131323 cri.go:89] found id: ""
	I0328 01:04:35.253504 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.253515 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:35.253522 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:35.253593 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:35.290218 1131323 cri.go:89] found id: ""
	I0328 01:04:35.290280 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.290292 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:35.290300 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:35.290378 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:35.330714 1131323 cri.go:89] found id: ""
	I0328 01:04:35.330749 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.330757 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:35.330764 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:35.330831 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:32.542071 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:34.544163 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:32.372100 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:34.872293 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:34.913212 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:37.411734 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:35.371524 1131323 cri.go:89] found id: ""
	I0328 01:04:35.371553 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.371570 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:35.371577 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:35.371630 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:35.411610 1131323 cri.go:89] found id: ""
	I0328 01:04:35.411638 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.411646 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:35.411652 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:35.411711 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:35.456709 1131323 cri.go:89] found id: ""
	I0328 01:04:35.456745 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.456758 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:35.456766 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:35.456836 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:35.492688 1131323 cri.go:89] found id: ""
	I0328 01:04:35.492719 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.492729 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:35.492755 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:35.492811 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:35.531205 1131323 cri.go:89] found id: ""
	I0328 01:04:35.531234 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.531243 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:35.531254 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:35.531266 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:35.611803 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:35.611845 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:35.653513 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:35.653551 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:35.708030 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:35.708075 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:35.724542 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:35.724576 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:35.798624 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:38.299312 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:38.314128 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:38.314213 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:38.357728 1131323 cri.go:89] found id: ""
	I0328 01:04:38.357761 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.357779 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:38.357786 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:38.357848 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:38.394512 1131323 cri.go:89] found id: ""
	I0328 01:04:38.394541 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.394549 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:38.394558 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:38.394618 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:38.434353 1131323 cri.go:89] found id: ""
	I0328 01:04:38.434380 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.434391 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:38.434399 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:38.434466 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:38.477662 1131323 cri.go:89] found id: ""
	I0328 01:04:38.477693 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.477703 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:38.477710 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:38.477763 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:38.515014 1131323 cri.go:89] found id: ""
	I0328 01:04:38.515044 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.515053 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:38.515060 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:38.515117 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:38.558865 1131323 cri.go:89] found id: ""
	I0328 01:04:38.558899 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.558911 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:38.558920 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:38.558982 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:38.600261 1131323 cri.go:89] found id: ""
	I0328 01:04:38.600290 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.600299 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:38.600306 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:38.600366 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:38.637131 1131323 cri.go:89] found id: ""
	I0328 01:04:38.637167 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.637179 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:38.637194 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:38.637218 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:38.716032 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:38.716058 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:38.716079 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:38.804534 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:38.804578 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:38.851781 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:38.851820 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:38.910091 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:38.910125 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:37.041273 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:39.541843 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:37.372262 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:39.372555 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:39.912953 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:42.412667 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
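The interleaved pod_ready.go lines come from the parallel StartStop tests, each polling a metrics-server pod's Ready condition until it turns True or the wait times out. Below is a minimal client-go sketch of such a readiness check, offered as an illustration only and not minikube's actual pod_ready.go; the kubeconfig path, timeout, and pod name are placeholders taken from this log and would differ per run.

```go
// Sketch only: poll a pod's Ready condition the way the pod_ready.go lines
// above report it ("has status \"Ready\":\"False\""). Not minikube's code;
// kubeconfig path, namespace, pod name, and timeout are illustrative.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the PodReady condition is currently ConditionTrue.
func podReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		ready, err := podReady(cs, "kube-system", "metrics-server-57f55c9bc5-w4ww4")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Printf("pod has status \"Ready\":\"False\" (err=%v)\n", err)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
```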
	I0328 01:04:41.425801 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:41.441072 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:41.441168 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:41.482934 1131323 cri.go:89] found id: ""
	I0328 01:04:41.482962 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.482974 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:41.482983 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:41.483063 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:41.521762 1131323 cri.go:89] found id: ""
	I0328 01:04:41.521796 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.521810 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:41.521819 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:41.521931 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:41.560814 1131323 cri.go:89] found id: ""
	I0328 01:04:41.560847 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.560857 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:41.560864 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:41.560928 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:41.601158 1131323 cri.go:89] found id: ""
	I0328 01:04:41.601189 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.601199 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:41.601206 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:41.601271 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:41.638760 1131323 cri.go:89] found id: ""
	I0328 01:04:41.638789 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.638799 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:41.638806 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:41.638861 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:41.675235 1131323 cri.go:89] found id: ""
	I0328 01:04:41.675268 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.675278 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:41.675285 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:41.675341 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:41.712918 1131323 cri.go:89] found id: ""
	I0328 01:04:41.712957 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.712972 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:41.712983 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:41.713078 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:41.750552 1131323 cri.go:89] found id: ""
	I0328 01:04:41.750582 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.750591 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:41.750601 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:41.750617 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:41.811163 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:41.811204 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:41.826502 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:41.826547 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:41.900727 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:41.900759 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:41.900777 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:41.981731 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:41.981783 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:44.525845 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:44.542301 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:44.542389 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:44.584907 1131323 cri.go:89] found id: ""
	I0328 01:04:44.584936 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.584945 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:44.584952 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:44.585007 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:44.630465 1131323 cri.go:89] found id: ""
	I0328 01:04:44.630499 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.630511 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:44.630520 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:44.630588 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:44.669095 1131323 cri.go:89] found id: ""
	I0328 01:04:44.669131 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.669143 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:44.669152 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:44.669235 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:44.708445 1131323 cri.go:89] found id: ""
	I0328 01:04:44.708484 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.708495 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:44.708502 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:44.708570 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:44.747706 1131323 cri.go:89] found id: ""
	I0328 01:04:44.747744 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.747755 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:44.747762 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:44.747822 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:44.787768 1131323 cri.go:89] found id: ""
	I0328 01:04:44.787807 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.787821 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:44.787830 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:44.787899 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:44.829018 1131323 cri.go:89] found id: ""
	I0328 01:04:44.829049 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.829059 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:44.829066 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:44.829123 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:44.874334 1131323 cri.go:89] found id: ""
	I0328 01:04:44.874374 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.874383 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:44.874393 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:44.874405 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:44.921577 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:44.921619 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:44.976660 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:44.976713 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:44.991365 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:44.991400 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:45.067595 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:45.067630 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:45.067651 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:42.042736 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:44.543288 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:41.372902 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:43.872925 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:45.873163 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:44.913827 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:47.412342 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:47.647634 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:47.663581 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:47.663687 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:47.702889 1131323 cri.go:89] found id: ""
	I0328 01:04:47.702940 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.702954 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:47.702966 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:47.703043 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:47.744995 1131323 cri.go:89] found id: ""
	I0328 01:04:47.745027 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.745037 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:47.745044 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:47.745103 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:47.785518 1131323 cri.go:89] found id: ""
	I0328 01:04:47.785550 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.785562 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:47.785572 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:47.785645 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:47.831739 1131323 cri.go:89] found id: ""
	I0328 01:04:47.831771 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.831786 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:47.831794 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:47.831867 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:47.871864 1131323 cri.go:89] found id: ""
	I0328 01:04:47.871906 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.871918 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:47.871929 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:47.872008 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:47.907899 1131323 cri.go:89] found id: ""
	I0328 01:04:47.907934 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.907946 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:47.907955 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:47.908022 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:47.946073 1131323 cri.go:89] found id: ""
	I0328 01:04:47.946107 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.946118 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:47.946127 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:47.946223 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:47.986122 1131323 cri.go:89] found id: ""
	I0328 01:04:47.986154 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.986168 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:47.986182 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:47.986198 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:48.057234 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:48.057271 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:48.109881 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:48.109926 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:48.125154 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:48.125189 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:48.208295 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:48.208327 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:48.208345 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:47.041447 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:49.542203 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:48.371275 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:50.372057 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:49.413451 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:51.414465 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:50.785126 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:50.800000 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:50.800078 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:50.839883 1131323 cri.go:89] found id: ""
	I0328 01:04:50.839911 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.839920 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:50.839927 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:50.839983 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:50.879627 1131323 cri.go:89] found id: ""
	I0328 01:04:50.879654 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.879661 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:50.879668 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:50.879734 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:50.918392 1131323 cri.go:89] found id: ""
	I0328 01:04:50.918434 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.918446 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:50.918454 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:50.918517 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:50.957198 1131323 cri.go:89] found id: ""
	I0328 01:04:50.957234 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.957248 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:50.957257 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:50.957328 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:50.997389 1131323 cri.go:89] found id: ""
	I0328 01:04:50.997424 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.997438 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:50.997446 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:50.997513 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:51.040259 1131323 cri.go:89] found id: ""
	I0328 01:04:51.040296 1131323 logs.go:276] 0 containers: []
	W0328 01:04:51.040309 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:51.040318 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:51.040389 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:51.081824 1131323 cri.go:89] found id: ""
	I0328 01:04:51.081858 1131323 logs.go:276] 0 containers: []
	W0328 01:04:51.081868 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:51.081875 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:51.081942 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:51.119742 1131323 cri.go:89] found id: ""
	I0328 01:04:51.119783 1131323 logs.go:276] 0 containers: []
	W0328 01:04:51.119796 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:51.119810 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:51.119836 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:51.173486 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:51.173529 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:51.188532 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:51.188568 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:51.269181 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:51.269207 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:51.269226 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:51.349882 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:51.349936 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:53.893562 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:53.910104 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:53.910186 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:53.951333 1131323 cri.go:89] found id: ""
	I0328 01:04:53.951375 1131323 logs.go:276] 0 containers: []
	W0328 01:04:53.951388 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:53.951397 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:53.951472 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:53.992438 1131323 cri.go:89] found id: ""
	I0328 01:04:53.992474 1131323 logs.go:276] 0 containers: []
	W0328 01:04:53.992486 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:53.992493 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:53.992561 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:54.032934 1131323 cri.go:89] found id: ""
	I0328 01:04:54.032969 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.032982 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:54.032992 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:54.033061 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:54.074670 1131323 cri.go:89] found id: ""
	I0328 01:04:54.074707 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.074777 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:54.074801 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:54.074875 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:54.111527 1131323 cri.go:89] found id: ""
	I0328 01:04:54.111555 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.111566 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:54.111573 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:54.111658 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:54.151401 1131323 cri.go:89] found id: ""
	I0328 01:04:54.151428 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.151437 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:54.151443 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:54.151494 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:54.197997 1131323 cri.go:89] found id: ""
	I0328 01:04:54.198036 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.198048 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:54.198058 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:54.198135 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:54.234016 1131323 cri.go:89] found id: ""
	I0328 01:04:54.234049 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.234058 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:54.234068 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:54.234081 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:54.286118 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:54.286161 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:54.300489 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:54.300541 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:54.376949 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:54.376972 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:54.376988 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:54.463857 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:54.463901 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:52.041517 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:54.042088 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:52.875923 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:55.371823 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:53.912140 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:55.912329 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:57.026395 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:57.041270 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:57.041358 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:57.082380 1131323 cri.go:89] found id: ""
	I0328 01:04:57.082416 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.082428 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:57.082436 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:57.082503 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:57.121835 1131323 cri.go:89] found id: ""
	I0328 01:04:57.121870 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.121885 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:57.121894 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:57.121969 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:57.163688 1131323 cri.go:89] found id: ""
	I0328 01:04:57.163725 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.163737 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:57.163745 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:57.163819 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:57.212628 1131323 cri.go:89] found id: ""
	I0328 01:04:57.212666 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.212693 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:57.212703 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:57.212788 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:57.249196 1131323 cri.go:89] found id: ""
	I0328 01:04:57.249231 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.249244 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:57.249253 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:57.249318 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:57.286996 1131323 cri.go:89] found id: ""
	I0328 01:04:57.287031 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.287040 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:57.287047 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:57.287101 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:57.324523 1131323 cri.go:89] found id: ""
	I0328 01:04:57.324551 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.324560 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:57.324566 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:57.324627 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:57.363946 1131323 cri.go:89] found id: ""
	I0328 01:04:57.363984 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.363998 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:57.364012 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:57.364034 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:57.418300 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:57.418337 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:57.433214 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:57.433242 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:57.508623 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:57.508651 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:57.508665 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:57.586336 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:57.586377 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:00.129903 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:00.146829 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:00.146920 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:00.197823 1131323 cri.go:89] found id: ""
	I0328 01:05:00.197856 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.197865 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:00.197872 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:00.197930 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:00.257523 1131323 cri.go:89] found id: ""
	I0328 01:05:00.257561 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.257575 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:00.257584 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:00.257657 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:00.314511 1131323 cri.go:89] found id: ""
	I0328 01:05:00.314539 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.314549 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:00.314558 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:00.314610 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:56.042295 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:58.541684 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:00.543232 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:57.372451 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:59.372577 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:58.412203 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:00.412880 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:02.913222 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:00.351043 1131323 cri.go:89] found id: ""
	I0328 01:05:00.351076 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.351090 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:00.351098 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:00.351167 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:00.391477 1131323 cri.go:89] found id: ""
	I0328 01:05:00.391507 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.391519 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:00.391525 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:00.391595 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:00.436196 1131323 cri.go:89] found id: ""
	I0328 01:05:00.436230 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.436242 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:00.436249 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:00.436316 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:00.473389 1131323 cri.go:89] found id: ""
	I0328 01:05:00.473428 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.473441 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:00.473450 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:00.473523 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:00.508829 1131323 cri.go:89] found id: ""
	I0328 01:05:00.508866 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.508879 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:00.508901 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:00.508931 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:00.553709 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:00.553741 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:00.612679 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:00.612732 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:00.630908 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:00.630948 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:00.706984 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:00.707016 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:00.707034 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:03.287887 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:03.304679 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:03.304779 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:03.343579 1131323 cri.go:89] found id: ""
	I0328 01:05:03.343608 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.343618 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:03.343625 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:03.343677 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:03.387158 1131323 cri.go:89] found id: ""
	I0328 01:05:03.387192 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.387206 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:03.387224 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:03.387308 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:03.426622 1131323 cri.go:89] found id: ""
	I0328 01:05:03.426653 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.426663 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:03.426670 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:03.426724 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:03.468743 1131323 cri.go:89] found id: ""
	I0328 01:05:03.468780 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.468793 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:03.468801 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:03.468870 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:03.508518 1131323 cri.go:89] found id: ""
	I0328 01:05:03.508554 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.508566 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:03.508575 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:03.508653 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:03.548295 1131323 cri.go:89] found id: ""
	I0328 01:05:03.548331 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.548343 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:03.548353 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:03.548444 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:03.591561 1131323 cri.go:89] found id: ""
	I0328 01:05:03.591594 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.591607 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:03.591615 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:03.591670 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:03.635055 1131323 cri.go:89] found id: ""
	I0328 01:05:03.635086 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.635097 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:03.635109 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:03.635127 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:03.715639 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:03.715683 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:03.755888 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:03.755931 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:03.810128 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:03.810170 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:03.825197 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:03.825227 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:03.908589 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:03.043330 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:05.541544 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:01.372692 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:03.373747 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:05.871945 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:05.413583 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:07.912379 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:06.409060 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:06.424034 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:06.424119 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:06.461827 1131323 cri.go:89] found id: ""
	I0328 01:05:06.461888 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.461902 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:06.461911 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:06.461985 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:06.505006 1131323 cri.go:89] found id: ""
	I0328 01:05:06.505061 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.505078 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:06.505085 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:06.505145 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:06.542000 1131323 cri.go:89] found id: ""
	I0328 01:05:06.542033 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.542044 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:06.542052 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:06.542121 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:06.583725 1131323 cri.go:89] found id: ""
	I0328 01:05:06.583778 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.583800 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:06.583810 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:06.583887 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:06.620457 1131323 cri.go:89] found id: ""
	I0328 01:05:06.620501 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.620516 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:06.620524 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:06.620595 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:06.664380 1131323 cri.go:89] found id: ""
	I0328 01:05:06.664412 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.664425 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:06.664432 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:06.664502 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:06.701799 1131323 cri.go:89] found id: ""
	I0328 01:05:06.701850 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.701862 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:06.701870 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:06.701935 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:06.739899 1131323 cri.go:89] found id: ""
	I0328 01:05:06.739936 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.739948 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:06.739958 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:06.739973 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:06.814373 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:06.814404 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:06.814421 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:06.894331 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:06.894371 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:06.952912 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:06.952979 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:07.011851 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:07.011900 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:09.528068 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:09.545082 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:09.545167 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:09.586944 1131323 cri.go:89] found id: ""
	I0328 01:05:09.586983 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.586996 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:09.587004 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:09.587077 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:09.624153 1131323 cri.go:89] found id: ""
	I0328 01:05:09.624184 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.624192 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:09.624198 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:09.624256 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:09.661125 1131323 cri.go:89] found id: ""
	I0328 01:05:09.661160 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.661172 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:09.661182 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:09.661256 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:09.699865 1131323 cri.go:89] found id: ""
	I0328 01:05:09.699903 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.699916 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:09.699925 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:09.699992 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:09.737925 1131323 cri.go:89] found id: ""
	I0328 01:05:09.737958 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.737967 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:09.737973 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:09.738029 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:09.776906 1131323 cri.go:89] found id: ""
	I0328 01:05:09.776941 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.776950 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:09.776957 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:09.777021 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:09.815767 1131323 cri.go:89] found id: ""
	I0328 01:05:09.815794 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.815803 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:09.815809 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:09.815876 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:09.855880 1131323 cri.go:89] found id: ""
	I0328 01:05:09.855915 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.855928 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:09.855941 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:09.855958 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:09.918339 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:09.918376 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:09.932775 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:09.932810 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:10.011566 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:10.011594 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:10.011610 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:10.096057 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:10.096100 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:08.041230 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:10.041991 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:07.873367 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:10.372311 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:09.913349 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:12.412259 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:12.641999 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:12.655761 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:12.655843 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:12.697335 1131323 cri.go:89] found id: ""
	I0328 01:05:12.697369 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.697381 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:12.697390 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:12.697453 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:12.736482 1131323 cri.go:89] found id: ""
	I0328 01:05:12.736520 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.736534 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:12.736544 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:12.736617 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:12.771992 1131323 cri.go:89] found id: ""
	I0328 01:05:12.772034 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.772046 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:12.772055 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:12.772125 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:12.810738 1131323 cri.go:89] found id: ""
	I0328 01:05:12.810770 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.810779 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:12.810786 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:12.810837 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:12.848172 1131323 cri.go:89] found id: ""
	I0328 01:05:12.848209 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.848222 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:12.848230 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:12.848310 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:12.884660 1131323 cri.go:89] found id: ""
	I0328 01:05:12.884698 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.884710 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:12.884719 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:12.884794 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:12.926180 1131323 cri.go:89] found id: ""
	I0328 01:05:12.926209 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.926218 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:12.926244 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:12.926303 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:12.966938 1131323 cri.go:89] found id: ""
	I0328 01:05:12.966969 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.966983 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:12.966996 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:12.967014 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:13.018501 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:13.018541 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:13.033140 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:13.033175 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:13.108806 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:13.108832 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:13.108858 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:13.189198 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:13.189241 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:12.541088 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:15.041830 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:12.372413 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:14.372804 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:14.414059 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:16.912343 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:15.737415 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:15.752534 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:15.752614 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:15.789941 1131323 cri.go:89] found id: ""
	I0328 01:05:15.789974 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.789986 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:15.789994 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:15.790107 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:15.827688 1131323 cri.go:89] found id: ""
	I0328 01:05:15.827731 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.827745 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:15.827766 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:15.827831 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:15.867005 1131323 cri.go:89] found id: ""
	I0328 01:05:15.867041 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.867054 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:15.867064 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:15.867149 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:15.909924 1131323 cri.go:89] found id: ""
	I0328 01:05:15.910035 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.910055 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:15.910066 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:15.910139 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:15.950571 1131323 cri.go:89] found id: ""
	I0328 01:05:15.950606 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.950619 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:15.950632 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:15.950707 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:15.992557 1131323 cri.go:89] found id: ""
	I0328 01:05:15.992594 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.992605 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:15.992615 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:15.992687 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:16.032417 1131323 cri.go:89] found id: ""
	I0328 01:05:16.032458 1131323 logs.go:276] 0 containers: []
	W0328 01:05:16.032473 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:16.032482 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:16.032559 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:16.071399 1131323 cri.go:89] found id: ""
	I0328 01:05:16.071433 1131323 logs.go:276] 0 containers: []
	W0328 01:05:16.071445 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:16.071459 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:16.071481 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:16.147078 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:16.147113 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:16.147131 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:16.223828 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:16.223870 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:16.269377 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:16.269409 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:16.318545 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:16.318584 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:18.836044 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:18.851138 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:18.851231 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:18.887223 1131323 cri.go:89] found id: ""
	I0328 01:05:18.887260 1131323 logs.go:276] 0 containers: []
	W0328 01:05:18.887273 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:18.887283 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:18.887354 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:18.928652 1131323 cri.go:89] found id: ""
	I0328 01:05:18.928682 1131323 logs.go:276] 0 containers: []
	W0328 01:05:18.928692 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:18.928698 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:18.928756 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:18.968519 1131323 cri.go:89] found id: ""
	I0328 01:05:18.968555 1131323 logs.go:276] 0 containers: []
	W0328 01:05:18.968567 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:18.968575 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:18.968646 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:19.010939 1131323 cri.go:89] found id: ""
	I0328 01:05:19.010977 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.010991 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:19.010999 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:19.011070 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:19.048723 1131323 cri.go:89] found id: ""
	I0328 01:05:19.048748 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.048758 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:19.048769 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:19.048820 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:19.091761 1131323 cri.go:89] found id: ""
	I0328 01:05:19.091794 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.091803 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:19.091810 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:19.091863 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:19.134017 1131323 cri.go:89] found id: ""
	I0328 01:05:19.134049 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.134059 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:19.134065 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:19.134119 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:19.176070 1131323 cri.go:89] found id: ""
	I0328 01:05:19.176106 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.176118 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:19.176131 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:19.176155 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:19.261546 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:19.261584 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:19.261605 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:19.340271 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:19.340314 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:19.383625 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:19.383676 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:19.441635 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:19.441679 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:17.541876 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:20.040841 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:16.872723 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:19.372916 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:19.414384 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:21.912881 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:21.958362 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:21.974427 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:21.974528 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:22.013099 1131323 cri.go:89] found id: ""
	I0328 01:05:22.013139 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.013152 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:22.013160 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:22.013229 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:22.055558 1131323 cri.go:89] found id: ""
	I0328 01:05:22.055594 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.055604 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:22.055611 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:22.055668 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:22.106836 1131323 cri.go:89] found id: ""
	I0328 01:05:22.106870 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.106879 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:22.106886 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:22.106961 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:22.145135 1131323 cri.go:89] found id: ""
	I0328 01:05:22.145175 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.145189 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:22.145197 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:22.145266 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:22.183879 1131323 cri.go:89] found id: ""
	I0328 01:05:22.183909 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.183919 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:22.183926 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:22.183981 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:22.223087 1131323 cri.go:89] found id: ""
	I0328 01:05:22.223115 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.223124 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:22.223131 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:22.223209 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:22.263232 1131323 cri.go:89] found id: ""
	I0328 01:05:22.263262 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.263272 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:22.263279 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:22.263331 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:22.302919 1131323 cri.go:89] found id: ""
	I0328 01:05:22.302954 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.302967 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:22.302980 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:22.302998 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:22.358550 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:22.358596 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:22.374688 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:22.374722 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:22.453584 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:22.453609 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:22.453624 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:22.540983 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:22.541048 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:25.091773 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:25.107412 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:25.107484 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:25.143917 1131323 cri.go:89] found id: ""
	I0328 01:05:25.143944 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.143953 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:25.143960 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:25.144013 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:25.183615 1131323 cri.go:89] found id: ""
	I0328 01:05:25.183650 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.183659 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:25.183666 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:25.183729 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:25.221125 1131323 cri.go:89] found id: ""
	I0328 01:05:25.221158 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.221167 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:25.221174 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:25.221229 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:25.262023 1131323 cri.go:89] found id: ""
	I0328 01:05:25.262056 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.262065 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:25.262072 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:25.262134 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:25.297919 1131323 cri.go:89] found id: ""
	I0328 01:05:25.297948 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.297957 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:25.297964 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:25.298035 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:22.041977 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:24.542416 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:21.872312 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:23.872885 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:23.914459 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:25.916730 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:25.336582 1131323 cri.go:89] found id: ""
	I0328 01:05:25.336610 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.336620 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:25.336627 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:25.336690 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:25.375554 1131323 cri.go:89] found id: ""
	I0328 01:05:25.375589 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.375600 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:25.375609 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:25.375683 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:25.415941 1131323 cri.go:89] found id: ""
	I0328 01:05:25.415973 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.415984 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:25.415996 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:25.416013 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:25.430168 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:25.430196 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:25.507782 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:25.507805 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:25.507862 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:25.588780 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:25.588824 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:25.634958 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:25.634997 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:28.190651 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:28.205714 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:28.205794 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:28.242015 1131323 cri.go:89] found id: ""
	I0328 01:05:28.242056 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.242067 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:28.242077 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:28.242169 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:28.289132 1131323 cri.go:89] found id: ""
	I0328 01:05:28.289169 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.289182 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:28.289189 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:28.289256 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:28.327001 1131323 cri.go:89] found id: ""
	I0328 01:05:28.327031 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.327040 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:28.327052 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:28.327105 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:28.365474 1131323 cri.go:89] found id: ""
	I0328 01:05:28.365507 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.365516 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:28.365523 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:28.365587 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:28.405494 1131323 cri.go:89] found id: ""
	I0328 01:05:28.405553 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.405567 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:28.405576 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:28.405652 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:28.448464 1131323 cri.go:89] found id: ""
	I0328 01:05:28.448502 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.448512 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:28.448521 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:28.448594 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:28.488143 1131323 cri.go:89] found id: ""
	I0328 01:05:28.488172 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.488182 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:28.488189 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:28.488258 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:28.545977 1131323 cri.go:89] found id: ""
	I0328 01:05:28.546012 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.546024 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:28.546036 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:28.546050 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:28.629955 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:28.630001 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:28.670504 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:28.670536 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:28.722021 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:28.722069 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:28.737274 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:28.737310 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:28.824025 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
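The cycle above repeats throughout this window: the log collector runs `sudo crictl ps -a --quiet --name=<component>` for each control-plane component, gets an empty result (hence the "No container was found matching ..." warnings), and the follow-up `kubectl describe nodes` fails because nothing is listening on localhost:8443. A minimal Go sketch of that container probe, assuming only that crictl and sudo are available on the node; the helper name is illustrative and not minikube's own code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors the probe seen in the log: list containers in any
    // state whose name matches the component and return their IDs, if any.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        trimmed := strings.TrimSpace(string(out))
        if trimmed == "" {
            return nil, nil // no matching containers, as in the log above
        }
        return strings.Fields(trimmed), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Printf("probe %s: %v\n", c, err)
                continue
            }
            fmt.Printf("%s: %d container(s)\n", c, len(ids))
        }
    }

An empty ID list for kube-apiserver is exactly the state the log keeps reporting, and it explains the connection-refused errors that follow.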
	I0328 01:05:27.041205 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:29.041342 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:26.372037 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:28.373545 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:30.872569 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:28.414921 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:30.912980 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
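The interleaved `pod_ready.go:102` lines come from the other test processes (1131600, 1130827, 1130949), each waiting for its metrics-server pod to report Ready; the poll keeps logging while the PodReady condition is still "False". A minimal client-go sketch of that readiness check, assuming a kubeconfig reachable from wherever the check runs (the path below is simply the one that appears in the log); the helper name is illustrative, not the test suite's own pod_ready.go:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the named pod's PodReady condition is True.
    func podReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(namespace).Get(context.Background(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ready, err := podReady(cs, "kube-system", "metrics-server-57f55c9bc5-w4ww4")
        fmt.Println(ready, err)
    }

A pod whose containers never become ready (for example, an image that cannot be pulled) keeps this check returning false until the test's timeout expires.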
	I0328 01:05:31.324497 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:31.339715 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:31.339811 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:31.379017 1131323 cri.go:89] found id: ""
	I0328 01:05:31.379050 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.379062 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:31.379072 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:31.379138 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:31.420024 1131323 cri.go:89] found id: ""
	I0328 01:05:31.420055 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.420065 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:31.420071 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:31.420136 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:31.458732 1131323 cri.go:89] found id: ""
	I0328 01:05:31.458764 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.458773 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:31.458779 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:31.458835 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:31.504249 1131323 cri.go:89] found id: ""
	I0328 01:05:31.504280 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.504292 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:31.504300 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:31.504366 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:31.545284 1131323 cri.go:89] found id: ""
	I0328 01:05:31.545316 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.545324 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:31.545331 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:31.545385 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:31.583402 1131323 cri.go:89] found id: ""
	I0328 01:05:31.583434 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.583444 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:31.583453 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:31.583587 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:31.624411 1131323 cri.go:89] found id: ""
	I0328 01:05:31.624449 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.624462 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:31.624471 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:31.624528 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:31.666103 1131323 cri.go:89] found id: ""
	I0328 01:05:31.666144 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.666158 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:31.666173 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:31.666192 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:31.717595 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:31.717636 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:31.731606 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:31.731637 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:31.803302 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:31.803325 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:31.803339 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:31.885552 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:31.885590 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:34.432446 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:34.448002 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:34.448085 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:34.493207 1131323 cri.go:89] found id: ""
	I0328 01:05:34.493246 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.493259 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:34.493268 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:34.493337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:34.541838 1131323 cri.go:89] found id: ""
	I0328 01:05:34.541871 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.541883 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:34.541891 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:34.541956 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:34.582319 1131323 cri.go:89] found id: ""
	I0328 01:05:34.582357 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.582371 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:34.582380 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:34.582458 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:34.618753 1131323 cri.go:89] found id: ""
	I0328 01:05:34.618788 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.618801 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:34.618810 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:34.618882 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:34.656994 1131323 cri.go:89] found id: ""
	I0328 01:05:34.657027 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.657037 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:34.657043 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:34.657114 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:34.695214 1131323 cri.go:89] found id: ""
	I0328 01:05:34.695252 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.695264 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:34.695271 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:34.695337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:34.733688 1131323 cri.go:89] found id: ""
	I0328 01:05:34.733718 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.733731 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:34.733739 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:34.733808 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:34.771697 1131323 cri.go:89] found id: ""
	I0328 01:05:34.771729 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.771744 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:34.771758 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:34.771776 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:34.828190 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:34.828236 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:34.842741 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:34.842776 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:34.918494 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:34.918525 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:34.918541 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:35.012689 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:35.012747 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:31.042633 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:33.541295 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:35.541588 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:33.371991 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:35.872753 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:33.412886 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:35.914065 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:37.574759 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:37.590014 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:37.590128 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:37.626883 1131323 cri.go:89] found id: ""
	I0328 01:05:37.626914 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.626926 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:37.626935 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:37.627005 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:37.665171 1131323 cri.go:89] found id: ""
	I0328 01:05:37.665202 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.665215 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:37.665225 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:37.665294 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:37.702923 1131323 cri.go:89] found id: ""
	I0328 01:05:37.702963 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.702976 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:37.702984 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:37.703064 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:37.741148 1131323 cri.go:89] found id: ""
	I0328 01:05:37.741182 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.741191 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:37.741199 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:37.741269 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:37.782298 1131323 cri.go:89] found id: ""
	I0328 01:05:37.782331 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.782341 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:37.782348 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:37.782407 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:37.819056 1131323 cri.go:89] found id: ""
	I0328 01:05:37.819110 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.819124 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:37.819134 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:37.819215 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:37.862372 1131323 cri.go:89] found id: ""
	I0328 01:05:37.862414 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.862427 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:37.862436 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:37.862507 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:37.899639 1131323 cri.go:89] found id: ""
	I0328 01:05:37.899675 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.899689 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:37.899703 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:37.899721 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:37.978962 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:37.978990 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:37.979007 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:38.058972 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:38.059015 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:38.102975 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:38.103016 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:38.157994 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:38.158035 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:38.041091 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.041892 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:38.371787 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.373131 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:38.412214 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.415412 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:42.912341 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.673425 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:40.690969 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:40.691041 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:40.735552 1131323 cri.go:89] found id: ""
	I0328 01:05:40.735585 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.735594 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:40.735602 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:40.735669 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:40.816611 1131323 cri.go:89] found id: ""
	I0328 01:05:40.816648 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.816661 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:40.816669 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:40.816725 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:40.864093 1131323 cri.go:89] found id: ""
	I0328 01:05:40.864125 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.864138 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:40.864147 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:40.864218 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:40.908781 1131323 cri.go:89] found id: ""
	I0328 01:05:40.908817 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.908829 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:40.908846 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:40.908914 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:40.950330 1131323 cri.go:89] found id: ""
	I0328 01:05:40.950369 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.950382 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:40.950390 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:40.950481 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:40.989983 1131323 cri.go:89] found id: ""
	I0328 01:05:40.990041 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.990054 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:40.990063 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:40.990136 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:41.042428 1131323 cri.go:89] found id: ""
	I0328 01:05:41.042470 1131323 logs.go:276] 0 containers: []
	W0328 01:05:41.042481 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:41.042489 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:41.042560 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:41.089309 1131323 cri.go:89] found id: ""
	I0328 01:05:41.089342 1131323 logs.go:276] 0 containers: []
	W0328 01:05:41.089353 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:41.089363 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:41.089377 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:41.148502 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:41.148547 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:41.163889 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:41.163918 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:41.242825 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:41.242848 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:41.242861 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:41.322658 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:41.322702 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:43.865117 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:43.880642 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:43.880729 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:43.919519 1131323 cri.go:89] found id: ""
	I0328 01:05:43.919550 1131323 logs.go:276] 0 containers: []
	W0328 01:05:43.919559 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:43.919565 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:43.919622 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:43.957906 1131323 cri.go:89] found id: ""
	I0328 01:05:43.957936 1131323 logs.go:276] 0 containers: []
	W0328 01:05:43.957945 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:43.957951 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:43.958008 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:44.001448 1131323 cri.go:89] found id: ""
	I0328 01:05:44.001486 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.001497 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:44.001505 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:44.001573 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:44.039767 1131323 cri.go:89] found id: ""
	I0328 01:05:44.039801 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.039812 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:44.039818 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:44.039871 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:44.079441 1131323 cri.go:89] found id: ""
	I0328 01:05:44.079470 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.079480 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:44.079486 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:44.079541 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:44.116534 1131323 cri.go:89] found id: ""
	I0328 01:05:44.116584 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.116596 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:44.116604 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:44.116670 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:44.163335 1131323 cri.go:89] found id: ""
	I0328 01:05:44.163369 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.163381 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:44.163389 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:44.163457 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:44.201367 1131323 cri.go:89] found id: ""
	I0328 01:05:44.201403 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.201413 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:44.201424 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:44.201442 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:44.257485 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:44.257529 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:44.272489 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:44.272534 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:44.354442 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:44.354477 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:44.354498 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:44.436219 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:44.436262 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
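Every "describe nodes" attempt in this window fails the same way: the kubeconfig points the versioned kubectl at localhost:8443, and with no kube-apiserver container running (see the empty crictl probes above) nothing accepts the connection. A quick reachability check, sketched in Go as a hypothetical helper rather than anything in the test suite, makes that state visible before shelling out to kubectl:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // apiserverListening dials the address the kubeconfig targets and reports
    // whether anything accepts the TCP connection within the timeout.
    func apiserverListening(addr string, timeout time.Duration) bool {
        conn, err := net.DialTimeout("tcp", addr, timeout)
        if err != nil {
            return false // e.g. "connection refused", as in the log above
        }
        conn.Close()
        return true
    }

    func main() {
        fmt.Println(apiserverListening("127.0.0.1:8443", 2*time.Second))
    }

When this returns false, any kubectl call against that endpoint will fail with the same "connection to the server localhost:8443 was refused" message seen throughout the log.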
	I0328 01:05:42.044443 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:44.541648 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:42.872072 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:44.873552 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:44.913292 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:47.412489 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:46.982131 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:46.998022 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:46.998100 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:47.037167 1131323 cri.go:89] found id: ""
	I0328 01:05:47.037205 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.037217 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:47.037226 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:47.037295 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:47.076175 1131323 cri.go:89] found id: ""
	I0328 01:05:47.076213 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.076226 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:47.076235 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:47.076306 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:47.115193 1131323 cri.go:89] found id: ""
	I0328 01:05:47.115227 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.115237 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:47.115244 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:47.115297 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:47.154942 1131323 cri.go:89] found id: ""
	I0328 01:05:47.154976 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.154989 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:47.154998 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:47.155069 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:47.196571 1131323 cri.go:89] found id: ""
	I0328 01:05:47.196609 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.196622 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:47.196631 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:47.196707 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:47.237572 1131323 cri.go:89] found id: ""
	I0328 01:05:47.237616 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.237625 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:47.237633 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:47.237691 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:47.275208 1131323 cri.go:89] found id: ""
	I0328 01:05:47.275254 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.275265 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:47.275272 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:47.275329 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:47.313515 1131323 cri.go:89] found id: ""
	I0328 01:05:47.313555 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.313568 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:47.313582 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:47.313598 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:47.368993 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:47.369033 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:47.383063 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:47.383097 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:47.460239 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:47.460278 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:47.460298 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:47.538552 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:47.538594 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:50.084960 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:50.101764 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:50.101859 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:50.141457 1131323 cri.go:89] found id: ""
	I0328 01:05:50.141488 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.141497 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:50.141504 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:50.141557 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:50.178184 1131323 cri.go:89] found id: ""
	I0328 01:05:50.178220 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.178254 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:50.178263 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:50.178358 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:50.217908 1131323 cri.go:89] found id: ""
	I0328 01:05:50.217946 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.217959 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:50.217966 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:50.218027 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:50.256029 1131323 cri.go:89] found id: ""
	I0328 01:05:50.256058 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.256067 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:50.256074 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:50.256130 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:50.295054 1131323 cri.go:89] found id: ""
	I0328 01:05:50.295087 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.295100 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:50.295106 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:50.295165 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:47.042338 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:49.542501 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:47.372867 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:49.872948 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:49.913873 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:52.412600 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:50.334695 1131323 cri.go:89] found id: ""
	I0328 01:05:50.336588 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.336605 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:50.336614 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:50.336697 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:50.375968 1131323 cri.go:89] found id: ""
	I0328 01:05:50.376003 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.376013 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:50.376021 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:50.376091 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:50.417146 1131323 cri.go:89] found id: ""
	I0328 01:05:50.417175 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.417184 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:50.417194 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:50.417207 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:50.474090 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:50.474131 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:50.489006 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:50.489040 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:50.566220 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:50.566268 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:50.566286 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:50.645593 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:50.645653 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:53.190872 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:53.205223 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:53.205320 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:53.242396 1131323 cri.go:89] found id: ""
	I0328 01:05:53.242433 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.242445 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:53.242455 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:53.242524 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:53.281237 1131323 cri.go:89] found id: ""
	I0328 01:05:53.281275 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.281288 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:53.281297 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:53.281357 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:53.321239 1131323 cri.go:89] found id: ""
	I0328 01:05:53.321268 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.321287 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:53.321296 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:53.321358 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:53.359240 1131323 cri.go:89] found id: ""
	I0328 01:05:53.359269 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.359278 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:53.359284 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:53.359337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:53.396973 1131323 cri.go:89] found id: ""
	I0328 01:05:53.397008 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.397021 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:53.397030 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:53.397100 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:53.438368 1131323 cri.go:89] found id: ""
	I0328 01:05:53.438400 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.438408 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:53.438415 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:53.438477 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:53.474679 1131323 cri.go:89] found id: ""
	I0328 01:05:53.474708 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.474732 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:53.474742 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:53.474799 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:53.512509 1131323 cri.go:89] found id: ""
	I0328 01:05:53.512547 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.512560 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:53.512579 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:53.512599 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:53.569536 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:53.569580 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:53.584977 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:53.585016 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:53.657865 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:53.657895 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:53.657908 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:53.733158 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:53.733203 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:52.041508 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:54.541663 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:52.373317 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:54.872090 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:54.913464 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:57.413256 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
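Taken together, the timestamps show a bounded retry: roughly every three seconds the process re-runs `sudo pgrep -xnf kube-apiserver.*minikube.*`, and until an apiserver process appears it re-gathers the kubelet, dmesg, CRI-O and container-status logs. A minimal sketch of such a wait loop, assuming local access to pgrep and sudo; the function name and the interval/timeout values are illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls pgrep (as the log does) until a matching
    // kube-apiserver process appears or the deadline passes.
    func waitForAPIServer(interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 only when a matching process exists.
            if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                return nil
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServer(3*time.Second, 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }

In this run the loop never succeeds, which is why the same probe/gather cycle repeats for the remainder of the log.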
	I0328 01:05:56.278693 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:56.291870 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:56.291949 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:56.332909 1131323 cri.go:89] found id: ""
	I0328 01:05:56.332943 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.332957 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:56.332965 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:56.333038 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:56.370608 1131323 cri.go:89] found id: ""
	I0328 01:05:56.370638 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.370649 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:56.370657 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:56.370721 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:56.408031 1131323 cri.go:89] found id: ""
	I0328 01:05:56.408068 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.408081 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:56.408100 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:56.408170 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:56.445057 1131323 cri.go:89] found id: ""
	I0328 01:05:56.445092 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.445105 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:56.445113 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:56.445177 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:56.486868 1131323 cri.go:89] found id: ""
	I0328 01:05:56.486898 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.486908 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:56.486914 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:56.486969 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:56.533594 1131323 cri.go:89] found id: ""
	I0328 01:05:56.533622 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.533632 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:56.533638 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:56.533702 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:56.569200 1131323 cri.go:89] found id: ""
	I0328 01:05:56.569237 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.569250 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:56.569258 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:56.569335 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:56.604919 1131323 cri.go:89] found id: ""
	I0328 01:05:56.604955 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.604968 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:56.604982 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:56.605011 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:56.654473 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:56.654513 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:56.671309 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:56.671339 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:56.739516 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:56.739543 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:56.739559 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:56.817445 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:56.817495 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:59.361711 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:59.375672 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:59.375750 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:59.414329 1131323 cri.go:89] found id: ""
	I0328 01:05:59.414360 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.414371 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:59.414379 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:59.414443 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:59.454813 1131323 cri.go:89] found id: ""
	I0328 01:05:59.454846 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.454855 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:59.454862 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:59.454917 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:59.492890 1131323 cri.go:89] found id: ""
	I0328 01:05:59.492924 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.492936 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:59.492946 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:59.493043 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:59.529412 1131323 cri.go:89] found id: ""
	I0328 01:05:59.529443 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.529454 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:59.529464 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:59.529521 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:59.568620 1131323 cri.go:89] found id: ""
	I0328 01:05:59.568655 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.568664 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:59.568671 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:59.568731 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:59.605826 1131323 cri.go:89] found id: ""
	I0328 01:05:59.605861 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.605874 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:59.605883 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:59.605955 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:59.645799 1131323 cri.go:89] found id: ""
	I0328 01:05:59.645833 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.645847 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:59.645856 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:59.645931 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:59.683866 1131323 cri.go:89] found id: ""
	I0328 01:05:59.683903 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.683916 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:59.683929 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:59.683953 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:59.726678 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:59.726711 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:59.779910 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:59.779954 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:59.795743 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:59.795774 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:59.875137 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:59.875162 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:59.875174 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:56.542345 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:58.542599 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:00.543094 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:57.372258 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:59.872483 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:59.912150 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:01.913694 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
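The interleaved pod_ready lines come from three other tests running concurrently (log PIDs 1131600, 1130827, 1130949), each polling its metrics-server pod in kube-system; the Ready condition stays False throughout this window. Roughly the same check can be reproduced by hand with kubectl against the corresponding profile's context (context name assumed, pod name taken from the log):

    $ kubectl --context <profile> -n kube-system get pod metrics-server-57f55c9bc5-w4ww4 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # prints False while the pod is unready, as in the log above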
	I0328 01:06:02.455212 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:02.468850 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:02.468945 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:02.506347 1131323 cri.go:89] found id: ""
	I0328 01:06:02.506385 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.506397 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:02.506406 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:02.506484 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:02.546056 1131323 cri.go:89] found id: ""
	I0328 01:06:02.546085 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.546096 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:02.546103 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:02.546173 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:02.585343 1131323 cri.go:89] found id: ""
	I0328 01:06:02.585385 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.585398 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:02.585407 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:02.585563 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:02.625380 1131323 cri.go:89] found id: ""
	I0328 01:06:02.625414 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.625423 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:02.625429 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:02.625486 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:02.664653 1131323 cri.go:89] found id: ""
	I0328 01:06:02.664687 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.664701 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:02.664708 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:02.664764 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:02.704468 1131323 cri.go:89] found id: ""
	I0328 01:06:02.704498 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.704511 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:02.704519 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:02.704595 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:02.740969 1131323 cri.go:89] found id: ""
	I0328 01:06:02.740997 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.741007 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:02.741014 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:02.741102 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:02.782113 1131323 cri.go:89] found id: ""
	I0328 01:06:02.782150 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.782163 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:02.782185 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:02.782203 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:02.836804 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:02.836848 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:02.852266 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:02.852299 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:02.929441 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:02.929467 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:02.929484 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:03.008114 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:03.008156 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:03.041919 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:05.542209 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:02.372332 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:04.871689 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:04.413251 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:06.912348 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:05.554291 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:05.570208 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:05.570304 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:05.610887 1131323 cri.go:89] found id: ""
	I0328 01:06:05.610916 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.610926 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:05.610932 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:05.610991 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:05.651561 1131323 cri.go:89] found id: ""
	I0328 01:06:05.651600 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.651610 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:05.651616 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:05.651681 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:05.690801 1131323 cri.go:89] found id: ""
	I0328 01:06:05.690830 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.690843 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:05.690851 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:05.690920 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:05.729098 1131323 cri.go:89] found id: ""
	I0328 01:06:05.729136 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.729146 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:05.729153 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:05.729225 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:05.774461 1131323 cri.go:89] found id: ""
	I0328 01:06:05.774499 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.774520 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:05.774530 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:05.774602 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:05.812135 1131323 cri.go:89] found id: ""
	I0328 01:06:05.812166 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.812180 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:05.812188 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:05.812255 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:05.847744 1131323 cri.go:89] found id: ""
	I0328 01:06:05.847775 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.847786 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:05.847796 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:05.847863 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:05.885600 1131323 cri.go:89] found id: ""
	I0328 01:06:05.885641 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.885656 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:05.885669 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:05.885684 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:05.963837 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:05.963879 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:06.007342 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:06.007381 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:06.062798 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:06.062843 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:06.077547 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:06.077599 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:06.148373 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:08.648791 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:08.664082 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:08.664154 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:08.701746 1131323 cri.go:89] found id: ""
	I0328 01:06:08.701776 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.701789 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:08.701797 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:08.701855 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:08.739035 1131323 cri.go:89] found id: ""
	I0328 01:06:08.739066 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.739076 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:08.739083 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:08.739136 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:08.776128 1131323 cri.go:89] found id: ""
	I0328 01:06:08.776166 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.776180 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:08.776189 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:08.776255 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:08.816136 1131323 cri.go:89] found id: ""
	I0328 01:06:08.816172 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.816187 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:08.816196 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:08.816271 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:08.855675 1131323 cri.go:89] found id: ""
	I0328 01:06:08.855709 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.855722 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:08.855730 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:08.855802 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:08.893161 1131323 cri.go:89] found id: ""
	I0328 01:06:08.893198 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.893212 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:08.893221 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:08.893297 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:08.935498 1131323 cri.go:89] found id: ""
	I0328 01:06:08.935527 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.935540 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:08.935548 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:08.935622 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:08.971622 1131323 cri.go:89] found id: ""
	I0328 01:06:08.971657 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.971668 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:08.971679 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:08.971696 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:09.039975 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:09.040036 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:09.057877 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:09.057920 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:09.130093 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:09.130119 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:09.130135 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:09.217177 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:09.217228 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:08.040921 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:10.042895 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:06.872367 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:08.873187 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:08.914313 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:11.412330 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:11.762393 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:11.776356 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:11.776424 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:11.811982 1131323 cri.go:89] found id: ""
	I0328 01:06:11.812017 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.812030 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:11.812038 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:11.812103 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:11.849789 1131323 cri.go:89] found id: ""
	I0328 01:06:11.849817 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.849826 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:11.849833 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:11.849884 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:11.890455 1131323 cri.go:89] found id: ""
	I0328 01:06:11.890488 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.890497 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:11.890503 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:11.890559 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:11.929047 1131323 cri.go:89] found id: ""
	I0328 01:06:11.929093 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.929102 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:11.929108 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:11.929164 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:11.969536 1131323 cri.go:89] found id: ""
	I0328 01:06:11.969566 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.969576 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:11.969583 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:11.969641 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:12.008779 1131323 cri.go:89] found id: ""
	I0328 01:06:12.008811 1131323 logs.go:276] 0 containers: []
	W0328 01:06:12.008821 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:12.008828 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:12.008890 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:12.044061 1131323 cri.go:89] found id: ""
	I0328 01:06:12.044091 1131323 logs.go:276] 0 containers: []
	W0328 01:06:12.044104 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:12.044112 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:12.044176 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:12.082307 1131323 cri.go:89] found id: ""
	I0328 01:06:12.082336 1131323 logs.go:276] 0 containers: []
	W0328 01:06:12.082346 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:12.082357 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:12.082369 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:12.133044 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:12.133091 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:12.148584 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:12.148624 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:12.218799 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:12.218834 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:12.218852 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:12.295580 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:12.295623 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:14.842815 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:14.856385 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:14.856456 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:14.895351 1131323 cri.go:89] found id: ""
	I0328 01:06:14.895409 1131323 logs.go:276] 0 containers: []
	W0328 01:06:14.895418 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:14.895424 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:14.895476 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:14.930333 1131323 cri.go:89] found id: ""
	I0328 01:06:14.930366 1131323 logs.go:276] 0 containers: []
	W0328 01:06:14.930380 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:14.930389 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:14.930461 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:14.968701 1131323 cri.go:89] found id: ""
	I0328 01:06:14.968742 1131323 logs.go:276] 0 containers: []
	W0328 01:06:14.968754 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:14.968767 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:14.968867 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:15.004580 1131323 cri.go:89] found id: ""
	I0328 01:06:15.004613 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.004626 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:15.004634 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:15.004700 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:15.046702 1131323 cri.go:89] found id: ""
	I0328 01:06:15.046726 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.046736 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:15.046742 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:15.046795 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:15.088693 1131323 cri.go:89] found id: ""
	I0328 01:06:15.088725 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.088734 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:15.088741 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:15.088797 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:15.130293 1131323 cri.go:89] found id: ""
	I0328 01:06:15.130324 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.130333 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:15.130339 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:15.130394 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:15.172381 1131323 cri.go:89] found id: ""
	I0328 01:06:15.172408 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.172417 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:15.172427 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:15.172440 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:15.225631 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:15.225674 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:15.241251 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:15.241294 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:15.319701 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:15.319731 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:15.319747 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:12.540755 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:14.541618 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:11.371580 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:13.371640 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:15.373147 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:13.911792 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:15.912479 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:17.913926 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:15.406813 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:15.406853 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:17.993893 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:18.007755 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:18.007843 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:18.047750 1131323 cri.go:89] found id: ""
	I0328 01:06:18.047777 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.047786 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:18.047797 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:18.047855 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:18.088264 1131323 cri.go:89] found id: ""
	I0328 01:06:18.088291 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.088303 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:18.088311 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:18.088369 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:18.127485 1131323 cri.go:89] found id: ""
	I0328 01:06:18.127514 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.127523 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:18.127530 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:18.127581 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:18.167462 1131323 cri.go:89] found id: ""
	I0328 01:06:18.167496 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.167510 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:18.167516 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:18.167571 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:18.209536 1131323 cri.go:89] found id: ""
	I0328 01:06:18.209571 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.209583 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:18.209591 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:18.209662 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:18.247565 1131323 cri.go:89] found id: ""
	I0328 01:06:18.247601 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.247614 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:18.247623 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:18.247701 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:18.288123 1131323 cri.go:89] found id: ""
	I0328 01:06:18.288162 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.288172 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:18.288179 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:18.288242 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:18.328132 1131323 cri.go:89] found id: ""
	I0328 01:06:18.328161 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.328170 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:18.328181 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:18.328193 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:18.403245 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:18.403287 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:18.403305 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:18.483446 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:18.483500 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:18.527357 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:18.527392 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:18.588402 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:18.588463 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:16.542137 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:18.542554 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:20.546396 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:17.872147 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:20.373000 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:20.412369 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:22.412661 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:21.103566 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:21.117538 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:21.117616 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:21.174215 1131323 cri.go:89] found id: ""
	I0328 01:06:21.174270 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.174284 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:21.174293 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:21.174364 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:21.238666 1131323 cri.go:89] found id: ""
	I0328 01:06:21.238707 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.238722 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:21.238730 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:21.238803 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:21.303510 1131323 cri.go:89] found id: ""
	I0328 01:06:21.303543 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.303553 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:21.303559 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:21.303614 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:21.345823 1131323 cri.go:89] found id: ""
	I0328 01:06:21.345853 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.345862 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:21.345870 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:21.345940 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:21.386205 1131323 cri.go:89] found id: ""
	I0328 01:06:21.386248 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.386261 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:21.386269 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:21.386335 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:21.427424 1131323 cri.go:89] found id: ""
	I0328 01:06:21.427457 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.427470 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:21.427478 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:21.427546 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:21.465054 1131323 cri.go:89] found id: ""
	I0328 01:06:21.465087 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.465099 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:21.465107 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:21.465177 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:21.507197 1131323 cri.go:89] found id: ""
	I0328 01:06:21.507229 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.507238 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:21.507248 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:21.507263 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:21.586657 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:21.586709 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:21.633702 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:21.633739 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:21.688960 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:21.688999 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:21.704675 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:21.704714 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:21.781612 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
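Every failed describe-nodes attempt points at the same root cause: the kubeconfig at /var/lib/minikube/kubeconfig targets localhost:8443 on the node, and nothing is listening there because the apiserver container never started. A quick way to confirm the port state from inside the node (a sketch, not part of the test's own tooling):

    $ sudo ss -ltnp | grep 8443 || echo "nothing listening on :8443"
    $ sudo curl -sk https://localhost:8443/healthz || true   # connection refused while the apiserver is down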
	I0328 01:06:24.282521 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:24.297096 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:24.297185 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:24.338745 1131323 cri.go:89] found id: ""
	I0328 01:06:24.338780 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.338793 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:24.338802 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:24.338872 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:24.375499 1131323 cri.go:89] found id: ""
	I0328 01:06:24.375528 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.375540 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:24.375548 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:24.375616 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:24.410939 1131323 cri.go:89] found id: ""
	I0328 01:06:24.410966 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.410978 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:24.410986 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:24.411042 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:24.455316 1131323 cri.go:89] found id: ""
	I0328 01:06:24.455345 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.455354 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:24.455360 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:24.455427 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:24.493177 1131323 cri.go:89] found id: ""
	I0328 01:06:24.493206 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.493219 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:24.493228 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:24.493300 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:24.533612 1131323 cri.go:89] found id: ""
	I0328 01:06:24.533648 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.533659 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:24.533668 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:24.533743 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:24.573960 1131323 cri.go:89] found id: ""
	I0328 01:06:24.573998 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.574014 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:24.574020 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:24.574074 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:24.617282 1131323 cri.go:89] found id: ""
	I0328 01:06:24.617319 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.617333 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:24.617346 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:24.617364 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:24.691660 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:24.691688 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:24.691707 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:24.773138 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:24.773180 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:24.820408 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:24.820440 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:24.875901 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:24.875940 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:23.041030 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:25.041064 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:22.874513 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:25.378939 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:24.413732 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:26.912433 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:27.392663 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:27.407958 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:27.408046 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:27.446750 1131323 cri.go:89] found id: ""
	I0328 01:06:27.446782 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.446792 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:27.446799 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:27.446872 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:27.489199 1131323 cri.go:89] found id: ""
	I0328 01:06:27.489236 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.489249 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:27.489258 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:27.489316 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:27.525754 1131323 cri.go:89] found id: ""
	I0328 01:06:27.525787 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.525796 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:27.525803 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:27.525861 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:27.560817 1131323 cri.go:89] found id: ""
	I0328 01:06:27.560849 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.560858 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:27.560866 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:27.560930 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:27.597706 1131323 cri.go:89] found id: ""
	I0328 01:06:27.597736 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.597744 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:27.597750 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:27.597821 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:27.635170 1131323 cri.go:89] found id: ""
	I0328 01:06:27.635211 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.635223 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:27.635232 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:27.635299 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:27.672043 1131323 cri.go:89] found id: ""
	I0328 01:06:27.672079 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.672091 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:27.672099 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:27.672166 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:27.711401 1131323 cri.go:89] found id: ""
	I0328 01:06:27.711435 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.711448 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:27.711468 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:27.711488 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:27.755172 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:27.755211 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:27.807588 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:27.807632 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:27.823557 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:27.823589 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:27.905292 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:27.905316 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:27.905329 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:27.041105 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:29.041205 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:27.873797 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:30.374214 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:29.412378 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:31.413211 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:30.491565 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:30.505601 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:30.505667 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:30.541894 1131323 cri.go:89] found id: ""
	I0328 01:06:30.541929 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.541940 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:30.541949 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:30.542029 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:30.581484 1131323 cri.go:89] found id: ""
	I0328 01:06:30.581514 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.581532 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:30.581538 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:30.581613 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:30.624788 1131323 cri.go:89] found id: ""
	I0328 01:06:30.624830 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.624842 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:30.624850 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:30.624922 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:30.664373 1131323 cri.go:89] found id: ""
	I0328 01:06:30.664403 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.664413 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:30.664420 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:30.664489 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:30.702885 1131323 cri.go:89] found id: ""
	I0328 01:06:30.702917 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.702928 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:30.702934 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:30.703006 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:30.748170 1131323 cri.go:89] found id: ""
	I0328 01:06:30.748205 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.748217 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:30.748226 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:30.748316 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:30.785218 1131323 cri.go:89] found id: ""
	I0328 01:06:30.785255 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.785268 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:30.785276 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:30.785343 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:30.825529 1131323 cri.go:89] found id: ""
	I0328 01:06:30.825555 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.825565 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:30.825575 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:30.825589 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:30.881353 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:30.881391 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:30.896682 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:30.896718 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:30.973356 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:30.973386 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:30.973402 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:31.049014 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:31.049047 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:33.594365 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:33.609372 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:33.609460 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:33.648699 1131323 cri.go:89] found id: ""
	I0328 01:06:33.648728 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.648749 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:33.648757 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:33.648829 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:33.686707 1131323 cri.go:89] found id: ""
	I0328 01:06:33.686744 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.686758 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:33.686767 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:33.686832 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:33.723091 1131323 cri.go:89] found id: ""
	I0328 01:06:33.723121 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.723130 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:33.723136 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:33.723187 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:33.763439 1131323 cri.go:89] found id: ""
	I0328 01:06:33.763471 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.763481 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:33.763488 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:33.763544 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:33.812236 1131323 cri.go:89] found id: ""
	I0328 01:06:33.812271 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.812285 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:33.812294 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:33.812365 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:33.849421 1131323 cri.go:89] found id: ""
	I0328 01:06:33.849454 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.849465 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:33.849473 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:33.849528 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:33.888020 1131323 cri.go:89] found id: ""
	I0328 01:06:33.888051 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.888065 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:33.888078 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:33.888145 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:33.925952 1131323 cri.go:89] found id: ""
	I0328 01:06:33.925990 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.926003 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:33.926016 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:33.926034 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:33.976695 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:33.976734 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:33.991708 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:33.991752 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:34.068244 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:34.068276 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:34.068293 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:34.155843 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:34.155885 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:31.041375 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:33.041526 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:35.541169 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:32.872009 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:34.873043 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:33.913191 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:36.413213 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:36.697480 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:36.712322 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:36.712420 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:36.749541 1131323 cri.go:89] found id: ""
	I0328 01:06:36.749570 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.749579 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:36.749587 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:36.749655 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:36.788226 1131323 cri.go:89] found id: ""
	I0328 01:06:36.788255 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.788264 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:36.788270 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:36.788323 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:36.823824 1131323 cri.go:89] found id: ""
	I0328 01:06:36.823856 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.823866 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:36.823872 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:36.823927 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:36.869331 1131323 cri.go:89] found id: ""
	I0328 01:06:36.869362 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.869371 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:36.869378 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:36.869473 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:36.907918 1131323 cri.go:89] found id: ""
	I0328 01:06:36.907950 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.907960 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:36.907966 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:36.908028 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:36.947708 1131323 cri.go:89] found id: ""
	I0328 01:06:36.947738 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.947749 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:36.947757 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:36.947824 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:36.986200 1131323 cri.go:89] found id: ""
	I0328 01:06:36.986251 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.986266 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:36.986275 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:36.986350 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:37.026670 1131323 cri.go:89] found id: ""
	I0328 01:06:37.026698 1131323 logs.go:276] 0 containers: []
	W0328 01:06:37.026708 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:37.026718 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:37.026732 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:37.079891 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:37.079933 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:37.094347 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:37.094378 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:37.168653 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:37.168681 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:37.168695 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:37.247909 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:37.247949 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:39.791285 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:39.807921 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:39.808000 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:39.851460 1131323 cri.go:89] found id: ""
	I0328 01:06:39.851499 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.851512 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:39.851520 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:39.851593 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:39.889506 1131323 cri.go:89] found id: ""
	I0328 01:06:39.889541 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.889554 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:39.889564 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:39.889632 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:39.930291 1131323 cri.go:89] found id: ""
	I0328 01:06:39.930321 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.930331 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:39.930337 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:39.930400 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:39.965121 1131323 cri.go:89] found id: ""
	I0328 01:06:39.965160 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.965174 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:39.965183 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:39.965252 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:40.003217 1131323 cri.go:89] found id: ""
	I0328 01:06:40.003248 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.003258 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:40.003264 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:40.003319 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:40.042702 1131323 cri.go:89] found id: ""
	I0328 01:06:40.042737 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.042749 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:40.042759 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:40.042826 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:40.079733 1131323 cri.go:89] found id: ""
	I0328 01:06:40.079769 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.079780 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:40.079788 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:40.079852 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:40.117066 1131323 cri.go:89] found id: ""
	I0328 01:06:40.117098 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.117107 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:40.117117 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:40.117130 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:40.158589 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:40.158623 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:40.210997 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:40.211049 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:40.225419 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:40.225453 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:40.305356 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:40.305385 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:40.305401 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:37.541534 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:39.541905 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:36.874220 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:39.373763 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:38.413719 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:40.912939 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:42.913528 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:42.896394 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:42.912285 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:42.912355 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:42.949381 1131323 cri.go:89] found id: ""
	I0328 01:06:42.949411 1131323 logs.go:276] 0 containers: []
	W0328 01:06:42.949420 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:42.949427 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:42.949496 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:42.985325 1131323 cri.go:89] found id: ""
	I0328 01:06:42.985358 1131323 logs.go:276] 0 containers: []
	W0328 01:06:42.985371 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:42.985388 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:42.985456 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:43.023570 1131323 cri.go:89] found id: ""
	I0328 01:06:43.023616 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.023630 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:43.023638 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:43.023714 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:43.062995 1131323 cri.go:89] found id: ""
	I0328 01:06:43.063025 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.063036 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:43.063042 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:43.063111 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:43.101666 1131323 cri.go:89] found id: ""
	I0328 01:06:43.101704 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.101713 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:43.101720 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:43.101789 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:43.150713 1131323 cri.go:89] found id: ""
	I0328 01:06:43.150745 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.150757 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:43.150765 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:43.150830 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:43.193449 1131323 cri.go:89] found id: ""
	I0328 01:06:43.193479 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.193487 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:43.193495 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:43.193559 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:43.237641 1131323 cri.go:89] found id: ""
	I0328 01:06:43.237673 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.237682 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:43.237698 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:43.237714 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:43.287282 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:43.287320 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:43.303307 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:43.303343 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:43.383597 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:43.383619 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:43.383632 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:43.467874 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:43.467914 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:42.041406 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:44.540550 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:41.874286 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:44.372393 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:45.410973 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:47.412852 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:46.011081 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:46.025731 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:46.025824 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:46.064336 1131323 cri.go:89] found id: ""
	I0328 01:06:46.064371 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.064385 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:46.064394 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:46.064451 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:46.104493 1131323 cri.go:89] found id: ""
	I0328 01:06:46.104530 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.104550 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:46.104559 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:46.104636 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:46.147546 1131323 cri.go:89] found id: ""
	I0328 01:06:46.147582 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.147594 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:46.147602 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:46.147656 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:46.186162 1131323 cri.go:89] found id: ""
	I0328 01:06:46.186197 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.186207 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:46.186213 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:46.186296 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:46.230412 1131323 cri.go:89] found id: ""
	I0328 01:06:46.230450 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.230464 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:46.230473 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:46.230552 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:46.266000 1131323 cri.go:89] found id: ""
	I0328 01:06:46.266037 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.266050 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:46.266059 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:46.266126 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:46.301031 1131323 cri.go:89] found id: ""
	I0328 01:06:46.301065 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.301077 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:46.301084 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:46.301155 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:46.339222 1131323 cri.go:89] found id: ""
	I0328 01:06:46.339248 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.339258 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:46.339271 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:46.339290 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:46.352558 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:46.352595 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:46.427283 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:46.427308 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:46.427325 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:46.512134 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:46.512178 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:46.558276 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:46.558307 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:49.113455 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:49.127554 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:49.127645 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:49.169380 1131323 cri.go:89] found id: ""
	I0328 01:06:49.169421 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.169435 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:49.169444 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:49.169511 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:49.204540 1131323 cri.go:89] found id: ""
	I0328 01:06:49.204568 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.204579 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:49.204596 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:49.204664 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:49.243074 1131323 cri.go:89] found id: ""
	I0328 01:06:49.243102 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.243112 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:49.243119 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:49.243170 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:49.281264 1131323 cri.go:89] found id: ""
	I0328 01:06:49.281301 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.281314 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:49.281322 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:49.281391 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:49.320473 1131323 cri.go:89] found id: ""
	I0328 01:06:49.320505 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.320514 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:49.320521 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:49.320592 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:49.357715 1131323 cri.go:89] found id: ""
	I0328 01:06:49.357749 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.357759 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:49.357766 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:49.357823 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:49.398427 1131323 cri.go:89] found id: ""
	I0328 01:06:49.398464 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.398477 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:49.398498 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:49.398576 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:49.439921 1131323 cri.go:89] found id: ""
	I0328 01:06:49.439956 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.439969 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:49.439982 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:49.440003 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:49.557260 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:49.557289 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:49.557312 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:49.640105 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:49.640169 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:49.683153 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:49.683185 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:49.737420 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:49.737463 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:46.541377 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:49.041761 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:46.374869 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:48.875897 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:49.912535 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:51.912893 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:52.253208 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:52.268572 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:52.268649 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:52.305136 1131323 cri.go:89] found id: ""
	I0328 01:06:52.305180 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.305193 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:52.305202 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:52.305273 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:52.344774 1131323 cri.go:89] found id: ""
	I0328 01:06:52.344806 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.344816 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:52.344823 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:52.344885 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:52.382127 1131323 cri.go:89] found id: ""
	I0328 01:06:52.382174 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.382185 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:52.382200 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:52.382280 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:52.421340 1131323 cri.go:89] found id: ""
	I0328 01:06:52.421368 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.421377 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:52.421383 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:52.421433 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:52.460046 1131323 cri.go:89] found id: ""
	I0328 01:06:52.460084 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.460100 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:52.460107 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:52.460164 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:52.500067 1131323 cri.go:89] found id: ""
	I0328 01:06:52.500094 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.500102 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:52.500109 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:52.500171 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:52.537614 1131323 cri.go:89] found id: ""
	I0328 01:06:52.537646 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.537671 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:52.537680 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:52.537745 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:52.577362 1131323 cri.go:89] found id: ""
	I0328 01:06:52.577392 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.577402 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:52.577417 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:52.577434 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:52.633638 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:52.633689 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:52.650762 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:52.650796 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:52.729436 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:52.729470 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:52.729484 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:52.818193 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:52.818248 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:51.540541 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:53.541340 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:55.542165 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:51.376916 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:53.872313 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:55.873335 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:54.411986 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:56.412892 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:55.362950 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:55.378461 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:55.378577 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:55.419968 1131323 cri.go:89] found id: ""
	I0328 01:06:55.419995 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.420005 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:55.420010 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:55.420072 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:55.464308 1131323 cri.go:89] found id: ""
	I0328 01:06:55.464341 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.464350 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:55.464357 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:55.464421 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:55.523059 1131323 cri.go:89] found id: ""
	I0328 01:06:55.523092 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.523106 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:55.523114 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:55.523186 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:55.570957 1131323 cri.go:89] found id: ""
	I0328 01:06:55.570990 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.571004 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:55.571013 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:55.571077 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:55.606712 1131323 cri.go:89] found id: ""
	I0328 01:06:55.606739 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.606749 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:55.606755 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:55.606817 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:55.646445 1131323 cri.go:89] found id: ""
	I0328 01:06:55.646477 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.646486 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:55.646493 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:55.646548 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:55.685176 1131323 cri.go:89] found id: ""
	I0328 01:06:55.685208 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.685217 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:55.685225 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:55.685289 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:55.722948 1131323 cri.go:89] found id: ""
	I0328 01:06:55.722984 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.722995 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:55.723006 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:55.723022 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:55.797332 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:55.797368 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:55.797385 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:55.877648 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:55.877688 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:55.918966 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:55.918997 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:55.971226 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:55.971272 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:58.488464 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:58.504999 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:58.505088 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:58.549290 1131323 cri.go:89] found id: ""
	I0328 01:06:58.549325 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.549338 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:58.549347 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:58.549414 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:58.589222 1131323 cri.go:89] found id: ""
	I0328 01:06:58.589252 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.589261 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:58.589271 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:58.589337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:58.626470 1131323 cri.go:89] found id: ""
	I0328 01:06:58.626499 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.626508 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:58.626514 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:58.626578 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:58.671634 1131323 cri.go:89] found id: ""
	I0328 01:06:58.671663 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.671674 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:58.671683 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:58.671744 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:58.707335 1131323 cri.go:89] found id: ""
	I0328 01:06:58.707370 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.707381 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:58.707390 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:58.707459 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:58.745635 1131323 cri.go:89] found id: ""
	I0328 01:06:58.745666 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.745679 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:58.745687 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:58.745752 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:58.792172 1131323 cri.go:89] found id: ""
	I0328 01:06:58.792205 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.792216 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:58.792225 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:58.792287 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:58.840027 1131323 cri.go:89] found id: ""
	I0328 01:06:58.840063 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.840075 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:58.840089 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:58.840108 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:58.921964 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:58.921988 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:58.922003 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:59.016935 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:59.016980 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:59.065747 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:59.065788 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:59.119189 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:59.119231 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:58.042362 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:00.544351 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:57.875649 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:00.371953 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:58.406154 1130949 pod_ready.go:81] duration metric: took 4m0.000981669s for pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace to be "Ready" ...
	E0328 01:06:58.406192 1130949 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0328 01:06:58.406218 1130949 pod_ready.go:38] duration metric: took 4m11.713667334s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:06:58.406275 1130949 kubeadm.go:591] duration metric: took 4m19.018883002s to restartPrimaryControlPlane
	W0328 01:06:58.406372 1130949 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:06:58.406432 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:07:01.637081 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:07:01.652557 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:07:01.652634 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:07:01.691795 1131323 cri.go:89] found id: ""
	I0328 01:07:01.691832 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.691846 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:07:01.691854 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:07:01.691927 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:07:01.732815 1131323 cri.go:89] found id: ""
	I0328 01:07:01.732850 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.732861 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:07:01.732868 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:07:01.732938 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:07:01.776370 1131323 cri.go:89] found id: ""
	I0328 01:07:01.776408 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.776422 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:07:01.776431 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:07:01.776501 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:07:01.821260 1131323 cri.go:89] found id: ""
	I0328 01:07:01.821290 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.821301 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:07:01.821308 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:07:01.821377 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:07:01.860666 1131323 cri.go:89] found id: ""
	I0328 01:07:01.860696 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.860708 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:07:01.860719 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:07:01.860787 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:07:01.898255 1131323 cri.go:89] found id: ""
	I0328 01:07:01.898291 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.898304 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:07:01.898314 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:07:01.898383 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:07:01.937770 1131323 cri.go:89] found id: ""
	I0328 01:07:01.937809 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.937822 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:07:01.937830 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:07:01.937901 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:07:01.976946 1131323 cri.go:89] found id: ""
	I0328 01:07:01.976981 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.976994 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:07:01.977008 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:07:01.977027 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:07:02.062804 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:07:02.062845 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:07:02.110750 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:07:02.110783 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:07:02.179633 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:07:02.179677 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:07:02.203131 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:07:02.203181 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:07:02.303281 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:07:04.804238 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:07:04.819654 1131323 kubeadm.go:591] duration metric: took 4m2.527630194s to restartPrimaryControlPlane
	W0328 01:07:04.819747 1131323 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:07:04.819787 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:07:03.041692 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:05.540478 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:02.372472 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:04.376413 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:07.322821 1131323 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.50300166s)
	I0328 01:07:07.322918 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:07:07.338692 1131323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:07:07.349812 1131323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:07:07.361566 1131323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:07:07.361597 1131323 kubeadm.go:156] found existing configuration files:
	
	I0328 01:07:07.361667 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:07:07.372926 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:07:07.373008 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:07:07.383770 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:07:07.394260 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:07:07.394332 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:07:07.405874 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:07:07.417177 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:07:07.417254 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:07:07.428589 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:07:07.438788 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:07:07.438845 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:07:07.449649 1131323 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:07:07.533886 1131323 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0328 01:07:07.533989 1131323 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:07:07.693599 1131323 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:07:07.693736 1131323 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:07:07.693852 1131323 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:07:07.910557 1131323 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:07:07.912634 1131323 out.go:204]   - Generating certificates and keys ...
	I0328 01:07:07.912743 1131323 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:07:07.912855 1131323 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:07:07.912984 1131323 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:07:07.913098 1131323 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:07:07.913212 1131323 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:07:07.913298 1131323 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:07:07.913384 1131323 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:07:07.913569 1131323 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:07:07.913947 1131323 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:07:07.914429 1131323 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:07:07.914649 1131323 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:07:07.914728 1131323 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:07:08.225778 1131323 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:07:08.353927 1131323 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:07:08.631240 1131323 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:07:08.824445 1131323 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:07:08.840240 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:07:08.841200 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:07:08.841315 1131323 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:07:08.997129 1131323 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:07:08.999073 1131323 out.go:204]   - Booting up control plane ...
	I0328 01:07:08.999224 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:07:09.014811 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:07:09.015898 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:07:09.016727 1131323 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:07:09.019426 1131323 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:07:07.541363 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:10.041094 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:06.874606 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:09.372537 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:12.540137 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:14.541608 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:11.372643 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:13.873029 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:16.541814 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:19.047225 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:16.372556 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:18.871954 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:20.872047 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:21.542880 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:24.041786 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:22.872845 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:24.873747 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:26.042186 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:28.541303 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:30.540610 1130949 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.134147754s)
	I0328 01:07:30.540688 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:07:30.558971 1130949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:07:30.570331 1130949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:07:30.581192 1130949 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:07:30.581246 1130949 kubeadm.go:156] found existing configuration files:
	
	I0328 01:07:30.581306 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:07:30.592337 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:07:30.592410 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:07:30.603288 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:07:30.613714 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:07:30.613776 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:07:30.624281 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:07:30.634569 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:07:30.634644 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:07:30.647279 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:07:30.658554 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:07:30.658646 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:07:30.670364 1130949 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:07:30.730349 1130949 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0328 01:07:30.730414 1130949 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:07:30.887056 1130949 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:07:30.887234 1130949 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:07:30.887385 1130949 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:07:31.104288 1130949 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:07:27.373135 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:29.373436 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:31.106496 1130949 out.go:204]   - Generating certificates and keys ...
	I0328 01:07:31.106628 1130949 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:07:31.106697 1130949 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:07:31.106765 1130949 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:07:31.106826 1130949 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:07:31.106892 1130949 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:07:31.107528 1130949 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:07:31.108302 1130949 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:07:31.112246 1130949 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:07:31.112762 1130949 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:07:31.113711 1130949 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:07:31.115230 1130949 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:07:31.115284 1130949 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:07:31.297632 1130949 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:07:32.446275 1130949 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:07:32.565869 1130949 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:07:32.641288 1130949 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:07:32.817229 1130949 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:07:32.817814 1130949 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:07:32.820366 1130949 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:07:32.822328 1130949 out.go:204]   - Booting up control plane ...
	I0328 01:07:32.822467 1130949 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:07:32.822550 1130949 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:07:32.822990 1130949 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:07:32.846800 1130949 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:07:32.847829 1130949 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:07:32.847902 1130949 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:07:31.044103 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:33.542106 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:35.542875 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:31.873591 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:33.875737 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:35.881819 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:32.992001 1130949 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:07:38.997010 1130949 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003888 seconds
	I0328 01:07:39.012971 1130949 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 01:07:39.036328 1130949 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 01:07:39.569806 1130949 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 01:07:39.570135 1130949 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-808809 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 01:07:40.085165 1130949 kubeadm.go:309] [bootstrap-token] Using token: 4zk5zi.uttj4zihedk5oj6k
	I0328 01:07:40.086719 1130949 out.go:204]   - Configuring RBAC rules ...
	I0328 01:07:40.086873 1130949 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 01:07:40.096373 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 01:07:40.106484 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 01:07:40.110525 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 01:07:40.120015 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 01:07:40.129060 1130949 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 01:07:40.141167 1130949 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 01:07:40.415429 1130949 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 01:07:40.507275 1130949 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 01:07:40.507333 1130949 kubeadm.go:309] 
	I0328 01:07:40.507551 1130949 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 01:07:40.507617 1130949 kubeadm.go:309] 
	I0328 01:07:40.507860 1130949 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 01:07:40.507891 1130949 kubeadm.go:309] 
	I0328 01:07:40.507947 1130949 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 01:07:40.508057 1130949 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 01:07:40.508140 1130949 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 01:07:40.508157 1130949 kubeadm.go:309] 
	I0328 01:07:40.508250 1130949 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 01:07:40.508264 1130949 kubeadm.go:309] 
	I0328 01:07:40.508329 1130949 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 01:07:40.508344 1130949 kubeadm.go:309] 
	I0328 01:07:40.508421 1130949 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 01:07:40.508539 1130949 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 01:07:40.508626 1130949 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 01:07:40.508632 1130949 kubeadm.go:309] 
	I0328 01:07:40.508804 1130949 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 01:07:40.508970 1130949 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 01:07:40.508990 1130949 kubeadm.go:309] 
	I0328 01:07:40.509155 1130949 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4zk5zi.uttj4zihedk5oj6k \
	I0328 01:07:40.509474 1130949 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 \
	I0328 01:07:40.509514 1130949 kubeadm.go:309] 	--control-plane 
	I0328 01:07:40.509524 1130949 kubeadm.go:309] 
	I0328 01:07:40.509641 1130949 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 01:07:40.509655 1130949 kubeadm.go:309] 
	I0328 01:07:40.509767 1130949 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4zk5zi.uttj4zihedk5oj6k \
	I0328 01:07:40.509932 1130949 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 
	I0328 01:07:40.510139 1130949 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:07:40.510157 1130949 cni.go:84] Creating CNI manager for ""
	I0328 01:07:40.510166 1130949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:07:40.512099 1130949 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:07:38.041290 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:40.041569 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:38.373789 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:40.374369 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:40.513314 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:07:40.563257 1130949 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:07:40.627024 1130949 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 01:07:40.627097 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:40.627137 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-808809 minikube.k8s.io/updated_at=2024_03_28T01_07_40_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=embed-certs-808809 minikube.k8s.io/primary=true
	I0328 01:07:40.928916 1130949 ops.go:34] apiserver oom_adj: -16
	I0328 01:07:40.929138 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:41.429797 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:41.930103 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:42.429366 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:42.540932 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:44.035055 1131600 pod_ready.go:81] duration metric: took 4m0.000860608s for pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace to be "Ready" ...
	E0328 01:07:44.035094 1131600 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0328 01:07:44.035124 1131600 pod_ready.go:38] duration metric: took 4m14.608998431s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:07:44.035180 1131600 kubeadm.go:591] duration metric: took 4m23.470228903s to restartPrimaryControlPlane
	W0328 01:07:44.035292 1131600 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:07:44.035344 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:07:42.375179 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:44.876120 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:42.929464 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:43.429369 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:43.929241 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:44.429904 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:44.930251 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:45.429816 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:45.930177 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:46.429416 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:46.929152 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:47.429708 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:49.021732 1131323 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0328 01:07:49.021890 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:07:49.022195 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:07:47.373358 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:49.872482 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:47.929139 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:48.429732 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:48.930207 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:49.429230 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:49.929298 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:50.429919 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:50.929364 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:51.429403 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:51.929356 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:52.429410 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:52.929894 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:53.043365 1130949 kubeadm.go:1107] duration metric: took 12.416334145s to wait for elevateKubeSystemPrivileges
	W0328 01:07:53.043410 1130949 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 01:07:53.043419 1130949 kubeadm.go:393] duration metric: took 5m13.709259014s to StartCluster
	I0328 01:07:53.043445 1130949 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:07:53.043560 1130949 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:07:53.045798 1130949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:07:53.046158 1130949 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 01:07:53.047867 1130949 out.go:177] * Verifying Kubernetes components...
	I0328 01:07:53.046201 1130949 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 01:07:53.046412 1130949 config.go:182] Loaded profile config "embed-certs-808809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:07:53.049163 1130949 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-808809"
	I0328 01:07:53.049175 1130949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:07:53.049195 1130949 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-808809"
	W0328 01:07:53.049204 1130949 addons.go:243] addon storage-provisioner should already be in state true
	I0328 01:07:53.049230 1130949 host.go:66] Checking if "embed-certs-808809" exists ...
	I0328 01:07:53.049205 1130949 addons.go:69] Setting default-storageclass=true in profile "embed-certs-808809"
	I0328 01:07:53.049250 1130949 addons.go:69] Setting metrics-server=true in profile "embed-certs-808809"
	I0328 01:07:53.049271 1130949 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-808809"
	I0328 01:07:53.049309 1130949 addons.go:234] Setting addon metrics-server=true in "embed-certs-808809"
	W0328 01:07:53.049327 1130949 addons.go:243] addon metrics-server should already be in state true
	I0328 01:07:53.049371 1130949 host.go:66] Checking if "embed-certs-808809" exists ...
	I0328 01:07:53.049530 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.049569 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.049696 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.049729 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.049795 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.049838 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.067042 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36143
	I0328 01:07:53.067078 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33653
	I0328 01:07:53.067536 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.067599 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.068156 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.068184 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.068289 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.068315 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.068583 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.068669 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.069095 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.069121 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.069245 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.069276 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.069991 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43737
	I0328 01:07:53.070509 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.071078 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.071103 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.071480 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.071705 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.075617 1130949 addons.go:234] Setting addon default-storageclass=true in "embed-certs-808809"
	W0328 01:07:53.075659 1130949 addons.go:243] addon default-storageclass should already be in state true
	I0328 01:07:53.075703 1130949 host.go:66] Checking if "embed-certs-808809" exists ...
	I0328 01:07:53.075982 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.076011 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.085991 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I0328 01:07:53.086508 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.086724 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36033
	I0328 01:07:53.087105 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.087122 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.087158 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.087646 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.087667 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.087706 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.087922 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.088031 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.088225 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.089941 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:07:53.090168 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:07:53.091945 1130949 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:07:53.093023 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41613
	I0328 01:07:53.093537 1130949 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:07:53.093553 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 01:07:53.093563 1130949 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 01:07:53.095147 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 01:07:53.095165 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 01:07:53.093574 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:07:53.095185 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:07:53.093939 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.096301 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.096322 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.096662 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.097251 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.097306 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.098907 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.099014 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.099513 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:07:53.099546 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.099996 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:07:53.100126 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:07:53.100177 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:07:53.100187 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.100287 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:07:53.100392 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:07:53.100470 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:07:53.100576 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:07:53.100709 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:07:53.100796 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:07:53.114056 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41121
	I0328 01:07:53.114680 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.115279 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.115313 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.115721 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.116061 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.118022 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:07:53.118348 1130949 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 01:07:53.118370 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 01:07:53.118391 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:07:53.121337 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.121699 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:07:53.121728 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.121906 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:07:53.122084 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:07:53.122266 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:07:53.122414 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:07:53.242121 1130949 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:07:53.267118 1130949 node_ready.go:35] waiting up to 6m0s for node "embed-certs-808809" to be "Ready" ...
	I0328 01:07:53.276640 1130949 node_ready.go:49] node "embed-certs-808809" has status "Ready":"True"
	I0328 01:07:53.276670 1130949 node_ready.go:38] duration metric: took 9.513599ms for node "embed-certs-808809" to be "Ready" ...
	I0328 01:07:53.276683 1130949 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:07:53.283091 1130949 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-2rn6k" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:53.325201 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 01:07:53.325234 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 01:07:53.341335 1130949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 01:07:53.361084 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 01:07:53.361109 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 01:07:53.393089 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:07:53.393116 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 01:07:53.419245 1130949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:07:53.445663 1130949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:07:53.515515 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:53.515555 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:53.515871 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:53.515891 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:53.515901 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:53.515910 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:53.516173 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:53.516253 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:53.516212 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:53.527854 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:53.527882 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:53.528152 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:53.528173 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:53.528220 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.159164 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159192 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159264 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159292 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159523 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.159597 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.159619 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.159637 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.159648 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159658 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159660 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.159667 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.159688 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159696 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159981 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.160037 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.160056 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.160062 1130949 addons.go:470] Verifying addon metrics-server=true in "embed-certs-808809"
	I0328 01:07:54.160088 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.160090 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.160106 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.162879 1130949 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0328 01:07:54.022449 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:07:54.022704 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:07:52.372314 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:54.372913 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:54.164263 1130949 addons.go:505] duration metric: took 1.11806212s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
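The addon-enable phase logged above reduces to two kubectl apply calls executed over SSH with the bundled binary; the same commands, reformatted here only for readability, with the paths and version exactly as they appear in the log:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.29.3/kubectl apply \
	    -f /etc/kubernetes/addons/metrics-apiservice.yaml \
	    -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
	    -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
	    -f /etc/kubernetes/addons/metrics-server-service.yaml
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.29.3/kubectl apply \
	    -f /etc/kubernetes/addons/storage-provisioner.yaml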
	I0328 01:07:55.294728 1130949 pod_ready.go:102] pod "coredns-76f75df574-2rn6k" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:55.790690 1130949 pod_ready.go:92] pod "coredns-76f75df574-2rn6k" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.790717 1130949 pod_ready.go:81] duration metric: took 2.50759161s for pod "coredns-76f75df574-2rn6k" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.790726 1130949 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-pgcdh" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.796249 1130949 pod_ready.go:92] pod "coredns-76f75df574-pgcdh" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.796279 1130949 pod_ready.go:81] duration metric: took 5.54233ms for pod "coredns-76f75df574-pgcdh" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.796291 1130949 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.801226 1130949 pod_ready.go:92] pod "etcd-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.801254 1130949 pod_ready.go:81] duration metric: took 4.956106ms for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.801263 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.814571 1130949 pod_ready.go:92] pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.814599 1130949 pod_ready.go:81] duration metric: took 13.328662ms for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.814613 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.825995 1130949 pod_ready.go:92] pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.826022 1130949 pod_ready.go:81] duration metric: took 11.401096ms for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.826035 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tjbhs" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.188116 1130949 pod_ready.go:92] pod "kube-proxy-tjbhs" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:56.188147 1130949 pod_ready.go:81] duration metric: took 362.103962ms for pod "kube-proxy-tjbhs" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.188161 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.588294 1130949 pod_ready.go:92] pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:56.588334 1130949 pod_ready.go:81] duration metric: took 400.16517ms for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.588347 1130949 pod_ready.go:38] duration metric: took 3.311651338s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:07:56.588369 1130949 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:07:56.588445 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:07:56.606404 1130949 api_server.go:72] duration metric: took 3.560197315s to wait for apiserver process to appear ...
	I0328 01:07:56.606435 1130949 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:07:56.606460 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:07:56.612218 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 200:
	ok
	I0328 01:07:56.613459 1130949 api_server.go:141] control plane version: v1.29.3
	I0328 01:07:56.613481 1130949 api_server.go:131] duration metric: took 7.039378ms to wait for apiserver health ...
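The healthz wait logged above is a plain HTTPS GET against the apiserver endpoint that returns the literal body "ok" on success. A minimal manual probe, assuming the same endpoint shown in the log and skipping certificate verification only for brevity (minikube itself validates against the cluster CA):

	# Hypothetical manual check; IP and port are taken from the log above.
	curl -sk https://192.168.72.210:8443/healthz
	# prints: ok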
	I0328 01:07:56.613490 1130949 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:07:56.793192 1130949 system_pods.go:59] 9 kube-system pods found
	I0328 01:07:56.793227 1130949 system_pods.go:61] "coredns-76f75df574-2rn6k" [2a77c778-dd83-4e2e-b45a-ca16e3922b45] Running
	I0328 01:07:56.793232 1130949 system_pods.go:61] "coredns-76f75df574-pgcdh" [52452b24-490e-4999-b700-198c6f9b2fa1] Running
	I0328 01:07:56.793236 1130949 system_pods.go:61] "etcd-embed-certs-808809" [cba526ab-c8ca-4b90-abb8-533d1bf6cb4f] Running
	I0328 01:07:56.793239 1130949 system_pods.go:61] "kube-apiserver-embed-certs-808809" [02934ac6-1e07-4cbe-ba2f-4149e59d6044] Running
	I0328 01:07:56.793243 1130949 system_pods.go:61] "kube-controller-manager-embed-certs-808809" [b3350ff9-7abb-407e-939c-fcde61186c17] Running
	I0328 01:07:56.793246 1130949 system_pods.go:61] "kube-proxy-tjbhs" [cdb30ca1-5165-4e24-888a-df79af7987d0] Running
	I0328 01:07:56.793249 1130949 system_pods.go:61] "kube-scheduler-embed-certs-808809" [5b52a22d-6ffb-468b-b7b9-8f89b0dee3b8] Running
	I0328 01:07:56.793255 1130949 system_pods.go:61] "metrics-server-57f55c9bc5-bqbfl" [8434fd7d-838b-4cf2-96a3-e4d613633871] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:07:56.793260 1130949 system_pods.go:61] "storage-provisioner" [20c1951e-7da8-4025-bbcf-2da60f87f3ab] Running
	I0328 01:07:56.793268 1130949 system_pods.go:74] duration metric: took 179.77213ms to wait for pod list to return data ...
	I0328 01:07:56.793275 1130949 default_sa.go:34] waiting for default service account to be created ...
	I0328 01:07:56.988234 1130949 default_sa.go:45] found service account: "default"
	I0328 01:07:56.988274 1130949 default_sa.go:55] duration metric: took 194.984089ms for default service account to be created ...
	I0328 01:07:56.988288 1130949 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 01:07:57.192153 1130949 system_pods.go:86] 9 kube-system pods found
	I0328 01:07:57.192188 1130949 system_pods.go:89] "coredns-76f75df574-2rn6k" [2a77c778-dd83-4e2e-b45a-ca16e3922b45] Running
	I0328 01:07:57.192194 1130949 system_pods.go:89] "coredns-76f75df574-pgcdh" [52452b24-490e-4999-b700-198c6f9b2fa1] Running
	I0328 01:07:57.192200 1130949 system_pods.go:89] "etcd-embed-certs-808809" [cba526ab-c8ca-4b90-abb8-533d1bf6cb4f] Running
	I0328 01:07:57.192205 1130949 system_pods.go:89] "kube-apiserver-embed-certs-808809" [02934ac6-1e07-4cbe-ba2f-4149e59d6044] Running
	I0328 01:07:57.192210 1130949 system_pods.go:89] "kube-controller-manager-embed-certs-808809" [b3350ff9-7abb-407e-939c-fcde61186c17] Running
	I0328 01:07:57.192214 1130949 system_pods.go:89] "kube-proxy-tjbhs" [cdb30ca1-5165-4e24-888a-df79af7987d0] Running
	I0328 01:07:57.192218 1130949 system_pods.go:89] "kube-scheduler-embed-certs-808809" [5b52a22d-6ffb-468b-b7b9-8f89b0dee3b8] Running
	I0328 01:07:57.192225 1130949 system_pods.go:89] "metrics-server-57f55c9bc5-bqbfl" [8434fd7d-838b-4cf2-96a3-e4d613633871] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:07:57.192230 1130949 system_pods.go:89] "storage-provisioner" [20c1951e-7da8-4025-bbcf-2da60f87f3ab] Running
	I0328 01:07:57.192239 1130949 system_pods.go:126] duration metric: took 203.942878ms to wait for k8s-apps to be running ...
	I0328 01:07:57.192249 1130949 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:07:57.192301 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:07:57.209840 1130949 system_svc.go:56] duration metric: took 17.576605ms WaitForService to wait for kubelet
	I0328 01:07:57.209883 1130949 kubeadm.go:576] duration metric: took 4.163683877s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:07:57.209918 1130949 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:07:57.388321 1130949 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:07:57.388347 1130949 node_conditions.go:123] node cpu capacity is 2
	I0328 01:07:57.388357 1130949 node_conditions.go:105] duration metric: took 178.433633ms to run NodePressure ...
	I0328 01:07:57.388370 1130949 start.go:240] waiting for startup goroutines ...
	I0328 01:07:57.388377 1130949 start.go:245] waiting for cluster config update ...
	I0328 01:07:57.388387 1130949 start.go:254] writing updated cluster config ...
	I0328 01:07:57.388784 1130949 ssh_runner.go:195] Run: rm -f paused
	I0328 01:07:57.446699 1130949 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0328 01:07:57.448951 1130949 out.go:177] * Done! kubectl is now configured to use "embed-certs-808809" cluster and "default" namespace by default
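Once the profile is reported as Done, the kubeconfig context points at the new cluster; a quick sanity check against the objects listed in the surrounding log (a sketch for illustration, not part of the test run) would be:

	kubectl config current-context      # embed-certs-808809
	kubectl get nodes                   # embed-certs-808809   Ready
	kubectl -n kube-system get pods     # coredns, etcd, kube-apiserver, kube-proxy, metrics-server, ...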
	I0328 01:07:56.373123 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:58.872454 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:04.023273 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:08:04.023535 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:08:01.372711 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:03.877734 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:06.374031 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:07.366164 1130827 pod_ready.go:81] duration metric: took 4m0.000887668s for pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace to be "Ready" ...
	E0328 01:08:07.366245 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0328 01:08:07.366271 1130827 pod_ready.go:38] duration metric: took 4m7.906522585s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:08:07.366301 1130827 kubeadm.go:591] duration metric: took 4m15.27169704s to restartPrimaryControlPlane
	W0328 01:08:07.366368 1130827 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:08:07.366406 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:08:16.281280 1131600 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.245904746s)
	I0328 01:08:16.281365 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:08:16.298463 1131600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:08:16.310406 1131600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:08:16.321387 1131600 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:08:16.321415 1131600 kubeadm.go:156] found existing configuration files:
	
	I0328 01:08:16.321475 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0328 01:08:16.331965 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:08:16.332033 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:08:16.343030 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0328 01:08:16.353193 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:08:16.353254 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:08:16.363865 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0328 01:08:16.374276 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:08:16.374346 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:08:16.385300 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0328 01:08:16.396118 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:08:16.396181 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
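The grep/rm pairs above implement the stale-config cleanup: each kubeconfig under /etc/kubernetes survives only if it already references https://control-plane.minikube.internal:8444. A compact sketch of the same logic, assuming the same four files as in the log (an illustration, not minikube's actual implementation):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only when it targets the expected control-plane endpoint
	  if ! sudo grep -q 'https://control-plane.minikube.internal:8444' "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done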
	I0328 01:08:16.406896 1131600 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:08:16.626615 1131600 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:08:24.024091 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:08:24.024388 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:08:25.420974 1131600 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0328 01:08:25.421059 1131600 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:08:25.421154 1131600 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:08:25.421300 1131600 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:08:25.421547 1131600 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:08:25.421649 1131600 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:08:25.423435 1131600 out.go:204]   - Generating certificates and keys ...
	I0328 01:08:25.423549 1131600 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:08:25.423630 1131600 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:08:25.423749 1131600 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:08:25.423844 1131600 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:08:25.423956 1131600 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:08:25.424058 1131600 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:08:25.424166 1131600 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:08:25.424260 1131600 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:08:25.424375 1131600 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:08:25.424489 1131600 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:08:25.424552 1131600 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:08:25.424642 1131600 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:08:25.424700 1131600 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:08:25.424765 1131600 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:08:25.424832 1131600 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:08:25.424920 1131600 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:08:25.424982 1131600 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:08:25.425106 1131600 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:08:25.425207 1131600 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:08:25.426863 1131600 out.go:204]   - Booting up control plane ...
	I0328 01:08:25.427001 1131600 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:08:25.427108 1131600 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:08:25.427205 1131600 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:08:25.427327 1131600 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:08:25.427431 1131600 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:08:25.427491 1131600 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:08:25.427686 1131600 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:08:25.427784 1131600 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003000 seconds
	I0328 01:08:25.427897 1131600 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 01:08:25.428032 1131600 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 01:08:25.428109 1131600 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 01:08:25.428325 1131600 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-283961 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 01:08:25.428408 1131600 kubeadm.go:309] [bootstrap-token] Using token: g6jusr.8nbqw788gjbu8fwz
	I0328 01:08:25.430595 1131600 out.go:204]   - Configuring RBAC rules ...
	I0328 01:08:25.430734 1131600 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 01:08:25.430837 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 01:08:25.430981 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 01:08:25.431163 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 01:08:25.431357 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 01:08:25.431481 1131600 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 01:08:25.431670 1131600 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 01:08:25.431726 1131600 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 01:08:25.431767 1131600 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 01:08:25.431774 1131600 kubeadm.go:309] 
	I0328 01:08:25.431819 1131600 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 01:08:25.431829 1131600 kubeadm.go:309] 
	I0328 01:08:25.431893 1131600 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 01:08:25.431900 1131600 kubeadm.go:309] 
	I0328 01:08:25.431934 1131600 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 01:08:25.432028 1131600 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 01:08:25.432089 1131600 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 01:08:25.432114 1131600 kubeadm.go:309] 
	I0328 01:08:25.432178 1131600 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 01:08:25.432186 1131600 kubeadm.go:309] 
	I0328 01:08:25.432245 1131600 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 01:08:25.432255 1131600 kubeadm.go:309] 
	I0328 01:08:25.432342 1131600 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 01:08:25.432454 1131600 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 01:08:25.432566 1131600 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 01:08:25.432576 1131600 kubeadm.go:309] 
	I0328 01:08:25.432719 1131600 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 01:08:25.432812 1131600 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 01:08:25.432825 1131600 kubeadm.go:309] 
	I0328 01:08:25.432914 1131600 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token g6jusr.8nbqw788gjbu8fwz \
	I0328 01:08:25.433018 1131600 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 \
	I0328 01:08:25.433052 1131600 kubeadm.go:309] 	--control-plane 
	I0328 01:08:25.433058 1131600 kubeadm.go:309] 
	I0328 01:08:25.433135 1131600 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 01:08:25.433143 1131600 kubeadm.go:309] 
	I0328 01:08:25.433222 1131600 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token g6jusr.8nbqw788gjbu8fwz \
	I0328 01:08:25.433318 1131600 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 
	I0328 01:08:25.433337 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:08:25.433346 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:08:25.434943 1131600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:08:25.436103 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:08:25.483149 1131600 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:08:25.508422 1131600 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 01:08:25.508514 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:25.508518 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-283961 minikube.k8s.io/updated_at=2024_03_28T01_08_25_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=default-k8s-diff-port-283961 minikube.k8s.io/primary=true
	I0328 01:08:25.537955 1131600 ops.go:34] apiserver oom_adj: -16
	I0328 01:08:25.738462 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:26.239473 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:26.739478 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:27.238883 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:27.738830 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:28.239281 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:28.738643 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:29.238703 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:29.739025 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:30.239127 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:30.739473 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:31.239461 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:31.739480 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:32.239525 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:32.738543 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:33.239468 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:33.739475 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:34.238558 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:34.739550 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:35.239400 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:35.738766 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:36.239384 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:36.738797 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:37.238736 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:37.739543 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:37.850963 1131600 kubeadm.go:1107] duration metric: took 12.342521507s to wait for elevateKubeSystemPrivileges
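The repeated "kubectl get sa default" invocations above are a readiness poll: the elevateKubeSystemPrivileges step is considered complete once the default service account exists in the new cluster (about 12.3s here). A hedged shell equivalent of that wait loop, reusing the binary and kubeconfig paths from the log:

	# Illustrative poll loop; the retry count and sleep interval are assumptions.
	for i in $(seq 1 120); do
	  if sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default \
	       --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; then
	    echo "default service account present"
	    break
	  fi
	  sleep 0.5
	done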
	W0328 01:08:37.851011 1131600 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 01:08:37.851024 1131600 kubeadm.go:393] duration metric: took 5m17.339661641s to StartCluster
	I0328 01:08:37.851048 1131600 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:08:37.851164 1131600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:08:37.853862 1131600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:08:37.854264 1131600 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 01:08:37.856170 1131600 out.go:177] * Verifying Kubernetes components...
	I0328 01:08:37.854341 1131600 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 01:08:37.854447 1131600 config.go:182] Loaded profile config "default-k8s-diff-port-283961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:08:37.857860 1131600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:08:37.857864 1131600 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-283961"
	I0328 01:08:37.857878 1131600 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-283961"
	I0328 01:08:37.857885 1131600 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-283961"
	I0328 01:08:37.857909 1131600 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-283961"
	I0328 01:08:37.857912 1131600 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-283961"
	W0328 01:08:37.857923 1131600 addons.go:243] addon storage-provisioner should already be in state true
	I0328 01:08:37.857928 1131600 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-283961"
	W0328 01:08:37.857941 1131600 addons.go:243] addon metrics-server should already be in state true
	I0328 01:08:37.857970 1131600 host.go:66] Checking if "default-k8s-diff-port-283961" exists ...
	I0328 01:08:37.857983 1131600 host.go:66] Checking if "default-k8s-diff-port-283961" exists ...
	I0328 01:08:37.858330 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.858363 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.858403 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.858436 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.858335 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.858509 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.881197 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32799
	I0328 01:08:37.881230 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35101
	I0328 01:08:37.881244 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0328 01:08:37.881857 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.881882 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.882021 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.882460 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.882482 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.882523 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.882540 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.882585 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.882601 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.882934 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.882992 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.883007 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.883239 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.883592 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.883620 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.883625 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.883644 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.887335 1131600 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-283961"
	W0328 01:08:37.887359 1131600 addons.go:243] addon default-storageclass should already be in state true
	I0328 01:08:37.887390 1131600 host.go:66] Checking if "default-k8s-diff-port-283961" exists ...
	I0328 01:08:37.887745 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.887779 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.901416 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37383
	I0328 01:08:37.901909 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.902530 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.902559 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.902967 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.903211 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.904529 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36633
	I0328 01:08:37.905034 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.905268 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:08:37.907486 1131600 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:08:37.905802 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.909062 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.909180 1131600 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:08:37.909196 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 01:08:37.909218 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:08:37.909555 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.909794 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.911251 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33539
	I0328 01:08:37.911845 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.911995 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:08:37.913838 1131600 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 01:08:37.912457 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.913039 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.913804 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:08:37.915256 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.915268 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:08:37.915288 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 01:08:37.915297 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.915303 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 01:08:37.915321 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:08:37.915492 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:08:37.915674 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:08:37.915894 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:08:37.916689 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.917364 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.917410 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.918302 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.918651 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:08:37.918678 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.918944 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:08:37.919117 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:08:37.919267 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:08:37.919386 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:08:37.935233 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45659
	I0328 01:08:37.935750 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.936283 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.936301 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.936691 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.936872 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.938736 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:08:37.939016 1131600 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 01:08:37.939042 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 01:08:37.939065 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:08:37.941653 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.941967 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:08:37.941991 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.942199 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:08:37.942405 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:08:37.942575 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:08:37.942761 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:08:38.109817 1131600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:08:38.134996 1131600 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-283961" to be "Ready" ...
	I0328 01:08:38.158252 1131600 node_ready.go:49] node "default-k8s-diff-port-283961" has status "Ready":"True"
	I0328 01:08:38.158286 1131600 node_ready.go:38] duration metric: took 23.249221ms for node "default-k8s-diff-port-283961" to be "Ready" ...
	I0328 01:08:38.158305 1131600 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:08:38.170391 1131600 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-gdv5x" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:38.277223 1131600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:08:38.299923 1131600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 01:08:38.300686 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 01:08:38.300707 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 01:08:38.355800 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 01:08:38.355837 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 01:08:38.464742 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:08:38.464769 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 01:08:38.542696 1131600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:08:39.644116 1131600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.344141889s)
	I0328 01:08:39.644184 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644189 1131600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.366934481s)
	I0328 01:08:39.644197 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644210 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644219 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644620 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.644644 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.644654 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644664 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644846 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.644865 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.644890 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644905 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644987 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.645004 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.645154 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.645171 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.708104 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.708143 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.708543 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.708567 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.739487 1131600 pod_ready.go:92] pod "coredns-76f75df574-gdv5x" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.739515 1131600 pod_ready.go:81] duration metric: took 1.569088177s for pod "coredns-76f75df574-gdv5x" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.739526 1131600 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-qzcfp" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.797314 1131600 pod_ready.go:92] pod "coredns-76f75df574-qzcfp" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.797347 1131600 pod_ready.go:81] duration metric: took 57.813218ms for pod "coredns-76f75df574-qzcfp" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.797366 1131600 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.830784 1131600 pod_ready.go:92] pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.830865 1131600 pod_ready.go:81] duration metric: took 33.488753ms for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.830886 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.852459 1131600 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.852489 1131600 pod_ready.go:81] duration metric: took 21.594748ms for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.852501 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.862630 1131600 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.862658 1131600 pod_ready.go:81] duration metric: took 10.149867ms for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.862674 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-js7j2" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.893124 1131600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.350363727s)
	I0328 01:08:39.893191 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.893216 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.893559 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Closing plugin on server side
	I0328 01:08:39.893568 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.893617 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.893634 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.893646 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.893985 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Closing plugin on server side
	I0328 01:08:39.893985 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.894013 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.894031 1131600 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-283961"
	I0328 01:08:39.896978 1131600 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0328 01:08:39.898636 1131600 addons.go:505] duration metric: took 2.044292782s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0328 01:08:40.138962 1131600 pod_ready.go:92] pod "kube-proxy-js7j2" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:40.138994 1131600 pod_ready.go:81] duration metric: took 276.313147ms for pod "kube-proxy-js7j2" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:40.139006 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:40.538892 1131600 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:40.538917 1131600 pod_ready.go:81] duration metric: took 399.903327ms for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:40.538925 1131600 pod_ready.go:38] duration metric: took 2.380606168s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
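The pod_ready.go entries above poll each system-critical pod until its Ready condition reports True, with a 6m0s budget per pod. A minimal client-go sketch of that style of wait; the kubeconfig path is an illustrative assumption and the pod name is simply the coredns pod from the log, not code taken from minikube:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod has a Ready condition with status True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Illustrative kubeconfig path; the tests resolve theirs differently.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 2s for up to 6 minutes, matching the 6m0s budget in the log.
	err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-76f75df574-qzcfp", metav1.GetOptions{})
		if err != nil {
			return false, nil // transient error; keep retrying
		}
		return isPodReady(pod), nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}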
	I0328 01:08:40.538943 1131600 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:08:40.539009 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:08:40.561639 1131600 api_server.go:72] duration metric: took 2.707321816s to wait for apiserver process to appear ...
	I0328 01:08:40.561681 1131600 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:08:40.561709 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:08:40.568521 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 200:
	ok
	I0328 01:08:40.570016 1131600 api_server.go:141] control plane version: v1.29.3
	I0328 01:08:40.570060 1131600 api_server.go:131] duration metric: took 8.369036ms to wait for apiserver health ...
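The healthz probe logged just above is an HTTPS GET against the apiserver endpoint that expects a 200 response with the literal body "ok". A rough standalone sketch of such a probe; skipping TLS verification here is an illustrative shortcut, not necessarily how minikube configures its client:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log lines above.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.224:8444/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
	if resp.StatusCode == http.StatusOK && string(body) == "ok" {
		fmt.Println("apiserver is healthy")
	}
}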
	I0328 01:08:40.570071 1131600 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:08:39.696094 1130827 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.32965227s)
	I0328 01:08:39.696193 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:08:39.717556 1130827 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:08:39.730434 1130827 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:08:39.746521 1130827 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:08:39.746567 1130827 kubeadm.go:156] found existing configuration files:
	
	I0328 01:08:39.746644 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:08:39.758252 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:08:39.758352 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:08:39.771929 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:08:39.785312 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:08:39.785400 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:08:39.800685 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:08:39.814982 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:08:39.815073 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:08:39.828804 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:08:39.841984 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:08:39.842074 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
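The kubeadm.go:162 lines above follow a simple rule: for each kubeconfig under /etc/kubernetes, keep it only if it already references https://control-plane.minikube.internal:8443, otherwise remove it before re-running kubeadm init. A shell-out sketch of that loop; the real code runs these commands over SSH, while this illustration executes them locally:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is absent or the file is missing.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
			// Mirrors the `sudo rm -f <conf>` calls in the log.
			if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
				fmt.Printf("failed to remove %s: %v\n", f, err)
			}
		}
	}
}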
	I0328 01:08:39.854502 1130827 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:08:40.089742 1130827 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:08:40.742900 1131600 system_pods.go:59] 9 kube-system pods found
	I0328 01:08:40.742938 1131600 system_pods.go:61] "coredns-76f75df574-gdv5x" [5b4b835c-ae9d-4eff-ab37-6ccb7e36a748] Running
	I0328 01:08:40.742945 1131600 system_pods.go:61] "coredns-76f75df574-qzcfp" [8e7bfa94-f249-4f7a-be7b-9a615810c956] Running
	I0328 01:08:40.742951 1131600 system_pods.go:61] "etcd-default-k8s-diff-port-283961" [eb26a2e2-882b-4bc0-9ca1-7fe88b1c6c7e] Running
	I0328 01:08:40.742958 1131600 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-283961" [1c31e7a3-507e-43ea-96cf-1d182a1e0875] Running
	I0328 01:08:40.742964 1131600 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-283961" [5bbc6650-b37c-4b10-bc6e-cceb214f8881] Running
	I0328 01:08:40.742968 1131600 system_pods.go:61] "kube-proxy-js7j2" [1f31a6ee-9417-4f4a-ba0b-fdbab6a9169d] Running
	I0328 01:08:40.742972 1131600 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-283961" [3a5691f7-d6d4-45e7-aea7-94fe1cfa541a] Running
	I0328 01:08:40.742980 1131600 system_pods.go:61] "metrics-server-57f55c9bc5-gkv67" [7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:08:40.742986 1131600 system_pods.go:61] "storage-provisioner" [cb80efe2-521f-45d5-84e7-f6dc216b4c6d] Running
	I0328 01:08:40.742998 1131600 system_pods.go:74] duration metric: took 172.918886ms to wait for pod list to return data ...
	I0328 01:08:40.743010 1131600 default_sa.go:34] waiting for default service account to be created ...
	I0328 01:08:40.939208 1131600 default_sa.go:45] found service account: "default"
	I0328 01:08:40.939255 1131600 default_sa.go:55] duration metric: took 196.220048ms for default service account to be created ...
	I0328 01:08:40.939266 1131600 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 01:08:41.144986 1131600 system_pods.go:86] 9 kube-system pods found
	I0328 01:08:41.145023 1131600 system_pods.go:89] "coredns-76f75df574-gdv5x" [5b4b835c-ae9d-4eff-ab37-6ccb7e36a748] Running
	I0328 01:08:41.145030 1131600 system_pods.go:89] "coredns-76f75df574-qzcfp" [8e7bfa94-f249-4f7a-be7b-9a615810c956] Running
	I0328 01:08:41.145034 1131600 system_pods.go:89] "etcd-default-k8s-diff-port-283961" [eb26a2e2-882b-4bc0-9ca1-7fe88b1c6c7e] Running
	I0328 01:08:41.145039 1131600 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-283961" [1c31e7a3-507e-43ea-96cf-1d182a1e0875] Running
	I0328 01:08:41.145043 1131600 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-283961" [5bbc6650-b37c-4b10-bc6e-cceb214f8881] Running
	I0328 01:08:41.145047 1131600 system_pods.go:89] "kube-proxy-js7j2" [1f31a6ee-9417-4f4a-ba0b-fdbab6a9169d] Running
	I0328 01:08:41.145051 1131600 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-283961" [3a5691f7-d6d4-45e7-aea7-94fe1cfa541a] Running
	I0328 01:08:41.145058 1131600 system_pods.go:89] "metrics-server-57f55c9bc5-gkv67" [7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:08:41.145062 1131600 system_pods.go:89] "storage-provisioner" [cb80efe2-521f-45d5-84e7-f6dc216b4c6d] Running
	I0328 01:08:41.145072 1131600 system_pods.go:126] duration metric: took 205.800485ms to wait for k8s-apps to be running ...
	I0328 01:08:41.145083 1131600 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:08:41.145131 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:08:41.163220 1131600 system_svc.go:56] duration metric: took 18.120266ms WaitForService to wait for kubelet
	I0328 01:08:41.163255 1131600 kubeadm.go:576] duration metric: took 3.308947131s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:08:41.163280 1131600 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:08:41.339219 1131600 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:08:41.339247 1131600 node_conditions.go:123] node cpu capacity is 2
	I0328 01:08:41.339292 1131600 node_conditions.go:105] duration metric: took 176.004328ms to run NodePressure ...
	I0328 01:08:41.339306 1131600 start.go:240] waiting for startup goroutines ...
	I0328 01:08:41.339317 1131600 start.go:245] waiting for cluster config update ...
	I0328 01:08:41.339334 1131600 start.go:254] writing updated cluster config ...
	I0328 01:08:41.339656 1131600 ssh_runner.go:195] Run: rm -f paused
	I0328 01:08:41.399111 1131600 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0328 01:08:41.401360 1131600 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-283961" cluster and "default" namespace by default
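The node_conditions.go entries a few lines up read per-node capacity (ephemeral storage and CPU) from the API before the run finishes. A short client-go sketch of reading those same fields; the kubeconfig path is an illustrative assumption:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		// The log reports these as "node storage ephemeral capacity" and "node cpu capacity".
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}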
	I0328 01:08:49.653091 1130827 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-beta.0
	I0328 01:08:49.653205 1130827 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:08:49.653327 1130827 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:08:49.653468 1130827 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:08:49.653576 1130827 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:08:49.653666 1130827 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:08:49.656419 1130827 out.go:204]   - Generating certificates and keys ...
	I0328 01:08:49.656503 1130827 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:08:49.656583 1130827 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:08:49.656669 1130827 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:08:49.656775 1130827 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:08:49.656903 1130827 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:08:49.656973 1130827 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:08:49.657057 1130827 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:08:49.657138 1130827 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:08:49.657246 1130827 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:08:49.657362 1130827 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:08:49.657415 1130827 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:08:49.657510 1130827 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:08:49.657601 1130827 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:08:49.657713 1130827 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:08:49.657811 1130827 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:08:49.657900 1130827 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:08:49.657980 1130827 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:08:49.658074 1130827 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:08:49.658160 1130827 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:08:49.659588 1130827 out.go:204]   - Booting up control plane ...
	I0328 01:08:49.659669 1130827 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:08:49.659771 1130827 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:08:49.659855 1130827 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:08:49.659962 1130827 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:08:49.660075 1130827 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:08:49.660139 1130827 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:08:49.660309 1130827 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0328 01:08:49.660426 1130827 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0328 01:08:49.660518 1130827 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.594495ms
	I0328 01:08:49.660610 1130827 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0328 01:08:49.660691 1130827 kubeadm.go:309] [api-check] The API server is healthy after 5.502996727s
	I0328 01:08:49.660830 1130827 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 01:08:49.660975 1130827 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 01:08:49.661028 1130827 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 01:08:49.661198 1130827 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-248059 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 01:08:49.661283 1130827 kubeadm.go:309] [bootstrap-token] Using token: 4jnfa0.q3dre6ogqbxtw8j0
	I0328 01:08:49.662907 1130827 out.go:204]   - Configuring RBAC rules ...
	I0328 01:08:49.663014 1130827 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 01:08:49.663090 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 01:08:49.663239 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 01:08:49.663379 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 01:08:49.663484 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 01:08:49.663576 1130827 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 01:08:49.663688 1130827 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 01:08:49.663750 1130827 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 01:08:49.663811 1130827 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 01:08:49.663820 1130827 kubeadm.go:309] 
	I0328 01:08:49.663871 1130827 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 01:08:49.663877 1130827 kubeadm.go:309] 
	I0328 01:08:49.663976 1130827 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 01:08:49.663984 1130827 kubeadm.go:309] 
	I0328 01:08:49.664004 1130827 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 01:08:49.664080 1130827 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 01:08:49.664144 1130827 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 01:08:49.664151 1130827 kubeadm.go:309] 
	I0328 01:08:49.664202 1130827 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 01:08:49.664209 1130827 kubeadm.go:309] 
	I0328 01:08:49.664246 1130827 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 01:08:49.664252 1130827 kubeadm.go:309] 
	I0328 01:08:49.664301 1130827 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 01:08:49.664370 1130827 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 01:08:49.664436 1130827 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 01:08:49.664444 1130827 kubeadm.go:309] 
	I0328 01:08:49.664515 1130827 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 01:08:49.664600 1130827 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 01:08:49.664607 1130827 kubeadm.go:309] 
	I0328 01:08:49.664678 1130827 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4jnfa0.q3dre6ogqbxtw8j0 \
	I0328 01:08:49.664764 1130827 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 \
	I0328 01:08:49.664783 1130827 kubeadm.go:309] 	--control-plane 
	I0328 01:08:49.664789 1130827 kubeadm.go:309] 
	I0328 01:08:49.664856 1130827 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 01:08:49.664863 1130827 kubeadm.go:309] 
	I0328 01:08:49.664938 1130827 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4jnfa0.q3dre6ogqbxtw8j0 \
	I0328 01:08:49.665073 1130827 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 
	I0328 01:08:49.665117 1130827 cni.go:84] Creating CNI manager for ""
	I0328 01:08:49.665130 1130827 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:08:49.667556 1130827 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:08:49.668776 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:08:49.680262 1130827 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
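The 457-byte /etc/cni/net.d/1-k8s.conflist copied above is not reproduced in the log. Purely as an assumed illustration of what a bridge CNI config list of this kind generally looks like (the plugin fields and subnet below are placeholders, not minikube's actual file):

package main

import "os"

// sampleConflist is a generic bridge CNI config list; the values are
// placeholders for illustration and are NOT the contents of minikube's
// 1-k8s.conflist, which the log does not show.
const sampleConflist = `{
  "cniVersion": "0.3.1",
  "name": "k8s",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	// Written to the same path the log shows minikube targeting.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(sampleConflist), 0o644); err != nil {
		panic(err)
	}
}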
	I0328 01:08:49.701490 1130827 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 01:08:49.701557 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:49.701606 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-248059 minikube.k8s.io/updated_at=2024_03_28T01_08_49_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=no-preload-248059 minikube.k8s.io/primary=true
	I0328 01:08:49.734009 1130827 ops.go:34] apiserver oom_adj: -16
	I0328 01:08:49.901866 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:50.402635 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:50.902480 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:51.402417 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:51.902253 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:52.402411 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:52.901926 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:53.402394 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:53.902738 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:54.401878 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:54.901920 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:55.401878 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:55.902140 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:56.402863 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:56.901970 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:57.402088 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:57.901869 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:58.402056 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:58.902333 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:59.402753 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:59.902930 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:00.402623 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:00.901863 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:01.402264 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:01.902054 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:02.402212 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:02.503310 1130827 kubeadm.go:1107] duration metric: took 12.80181586s to wait for elevateKubeSystemPrivileges
	W0328 01:09:02.503352 1130827 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 01:09:02.503362 1130827 kubeadm.go:393] duration metric: took 5m10.46697508s to StartCluster
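The repeated `kubectl get sa default` calls above are a roughly half-second poll for the default service account to exist before kube-system privileges are elevated. A minimal client-go version of that wait; the kubeconfig path is the one the logged kubectl calls use, everything else is an illustrative sketch:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Roughly matches the ~500ms cadence of the retries in the log.
	err = wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if err != nil {
			return false, nil // not created yet; keep polling
		}
		return true, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("default service account exists")
}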
	I0328 01:09:02.503380 1130827 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:09:02.503482 1130827 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:09:02.505909 1130827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:09:02.506302 1130827 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 01:09:02.508103 1130827 out.go:177] * Verifying Kubernetes components...
	I0328 01:09:02.506385 1130827 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 01:09:02.506502 1130827 config.go:182] Loaded profile config "no-preload-248059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0328 01:09:02.509509 1130827 addons.go:69] Setting default-storageclass=true in profile "no-preload-248059"
	I0328 01:09:02.509519 1130827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:09:02.509517 1130827 addons.go:69] Setting metrics-server=true in profile "no-preload-248059"
	I0328 01:09:02.509542 1130827 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-248059"
	I0328 01:09:02.509559 1130827 addons.go:234] Setting addon metrics-server=true in "no-preload-248059"
	W0328 01:09:02.509580 1130827 addons.go:243] addon metrics-server should already be in state true
	I0328 01:09:02.509509 1130827 addons.go:69] Setting storage-provisioner=true in profile "no-preload-248059"
	I0328 01:09:02.509623 1130827 host.go:66] Checking if "no-preload-248059" exists ...
	I0328 01:09:02.509636 1130827 addons.go:234] Setting addon storage-provisioner=true in "no-preload-248059"
	W0328 01:09:02.509690 1130827 addons.go:243] addon storage-provisioner should already be in state true
	I0328 01:09:02.509729 1130827 host.go:66] Checking if "no-preload-248059" exists ...
	I0328 01:09:02.510005 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.510009 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.510049 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.510050 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.510053 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.510085 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.528082 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42381
	I0328 01:09:02.528124 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43929
	I0328 01:09:02.528714 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.528738 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.529081 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34573
	I0328 01:09:02.529378 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.529397 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.529444 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.529464 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.529465 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.529791 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.529849 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.529948 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.529965 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.529950 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.530389 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.530437 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.530472 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.531004 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.531058 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.534108 1130827 addons.go:234] Setting addon default-storageclass=true in "no-preload-248059"
	W0328 01:09:02.534134 1130827 addons.go:243] addon default-storageclass should already be in state true
	I0328 01:09:02.534173 1130827 host.go:66] Checking if "no-preload-248059" exists ...
	I0328 01:09:02.534563 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.534592 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.546812 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0328 01:09:02.547478 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.547999 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.548031 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.548370 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.548616 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.549185 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37367
	I0328 01:09:02.549663 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.550365 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.550390 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.550772 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.550787 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:09:02.550977 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.553075 1130827 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 01:09:02.554750 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 01:09:02.554769 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 01:09:02.552577 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:09:02.554788 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:09:02.553550 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45153
	I0328 01:09:02.556534 1130827 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:09:02.555339 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.558480 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.563734 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:09:02.563773 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.563823 1130827 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:09:02.563846 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 01:09:02.563876 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:09:02.564584 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.564604 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.564633 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:09:02.564933 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:09:02.565025 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.565458 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:09:02.565593 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.565617 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.565745 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:09:02.569766 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.570083 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:09:02.570104 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.570413 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:09:02.570778 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:09:02.570975 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:09:02.571142 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:09:02.589503 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41085
	I0328 01:09:02.590061 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.590641 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.590661 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.591065 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.591310 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.593270 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:09:02.593665 1130827 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 01:09:02.593696 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 01:09:02.593717 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:09:02.596796 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.597270 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:09:02.597298 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.597460 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:09:02.597637 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:09:02.597807 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:09:02.597937 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
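Each "new ssh client" line above is built from the driver's SSH host, port, key path, and username, and later commands (systemctl, kubectl apply) run over that connection. A rough sketch of such a client using golang.org/x/crypto/ssh; the values mirror the sshutil.go line above, while the key handling and the skipped host-key check are simplifications for illustration, not minikube's implementation:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := "/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa"
	key, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", "192.168.61.107:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	// Run one of the commands the log shows being executed remotely.
	out, err := session.CombinedOutput("sudo systemctl is-active kubelet")
	fmt.Printf("err=%v output=%s\n", err, out)
}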
	I0328 01:09:02.705837 1130827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:09:02.727955 1130827 node_ready.go:35] waiting up to 6m0s for node "no-preload-248059" to be "Ready" ...
	I0328 01:09:02.737291 1130827 node_ready.go:49] node "no-preload-248059" has status "Ready":"True"
	I0328 01:09:02.737325 1130827 node_ready.go:38] duration metric: took 9.337953ms for node "no-preload-248059" to be "Ready" ...
	I0328 01:09:02.737338 1130827 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:09:02.741939 1130827 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.749157 1130827 pod_ready.go:92] pod "etcd-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.749192 1130827 pod_ready.go:81] duration metric: took 7.224004ms for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.749205 1130827 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.755106 1130827 pod_ready.go:92] pod "kube-apiserver-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.755132 1130827 pod_ready.go:81] duration metric: took 5.919446ms for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.755144 1130827 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.761123 1130827 pod_ready.go:92] pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.761171 1130827 pod_ready.go:81] duration metric: took 6.017877ms for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.761187 1130827 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.773958 1130827 pod_ready.go:92] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.773983 1130827 pod_ready.go:81] duration metric: took 12.787671ms for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.773991 1130827 pod_ready.go:38] duration metric: took 36.637128ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:09:02.774008 1130827 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:09:02.774068 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:09:02.794342 1130827 api_server.go:72] duration metric: took 287.989042ms to wait for apiserver process to appear ...
	I0328 01:09:02.794376 1130827 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:09:02.794408 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:09:02.826957 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0328 01:09:02.830377 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 01:09:02.830399 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 01:09:02.837250 1130827 api_server.go:141] control plane version: v1.30.0-beta.0
	I0328 01:09:02.837284 1130827 api_server.go:131] duration metric: took 42.898933ms to wait for apiserver health ...
	I0328 01:09:02.837295 1130827 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:09:02.838515 1130827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:09:02.865482 1130827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 01:09:02.880510 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 01:09:02.880544 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 01:09:02.933895 1130827 system_pods.go:59] 4 kube-system pods found
	I0328 01:09:02.933958 1130827 system_pods.go:61] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:02.933967 1130827 system_pods.go:61] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:02.933973 1130827 system_pods.go:61] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:02.933977 1130827 system_pods.go:61] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:02.933984 1130827 system_pods.go:74] duration metric: took 96.68223ms to wait for pod list to return data ...
	I0328 01:09:02.933994 1130827 default_sa.go:34] waiting for default service account to be created ...
	I0328 01:09:02.939507 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:09:02.939538 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 01:09:02.994042 1130827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:09:03.160934 1130827 default_sa.go:45] found service account: "default"
	I0328 01:09:03.160971 1130827 default_sa.go:55] duration metric: took 226.968222ms for default service account to be created ...
	I0328 01:09:03.160982 1130827 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 01:09:03.396511 1130827 system_pods.go:86] 7 kube-system pods found
	I0328 01:09:03.396549 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending
	I0328 01:09:03.396554 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending
	I0328 01:09:03.396558 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:03.396562 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:03.396567 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:03.396575 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:09:03.396580 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:03.396601 1130827 retry.go:31] will retry after 288.008379ms: missing components: kube-dns, kube-proxy
	I0328 01:09:03.697645 1130827 system_pods.go:86] 7 kube-system pods found
	I0328 01:09:03.697688 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:03.697697 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:03.697704 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:03.697710 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:03.697720 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:03.697726 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:09:03.697730 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:03.697750 1130827 retry.go:31] will retry after 356.016468ms: missing components: kube-dns, kube-proxy
	I0328 01:09:03.962535 1130827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.097008499s)
	I0328 01:09:03.962614 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.962633 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.963093 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.963119 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.963129 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.963139 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.963406 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.963424 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.964335 1130827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.125788348s)
	I0328 01:09:03.964375 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.964384 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.964712 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:03.964740 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.964763 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.964776 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.964785 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.965054 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.965125 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.965142 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:04.002303 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:04.002340 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:04.002744 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:04.002766 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:04.062017 1130827 system_pods.go:86] 8 kube-system pods found
	I0328 01:09:04.062096 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.062111 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.062121 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:04.062132 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:04.062158 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:04.062172 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:09:04.062180 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:04.062192 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:09:04.062220 1130827 retry.go:31] will retry after 477.684804ms: missing components: kube-dns, kube-proxy
	I0328 01:09:04.574661 1130827 system_pods.go:86] 9 kube-system pods found
	I0328 01:09:04.574716 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.574728 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.574740 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:04.574748 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:04.574754 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:04.574761 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Running
	I0328 01:09:04.574768 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:04.574778 1130827 system_pods.go:89] "metrics-server-569cc877fc-frc5k" [d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:09:04.574799 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:09:04.574821 1130827 retry.go:31] will retry after 460.13955ms: missing components: kube-dns
	I0328 01:09:04.692708 1130827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.69861394s)
	I0328 01:09:04.692782 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:04.692798 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:04.693323 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:04.693366 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:04.693376 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:04.693384 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:04.693320 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:04.693818 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:04.693865 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:04.693879 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:04.693895 1130827 addons.go:470] Verifying addon metrics-server=true in "no-preload-248059"
	I0328 01:09:04.696310 1130827 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0328 01:09:04.025791 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:09:04.026055 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:09:04.026065 1131323 kubeadm.go:309] 
	I0328 01:09:04.026124 1131323 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0328 01:09:04.026172 1131323 kubeadm.go:309] 		timed out waiting for the condition
	I0328 01:09:04.026181 1131323 kubeadm.go:309] 
	I0328 01:09:04.026221 1131323 kubeadm.go:309] 	This error is likely caused by:
	I0328 01:09:04.026279 1131323 kubeadm.go:309] 		- The kubelet is not running
	I0328 01:09:04.026401 1131323 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0328 01:09:04.026411 1131323 kubeadm.go:309] 
	I0328 01:09:04.026529 1131323 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0328 01:09:04.026586 1131323 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0328 01:09:04.026632 1131323 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0328 01:09:04.026640 1131323 kubeadm.go:309] 
	I0328 01:09:04.026758 1131323 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0328 01:09:04.026884 1131323 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0328 01:09:04.026902 1131323 kubeadm.go:309] 
	I0328 01:09:04.027061 1131323 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0328 01:09:04.027222 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0328 01:09:04.027335 1131323 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0328 01:09:04.027429 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0328 01:09:04.027537 1131323 kubeadm.go:309] 
	I0328 01:09:04.029027 1131323 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:09:04.029164 1131323 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0328 01:09:04.029284 1131323 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0328 01:09:04.029477 1131323 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0328 01:09:04.029545 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:09:04.543275 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:09:04.562572 1131323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:09:04.577013 1131323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:09:04.577040 1131323 kubeadm.go:156] found existing configuration files:
	
	I0328 01:09:04.577102 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:09:04.590795 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:09:04.590885 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:09:04.604227 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:09:04.616720 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:09:04.616818 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:09:04.630095 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:09:04.643166 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:09:04.643259 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:09:04.658084 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:09:04.671786 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:09:04.671874 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:09:04.685852 1131323 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:09:04.779013 1131323 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0328 01:09:04.779113 1131323 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:09:04.964178 1131323 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:09:04.964317 1131323 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:09:04.964463 1131323 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:09:05.181712 1131323 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:09:05.183644 1131323 out.go:204]   - Generating certificates and keys ...
	I0328 01:09:05.183759 1131323 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:09:05.183851 1131323 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:09:05.183962 1131323 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:09:05.184042 1131323 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:09:05.184156 1131323 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:09:05.184244 1131323 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:09:05.184337 1131323 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:09:05.184424 1131323 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:09:05.184535 1131323 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:09:05.184633 1131323 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:09:05.184683 1131323 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:09:05.184758 1131323 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:09:04.698039 1130827 addons.go:505] duration metric: took 2.191652421s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0328 01:09:05.044303 1130827 system_pods.go:86] 9 kube-system pods found
	I0328 01:09:05.044340 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:05.044348 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:05.044354 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:05.044360 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:05.044366 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:05.044369 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Running
	I0328 01:09:05.044373 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:05.044378 1130827 system_pods.go:89] "metrics-server-569cc877fc-frc5k" [d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:09:05.044387 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:09:05.044406 1130827 retry.go:31] will retry after 486.01075ms: missing components: kube-dns
	I0328 01:09:05.539158 1130827 system_pods.go:86] 9 kube-system pods found
	I0328 01:09:05.539204 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Running
	I0328 01:09:05.539213 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Running
	I0328 01:09:05.539219 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:05.539226 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:05.539232 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:05.539238 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Running
	I0328 01:09:05.539244 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:05.539255 1130827 system_pods.go:89] "metrics-server-569cc877fc-frc5k" [d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:09:05.539260 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Running
	I0328 01:09:05.539274 1130827 system_pods.go:126] duration metric: took 2.37828469s to wait for k8s-apps to be running ...
	I0328 01:09:05.539292 1130827 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:09:05.539362 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:09:05.560593 1130827 system_svc.go:56] duration metric: took 21.288819ms WaitForService to wait for kubelet
	I0328 01:09:05.560628 1130827 kubeadm.go:576] duration metric: took 3.054281955s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:09:05.560657 1130827 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:09:05.564453 1130827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:09:05.564489 1130827 node_conditions.go:123] node cpu capacity is 2
	I0328 01:09:05.564502 1130827 node_conditions.go:105] duration metric: took 3.837449ms to run NodePressure ...
	I0328 01:09:05.564517 1130827 start.go:240] waiting for startup goroutines ...
	I0328 01:09:05.564527 1130827 start.go:245] waiting for cluster config update ...
	I0328 01:09:05.564542 1130827 start.go:254] writing updated cluster config ...
	I0328 01:09:05.564843 1130827 ssh_runner.go:195] Run: rm -f paused
	I0328 01:09:05.623218 1130827 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-beta.0 (minor skew: 1)
	I0328 01:09:05.625408 1130827 out.go:177] * Done! kubectl is now configured to use "no-preload-248059" cluster and "default" namespace by default
	I0328 01:09:05.587190 1131323 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:09:05.923219 1131323 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:09:06.087945 1131323 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:09:06.245638 1131323 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:09:06.266195 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:09:06.267461 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:09:06.267551 1131323 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:09:06.434155 1131323 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:09:06.436300 1131323 out.go:204]   - Booting up control plane ...
	I0328 01:09:06.436447 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:09:06.446573 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:09:06.447461 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:09:06.448313 1131323 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:09:06.450917 1131323 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:09:46.453199 1131323 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0328 01:09:46.453386 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:09:46.453643 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:09:51.454402 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:09:51.454665 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:10:01.455189 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:10:01.455417 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:10:21.456491 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:10:21.456726 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:11:01.456972 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:11:01.457256 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:11:01.457269 1131323 kubeadm.go:309] 
	I0328 01:11:01.457310 1131323 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0328 01:11:01.457404 1131323 kubeadm.go:309] 		timed out waiting for the condition
	I0328 01:11:01.457441 1131323 kubeadm.go:309] 
	I0328 01:11:01.457492 1131323 kubeadm.go:309] 	This error is likely caused by:
	I0328 01:11:01.457550 1131323 kubeadm.go:309] 		- The kubelet is not running
	I0328 01:11:01.457696 1131323 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0328 01:11:01.457708 1131323 kubeadm.go:309] 
	I0328 01:11:01.457856 1131323 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0328 01:11:01.457906 1131323 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0328 01:11:01.457935 1131323 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0328 01:11:01.457943 1131323 kubeadm.go:309] 
	I0328 01:11:01.458033 1131323 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0328 01:11:01.458139 1131323 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0328 01:11:01.458155 1131323 kubeadm.go:309] 
	I0328 01:11:01.458331 1131323 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0328 01:11:01.458483 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0328 01:11:01.458594 1131323 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0328 01:11:01.458707 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0328 01:11:01.458718 1131323 kubeadm.go:309] 
	I0328 01:11:01.459597 1131323 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:11:01.459737 1131323 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0328 01:11:01.459822 1131323 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0328 01:11:01.459962 1131323 kubeadm.go:393] duration metric: took 7m59.227261729s to StartCluster
	I0328 01:11:01.460023 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:11:01.460167 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:11:01.522644 1131323 cri.go:89] found id: ""
	I0328 01:11:01.522687 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.522700 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:11:01.522710 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:11:01.522782 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:11:01.567898 1131323 cri.go:89] found id: ""
	I0328 01:11:01.567928 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.567937 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:11:01.567945 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:11:01.568005 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:11:01.604782 1131323 cri.go:89] found id: ""
	I0328 01:11:01.604810 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.604819 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:11:01.604825 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:11:01.604935 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:11:01.642875 1131323 cri.go:89] found id: ""
	I0328 01:11:01.642908 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.642920 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:11:01.642929 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:11:01.642993 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:11:01.682186 1131323 cri.go:89] found id: ""
	I0328 01:11:01.682216 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.682223 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:11:01.682241 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:11:01.682312 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:11:01.720654 1131323 cri.go:89] found id: ""
	I0328 01:11:01.720689 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.720697 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:11:01.720704 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:11:01.720759 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:11:01.757340 1131323 cri.go:89] found id: ""
	I0328 01:11:01.757372 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.757383 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:11:01.757392 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:11:01.757462 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:11:01.797426 1131323 cri.go:89] found id: ""
	I0328 01:11:01.797462 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.797473 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:11:01.797488 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:11:01.797506 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:11:01.859582 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:11:01.859623 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:11:01.876027 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:11:01.876073 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:11:01.966513 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:11:01.966539 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:11:01.966557 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:11:02.084853 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:11:02.084894 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0328 01:11:02.127221 1131323 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0328 01:11:02.127288 1131323 out.go:239] * 
	W0328 01:11:02.127417 1131323 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0328 01:11:02.127456 1131323 out.go:239] * 
	W0328 01:11:02.128313 1131323 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 01:11:02.131916 1131323 out.go:177] 
	W0328 01:11:02.133288 1131323 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0328 01:11:02.133351 1131323 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0328 01:11:02.133381 1131323 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0328 01:11:02.134991 1131323 out.go:177] 
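	The failed start above ends with minikube's own remediation hint (pass the systemd cgroup driver to the kubelet). A minimal sketch of what that retry and the follow-up inspection might look like, assuming the kvm2 driver and crio runtime used throughout this report; "<profile>" is a placeholder, not a profile name taken from this log:

		# Retry the start with the kubelet cgroup driver the suggestion points to
		minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
		  --kubernetes-version=v1.20.0 \
		  --extra-config=kubelet.cgroup-driver=systemd

		# If the kubelet still fails, inspect it on the node, as the kubeadm output recommends
		minikube ssh -p <profile> -- sudo systemctl status kubelet
		minikube ssh -p <profile> -- sudo journalctl -xeu kubelet
		minikube ssh -p <profile> -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"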
	
	
	==> CRI-O <==
	Mar 28 01:24:01 embed-certs-808809 crio[700]: time="2024-03-28 01:24:01.232696747Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d13be47cdfe2d55cfc468405cc79fffada20ae6e4ac957e9096e5f1a7cb8ed43,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-pgcdh,Uid:52452b24-490e-4999-b700-198c6f9b2fa1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711588075021193455,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-pgcdh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52452b24-490e-4999-b700-198c6f9b2fa1,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-28T01:07:52.907041512Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fad0a5eeca45eab29bdac781c0dc7c18488da0d3f13f62e2f5cd1835585a98b0,Metadata:&PodSandboxMetadata{Name:kube-proxy-tjbhs,Uid:cdb30ca1-5165-4e24-888a-df79af7987d0,Namespace:kube-system,At
tempt:0,},State:SANDBOX_READY,CreatedAt:1711588074826056782,Labels:map[string]string{controller-revision-hash: 7659797656,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-tjbhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb30ca1-5165-4e24-888a-df79af7987d0,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-28T01:07:52.716264325Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d8ec51bb0fc8c97f17b2d0ad68ec94a6b2408e9dd2fda55f2c1987a82b0ce31d,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-2rn6k,Uid:2a77c778-dd83-4e2e-b45a-ca16e3922b45,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711588074639563931,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-2rn6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a77c778-dd83-4e2e-b45a-ca16e3922b45,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:m
ap[string]string{kubernetes.io/config.seen: 2024-03-28T01:07:52.832558188Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e8a102066890c532311038e6c72e9556f225349f7841774a6b878e24bc779ca9,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-bqbfl,Uid:8434fd7d-838b-4cf2-96a3-e4d613633871,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711588074352797283,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-bqbfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8434fd7d-838b-4cf2-96a3-e4d613633871,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-28T01:07:54.035391793Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:44b0cdf3876f13c6e331f982b09d269acd6f6da9d1d02e4a173f8848430acff9,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:20c1951e-7da8-4025-bbcf-2da60f87f3ab,Namespace:kube-system,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1711588074346768287,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20c1951e-7da8-4025-bbcf-2da60f87f3ab,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/t
mp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-28T01:07:54.036342678Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8cb6101361b286ade5c77b27888d852fef11b2154f3493ef7c329a8a2497f761,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-808809,Uid:415bbf6af6af03844395934967f1d53e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711588053967485273,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415bbf6af6af03844395934967f1d53e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 415bbf6af6af03844395934967f1d53e,kubernetes.io/config.seen: 2024-03-28T01:07:33.511636569Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fa06dc12c22a31cdfde9b37c5f79193892430055c56ff9b94ada4e5d0a5060bb,Metadata:&PodSandboxMetadata{Name:kube-controlle
r-manager-embed-certs-808809,Uid:b27a7f528d676bb567a98dd9c93ba802,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711588053957310147,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27a7f528d676bb567a98dd9c93ba802,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b27a7f528d676bb567a98dd9c93ba802,kubernetes.io/config.seen: 2024-03-28T01:07:33.511631484Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dae6f2944cfe146fe51c15d22ea9ac3564c7566a59e12e80fb7f005ec86f6908,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-808809,Uid:82aa56ffa6fd4273e5fcfbb8ee4837e3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711588053952795223,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver
-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82aa56ffa6fd4273e5fcfbb8ee4837e3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.210:8443,kubernetes.io/config.hash: 82aa56ffa6fd4273e5fcfbb8ee4837e3,kubernetes.io/config.seen: 2024-03-28T01:07:33.511641011Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ee08dd080a76fdba4eeaa1378d59c389b012222d8d48ce7cf0a50226bbbc375e,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-808809,Uid:23726df11311a725c5c2cea5aa7bbf82,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711588053948496234,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23726df11311a725c5c2cea5aa7bbf82,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.7
2.210:2379,kubernetes.io/config.hash: 23726df11311a725c5c2cea5aa7bbf82,kubernetes.io/config.seen: 2024-03-28T01:07:33.511638119Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=c1ea3ccf-19ad-454e-9a9f-5f5a92c7283d name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 28 01:24:01 embed-certs-808809 crio[700]: time="2024-03-28 01:24:01.233979031Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=246ff0c5-b8b5-4664-9001-95f853753d96 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:24:01 embed-certs-808809 crio[700]: time="2024-03-28 01:24:01.234040315Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=246ff0c5-b8b5-4664-9001-95f853753d96 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:24:01 embed-certs-808809 crio[700]: time="2024-03-28 01:24:01.235311328Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f76a2cc1195c1d8095c2eaf14403049b3091569f9fce2b5c62a421df809f99d6,PodSandboxId:d13be47cdfe2d55cfc468405cc79fffada20ae6e4ac957e9096e5f1a7cb8ed43,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588075222203315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-pgcdh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52452b24-490e-4999-b700-198c6f9b2fa1,},Annotations:map[string]string{io.kubernetes.container.hash: b98b61cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0fd0f91f9b4fa1ab25e032a39edcd5bc44b739fe492a9a2637fe447630c6048,PodSandboxId:fad0a5eeca45eab29bdac781c0dc7c18488da0d3f13f62e2f5cd1835585a98b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711588074985489557,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tjbhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: cdb30ca1-5165-4e24-888a-df79af7987d0,},Annotations:map[string]string{io.kubernetes.container.hash: b2fb58c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77d15563a8b85fedf40c66723f296b06f62de317f6266c0b3d4af970cfb7e7fd,PodSandboxId:d8ec51bb0fc8c97f17b2d0ad68ec94a6b2408e9dd2fda55f2c1987a82b0ce31d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588074840305163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2rn6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a77c778-dd83-4e2e-b45
a-ca16e3922b45,},Annotations:map[string]string{io.kubernetes.container.hash: 9855ca01,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e8068bfab6214b30b06e6ea1061ef0a1fdb69e653672c29bb9d353b14e1fc56,PodSandboxId:44b0cdf3876f13c6e331f982b09d269acd6f6da9d1d02e4a173f8848430acff9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17115880744
96417762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20c1951e-7da8-4025-bbcf-2da60f87f3ab,},Annotations:map[string]string{io.kubernetes.container.hash: f91dd921,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05add19d22fda2570b52ec7b279575972ce3d4e3ed1b15e14a842c35667338fe,PodSandboxId:8cb6101361b286ade5c77b27888d852fef11b2154f3493ef7c329a8a2497f761,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711588054214339857,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415bbf6af6af03844395934967f1d53e,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f806518515daaea31de47549c3df138890f62d699f2eb1ad54abc661c5c095,PodSandboxId:fa06dc12c22a31cdfde9b37c5f79193892430055c56ff9b94ada4e5d0a5060bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711588054250077399,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27a7f528d676bb567a98dd9c93ba802,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afb39c84cbe45f2c1cd49e7ca7da5be67d0236bc61f3b4a4a8f3209867d57b0a,PodSandboxId:ee08dd080a76fdba4eeaa1378d59c389b012222d8d48ce7cf0a50226bbbc375e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711588054211596130,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23726df11311a725c5c2cea5aa7bbf82,},Annotations:map[string]string{io.kubernetes.container.hash: f47fb476,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1473b87f5ec0a698f975e02b643017deca11b4bf6f0d78f062b12764558bc159,PodSandboxId:dae6f2944cfe146fe51c15d22ea9ac3564c7566a59e12e80fb7f005ec86f6908,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711588054187453180,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82aa56ffa6fd4273e5fcfbb8ee4837e3,},Annotations:map[string]string{io.kubernetes.container.hash: ebc61d60,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=246ff0c5-b8b5-4664-9001-95f853753d96 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:24:01 embed-certs-808809 crio[700]: time="2024-03-28 01:24:01.243969346Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=40a39587-44aa-484a-9cb6-d2f5c1f89a3e name=/runtime.v1.RuntimeService/Version
	Mar 28 01:24:01 embed-certs-808809 crio[700]: time="2024-03-28 01:24:01.244025027Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=40a39587-44aa-484a-9cb6-d2f5c1f89a3e name=/runtime.v1.RuntimeService/Version
	Mar 28 01:24:01 embed-certs-808809 crio[700]: time="2024-03-28 01:24:01.245251770Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=53bdbb3b-6a4e-4001-bd9b-a8b2a5e3ae15 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:24:01 embed-certs-808809 crio[700]: time="2024-03-28 01:24:01.245667988Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711589041245646490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=53bdbb3b-6a4e-4001-bd9b-a8b2a5e3ae15 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:24:01 embed-certs-808809 crio[700]: time="2024-03-28 01:24:01.246485864Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=026594cf-1f50-4fda-b1e2-87ca21565480 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:24:01 embed-certs-808809 crio[700]: time="2024-03-28 01:24:01.246543537Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=026594cf-1f50-4fda-b1e2-87ca21565480 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:24:01 embed-certs-808809 crio[700]: time="2024-03-28 01:24:01.246716735Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f76a2cc1195c1d8095c2eaf14403049b3091569f9fce2b5c62a421df809f99d6,PodSandboxId:d13be47cdfe2d55cfc468405cc79fffada20ae6e4ac957e9096e5f1a7cb8ed43,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588075222203315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-pgcdh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52452b24-490e-4999-b700-198c6f9b2fa1,},Annotations:map[string]string{io.kubernetes.container.hash: b98b61cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0fd0f91f9b4fa1ab25e032a39edcd5bc44b739fe492a9a2637fe447630c6048,PodSandboxId:fad0a5eeca45eab29bdac781c0dc7c18488da0d3f13f62e2f5cd1835585a98b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711588074985489557,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tjbhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: cdb30ca1-5165-4e24-888a-df79af7987d0,},Annotations:map[string]string{io.kubernetes.container.hash: b2fb58c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77d15563a8b85fedf40c66723f296b06f62de317f6266c0b3d4af970cfb7e7fd,PodSandboxId:d8ec51bb0fc8c97f17b2d0ad68ec94a6b2408e9dd2fda55f2c1987a82b0ce31d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588074840305163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2rn6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a77c778-dd83-4e2e-b45
a-ca16e3922b45,},Annotations:map[string]string{io.kubernetes.container.hash: 9855ca01,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e8068bfab6214b30b06e6ea1061ef0a1fdb69e653672c29bb9d353b14e1fc56,PodSandboxId:44b0cdf3876f13c6e331f982b09d269acd6f6da9d1d02e4a173f8848430acff9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17115880744
96417762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20c1951e-7da8-4025-bbcf-2da60f87f3ab,},Annotations:map[string]string{io.kubernetes.container.hash: f91dd921,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05add19d22fda2570b52ec7b279575972ce3d4e3ed1b15e14a842c35667338fe,PodSandboxId:8cb6101361b286ade5c77b27888d852fef11b2154f3493ef7c329a8a2497f761,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711588054214339857,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415bbf6af6af03844395934967f1d53e,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f806518515daaea31de47549c3df138890f62d699f2eb1ad54abc661c5c095,PodSandboxId:fa06dc12c22a31cdfde9b37c5f79193892430055c56ff9b94ada4e5d0a5060bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711588054250077399,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27a7f528d676bb567a98dd9c93ba802,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afb39c84cbe45f2c1cd49e7ca7da5be67d0236bc61f3b4a4a8f3209867d57b0a,PodSandboxId:ee08dd080a76fdba4eeaa1378d59c389b012222d8d48ce7cf0a50226bbbc375e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711588054211596130,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23726df11311a725c5c2cea5aa7bbf82,},Annotations:map[string]string{io.kubernetes.container.hash: f47fb476,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1473b87f5ec0a698f975e02b643017deca11b4bf6f0d78f062b12764558bc159,PodSandboxId:dae6f2944cfe146fe51c15d22ea9ac3564c7566a59e12e80fb7f005ec86f6908,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711588054187453180,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82aa56ffa6fd4273e5fcfbb8ee4837e3,},Annotations:map[string]string{io.kubernetes.container.hash: ebc61d60,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=026594cf-1f50-4fda-b1e2-87ca21565480 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:24:01 embed-certs-808809 crio[700]: time="2024-03-28 01:24:01.286729792Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3865ef47-cd2a-4d52-a191-cce715992854 name=/runtime.v1.RuntimeService/Version
	Mar 28 01:24:01 embed-certs-808809 crio[700]: time="2024-03-28 01:24:01.286826510Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3865ef47-cd2a-4d52-a191-cce715992854 name=/runtime.v1.RuntimeService/Version
	Mar 28 01:24:01 embed-certs-808809 crio[700]: time="2024-03-28 01:24:01.288097675Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1c6010f5-b06d-49d3-bf2a-1c8750fcc581 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:24:01 embed-certs-808809 crio[700]: time="2024-03-28 01:24:01.288516093Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711589041288488478,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1c6010f5-b06d-49d3-bf2a-1c8750fcc581 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:24:01 embed-certs-808809 crio[700]: time="2024-03-28 01:24:01.289324473Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=65d57813-5c07-41c9-bd9a-781452985d2f name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:24:01 embed-certs-808809 crio[700]: time="2024-03-28 01:24:01.289377476Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=65d57813-5c07-41c9-bd9a-781452985d2f name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:24:01 embed-certs-808809 crio[700]: time="2024-03-28 01:24:01.289564732Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f76a2cc1195c1d8095c2eaf14403049b3091569f9fce2b5c62a421df809f99d6,PodSandboxId:d13be47cdfe2d55cfc468405cc79fffada20ae6e4ac957e9096e5f1a7cb8ed43,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588075222203315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-pgcdh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52452b24-490e-4999-b700-198c6f9b2fa1,},Annotations:map[string]string{io.kubernetes.container.hash: b98b61cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0fd0f91f9b4fa1ab25e032a39edcd5bc44b739fe492a9a2637fe447630c6048,PodSandboxId:fad0a5eeca45eab29bdac781c0dc7c18488da0d3f13f62e2f5cd1835585a98b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711588074985489557,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tjbhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: cdb30ca1-5165-4e24-888a-df79af7987d0,},Annotations:map[string]string{io.kubernetes.container.hash: b2fb58c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77d15563a8b85fedf40c66723f296b06f62de317f6266c0b3d4af970cfb7e7fd,PodSandboxId:d8ec51bb0fc8c97f17b2d0ad68ec94a6b2408e9dd2fda55f2c1987a82b0ce31d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588074840305163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2rn6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a77c778-dd83-4e2e-b45
a-ca16e3922b45,},Annotations:map[string]string{io.kubernetes.container.hash: 9855ca01,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e8068bfab6214b30b06e6ea1061ef0a1fdb69e653672c29bb9d353b14e1fc56,PodSandboxId:44b0cdf3876f13c6e331f982b09d269acd6f6da9d1d02e4a173f8848430acff9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17115880744
96417762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20c1951e-7da8-4025-bbcf-2da60f87f3ab,},Annotations:map[string]string{io.kubernetes.container.hash: f91dd921,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05add19d22fda2570b52ec7b279575972ce3d4e3ed1b15e14a842c35667338fe,PodSandboxId:8cb6101361b286ade5c77b27888d852fef11b2154f3493ef7c329a8a2497f761,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711588054214339857,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415bbf6af6af03844395934967f1d53e,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f806518515daaea31de47549c3df138890f62d699f2eb1ad54abc661c5c095,PodSandboxId:fa06dc12c22a31cdfde9b37c5f79193892430055c56ff9b94ada4e5d0a5060bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711588054250077399,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27a7f528d676bb567a98dd9c93ba802,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afb39c84cbe45f2c1cd49e7ca7da5be67d0236bc61f3b4a4a8f3209867d57b0a,PodSandboxId:ee08dd080a76fdba4eeaa1378d59c389b012222d8d48ce7cf0a50226bbbc375e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711588054211596130,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23726df11311a725c5c2cea5aa7bbf82,},Annotations:map[string]string{io.kubernetes.container.hash: f47fb476,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1473b87f5ec0a698f975e02b643017deca11b4bf6f0d78f062b12764558bc159,PodSandboxId:dae6f2944cfe146fe51c15d22ea9ac3564c7566a59e12e80fb7f005ec86f6908,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711588054187453180,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82aa56ffa6fd4273e5fcfbb8ee4837e3,},Annotations:map[string]string{io.kubernetes.container.hash: ebc61d60,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=65d57813-5c07-41c9-bd9a-781452985d2f name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:24:01 embed-certs-808809 crio[700]: time="2024-03-28 01:24:01.328616834Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=177d6543-248b-4609-a082-3f7a1b969435 name=/runtime.v1.RuntimeService/Version
	Mar 28 01:24:01 embed-certs-808809 crio[700]: time="2024-03-28 01:24:01.328719938Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=177d6543-248b-4609-a082-3f7a1b969435 name=/runtime.v1.RuntimeService/Version
	Mar 28 01:24:01 embed-certs-808809 crio[700]: time="2024-03-28 01:24:01.330604465Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a26780bd-0a69-408b-88f4-0a97b7c8040a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:24:01 embed-certs-808809 crio[700]: time="2024-03-28 01:24:01.331212484Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711589041331187679,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a26780bd-0a69-408b-88f4-0a97b7c8040a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:24:01 embed-certs-808809 crio[700]: time="2024-03-28 01:24:01.331962571Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82e618fd-3dd0-4bea-8b6d-b597ea9ed01e name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:24:01 embed-certs-808809 crio[700]: time="2024-03-28 01:24:01.332020137Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82e618fd-3dd0-4bea-8b6d-b597ea9ed01e name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:24:01 embed-certs-808809 crio[700]: time="2024-03-28 01:24:01.332203950Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f76a2cc1195c1d8095c2eaf14403049b3091569f9fce2b5c62a421df809f99d6,PodSandboxId:d13be47cdfe2d55cfc468405cc79fffada20ae6e4ac957e9096e5f1a7cb8ed43,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588075222203315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-pgcdh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52452b24-490e-4999-b700-198c6f9b2fa1,},Annotations:map[string]string{io.kubernetes.container.hash: b98b61cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0fd0f91f9b4fa1ab25e032a39edcd5bc44b739fe492a9a2637fe447630c6048,PodSandboxId:fad0a5eeca45eab29bdac781c0dc7c18488da0d3f13f62e2f5cd1835585a98b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711588074985489557,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tjbhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: cdb30ca1-5165-4e24-888a-df79af7987d0,},Annotations:map[string]string{io.kubernetes.container.hash: b2fb58c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77d15563a8b85fedf40c66723f296b06f62de317f6266c0b3d4af970cfb7e7fd,PodSandboxId:d8ec51bb0fc8c97f17b2d0ad68ec94a6b2408e9dd2fda55f2c1987a82b0ce31d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588074840305163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2rn6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a77c778-dd83-4e2e-b45
a-ca16e3922b45,},Annotations:map[string]string{io.kubernetes.container.hash: 9855ca01,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e8068bfab6214b30b06e6ea1061ef0a1fdb69e653672c29bb9d353b14e1fc56,PodSandboxId:44b0cdf3876f13c6e331f982b09d269acd6f6da9d1d02e4a173f8848430acff9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17115880744
96417762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20c1951e-7da8-4025-bbcf-2da60f87f3ab,},Annotations:map[string]string{io.kubernetes.container.hash: f91dd921,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05add19d22fda2570b52ec7b279575972ce3d4e3ed1b15e14a842c35667338fe,PodSandboxId:8cb6101361b286ade5c77b27888d852fef11b2154f3493ef7c329a8a2497f761,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711588054214339857,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415bbf6af6af03844395934967f1d53e,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f806518515daaea31de47549c3df138890f62d699f2eb1ad54abc661c5c095,PodSandboxId:fa06dc12c22a31cdfde9b37c5f79193892430055c56ff9b94ada4e5d0a5060bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711588054250077399,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27a7f528d676bb567a98dd9c93ba802,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afb39c84cbe45f2c1cd49e7ca7da5be67d0236bc61f3b4a4a8f3209867d57b0a,PodSandboxId:ee08dd080a76fdba4eeaa1378d59c389b012222d8d48ce7cf0a50226bbbc375e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711588054211596130,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23726df11311a725c5c2cea5aa7bbf82,},Annotations:map[string]string{io.kubernetes.container.hash: f47fb476,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1473b87f5ec0a698f975e02b643017deca11b4bf6f0d78f062b12764558bc159,PodSandboxId:dae6f2944cfe146fe51c15d22ea9ac3564c7566a59e12e80fb7f005ec86f6908,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711588054187453180,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-808809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82aa56ffa6fd4273e5fcfbb8ee4837e3,},Annotations:map[string]string{io.kubernetes.container.hash: ebc61d60,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=82e618fd-3dd0-4bea-8b6d-b597ea9ed01e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f76a2cc1195c1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   d13be47cdfe2d       coredns-76f75df574-pgcdh
	d0fd0f91f9b4f       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   16 minutes ago      Running             kube-proxy                0                   fad0a5eeca45e       kube-proxy-tjbhs
	77d15563a8b85       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   d8ec51bb0fc8c       coredns-76f75df574-2rn6k
	2e8068bfab621       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   44b0cdf3876f1       storage-provisioner
	e7f806518515d       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   16 minutes ago      Running             kube-controller-manager   2                   fa06dc12c22a3       kube-controller-manager-embed-certs-808809
	05add19d22fda       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   16 minutes ago      Running             kube-scheduler            2                   8cb6101361b28       kube-scheduler-embed-certs-808809
	afb39c84cbe45       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   16 minutes ago      Running             etcd                      2                   ee08dd080a76f       etcd-embed-certs-808809
	1473b87f5ec0a       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   16 minutes ago      Running             kube-apiserver            2                   dae6f2944cfe1       kube-apiserver-embed-certs-808809
	
	
	==> coredns [77d15563a8b85fedf40c66723f296b06f62de317f6266c0b3d4af970cfb7e7fd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f76a2cc1195c1d8095c2eaf14403049b3091569f9fce2b5c62a421df809f99d6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-808809
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-808809
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=embed-certs-808809
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_28T01_07_40_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 01:07:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-808809
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 01:24:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 01:23:19 +0000   Thu, 28 Mar 2024 01:07:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 01:23:19 +0000   Thu, 28 Mar 2024 01:07:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 01:23:19 +0000   Thu, 28 Mar 2024 01:07:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 01:23:19 +0000   Thu, 28 Mar 2024 01:07:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.210
	  Hostname:    embed-certs-808809
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e130d39af5334fbc87366b845f05a2e1
	  System UUID:                e130d39a-f533-4fbc-8736-6b845f05a2e1
	  Boot ID:                    f85ced42-5373-45cf-9a97-c85fe4592bc2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-2rn6k                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-76f75df574-pgcdh                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-embed-certs-808809                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-808809             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-808809    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-tjbhs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-808809             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-57f55c9bc5-bqbfl               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node embed-certs-808809 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node embed-certs-808809 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node embed-certs-808809 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             16m   kubelet          Node embed-certs-808809 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                16m   kubelet          Node embed-certs-808809 status is now: NodeReady
	  Normal  RegisteredNode           16m   node-controller  Node embed-certs-808809 event: Registered Node embed-certs-808809 in Controller
	
	
	==> dmesg <==
	[  +0.041287] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.556473] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.837413] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.657032] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.617437] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.067060] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059267] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.174312] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.167803] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.331014] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[  +4.752980] systemd-fstab-generator[781]: Ignoring "noauto" option for root device
	[  +0.064843] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.699936] systemd-fstab-generator[903]: Ignoring "noauto" option for root device
	[  +5.673497] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.777167] kauditd_printk_skb: 74 callbacks suppressed
	[Mar28 01:07] kauditd_printk_skb: 2 callbacks suppressed
	[ +26.779774] kauditd_printk_skb: 7 callbacks suppressed
	[  +2.375091] systemd-fstab-generator[3419]: Ignoring "noauto" option for root device
	[  +4.660659] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.642342] systemd-fstab-generator[3740]: Ignoring "noauto" option for root device
	[ +12.937190] systemd-fstab-generator[3942]: Ignoring "noauto" option for root device
	[  +0.082277] kauditd_printk_skb: 14 callbacks suppressed
	[Mar28 01:08] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [afb39c84cbe45f2c1cd49e7ca7da5be67d0236bc61f3b4a4a8f3209867d57b0a] <==
	{"level":"info","ts":"2024-03-28T01:07:34.846394Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-28T01:07:34.87292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"60056697e173d7ad is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-28T01:07:34.872984Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"60056697e173d7ad became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-28T01:07:34.873019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"60056697e173d7ad received MsgPreVoteResp from 60056697e173d7ad at term 1"}
	{"level":"info","ts":"2024-03-28T01:07:34.873031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"60056697e173d7ad became candidate at term 2"}
	{"level":"info","ts":"2024-03-28T01:07:34.873037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"60056697e173d7ad received MsgVoteResp from 60056697e173d7ad at term 2"}
	{"level":"info","ts":"2024-03-28T01:07:34.873044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"60056697e173d7ad became leader at term 2"}
	{"level":"info","ts":"2024-03-28T01:07:34.873052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 60056697e173d7ad elected leader 60056697e173d7ad at term 2"}
	{"level":"info","ts":"2024-03-28T01:07:34.877949Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T01:07:34.882145Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"60056697e173d7ad","local-member-attributes":"{Name:embed-certs-808809 ClientURLs:[https://192.168.72.210:2379]}","request-path":"/0/members/60056697e173d7ad/attributes","cluster-id":"3e0f7cc7df3e38c1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-28T01:07:34.883208Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T01:07:34.883224Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T01:07:34.884013Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-28T01:07:34.890016Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-28T01:07:34.886441Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-28T01:07:34.886496Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3e0f7cc7df3e38c1","local-member-id":"60056697e173d7ad","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T01:07:34.890314Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T01:07:34.890363Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T01:07:34.898242Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.210:2379"}
	{"level":"info","ts":"2024-03-28T01:17:34.96409Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":678}
	{"level":"info","ts":"2024-03-28T01:17:34.974924Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":678,"took":"10.46395ms","hash":760840666,"current-db-size-bytes":2367488,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2367488,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-03-28T01:17:34.974994Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":760840666,"revision":678,"compact-revision":-1}
	{"level":"info","ts":"2024-03-28T01:22:34.973606Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":921}
	{"level":"info","ts":"2024-03-28T01:22:34.97755Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":921,"took":"3.426155ms","hash":2634092537,"current-db-size-bytes":2367488,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1613824,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-03-28T01:22:34.977583Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2634092537,"revision":921,"compact-revision":678}
	
	
	==> kernel <==
	 01:24:01 up 21 min,  0 users,  load average: 0.23, 0.18, 0.12
	Linux embed-certs-808809 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1473b87f5ec0a698f975e02b643017deca11b4bf6f0d78f062b12764558bc159] <==
	I0328 01:18:37.818090       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:20:37.817178       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:20:37.817627       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0328 01:20:37.817662       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:20:37.818399       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:20:37.818521       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0328 01:20:37.819786       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:22:36.825281       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:22:36.825635       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0328 01:22:37.825980       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:22:37.826035       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0328 01:22:37.826044       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:22:37.826093       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:22:37.826174       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0328 01:22:37.827329       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:23:37.826759       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:23:37.826832       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0328 01:23:37.826901       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:23:37.828109       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:23:37.828189       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0328 01:23:37.828216       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [e7f806518515daaea31de47549c3df138890f62d699f2eb1ad54abc661c5c095] <==
	E0328 01:18:52.783829       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:18:53.390067       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0328 01:19:03.522300       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="76µs"
	E0328 01:19:22.789531       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:19:23.399520       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:19:52.797303       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:19:53.409415       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:20:22.804665       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:20:23.420010       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:20:52.811298       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:20:53.430090       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:21:22.817383       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:21:23.440085       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:21:52.822787       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:21:53.450767       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:22:22.834611       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:22:23.459246       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:22:52.840818       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:22:53.471315       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:23:22.846274       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:23:23.479661       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0328 01:23:50.525524       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="357.467µs"
	E0328 01:23:52.851586       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:23:53.488763       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0328 01:24:01.520222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="249.062µs"
	
	
	==> kube-proxy [d0fd0f91f9b4fa1ab25e032a39edcd5bc44b739fe492a9a2637fe447630c6048] <==
	I0328 01:07:55.207212       1 server_others.go:72] "Using iptables proxy"
	I0328 01:07:55.234374       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.72.210"]
	I0328 01:07:55.332271       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 01:07:55.332299       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 01:07:55.332321       1 server_others.go:168] "Using iptables Proxier"
	I0328 01:07:55.337407       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 01:07:55.337779       1 server.go:865] "Version info" version="v1.29.3"
	I0328 01:07:55.338171       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:07:55.345652       1 config.go:188] "Starting service config controller"
	I0328 01:07:55.345785       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 01:07:55.346017       1 config.go:97] "Starting endpoint slice config controller"
	I0328 01:07:55.346070       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 01:07:55.348918       1 config.go:315] "Starting node config controller"
	I0328 01:07:55.348973       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 01:07:55.446612       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0328 01:07:55.446707       1 shared_informer.go:318] Caches are synced for service config
	I0328 01:07:55.449472       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [05add19d22fda2570b52ec7b279575972ce3d4e3ed1b15e14a842c35667338fe] <==
	W0328 01:07:36.833624       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0328 01:07:36.833661       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0328 01:07:36.836022       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0328 01:07:36.837284       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0328 01:07:36.839470       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0328 01:07:36.839517       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0328 01:07:37.718545       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0328 01:07:37.718735       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0328 01:07:37.807926       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0328 01:07:37.808017       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0328 01:07:37.846440       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0328 01:07:37.846548       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0328 01:07:37.891574       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0328 01:07:37.891773       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0328 01:07:37.915284       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0328 01:07:37.915394       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0328 01:07:37.949092       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0328 01:07:37.949785       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0328 01:07:38.048365       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0328 01:07:38.048702       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0328 01:07:38.094645       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0328 01:07:38.095089       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0328 01:07:38.197789       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0328 01:07:38.198075       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:07:40.908626       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 28 01:21:40 embed-certs-808809 kubelet[3747]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 01:21:49 embed-certs-808809 kubelet[3747]: E0328 01:21:49.505087    3747 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bqbfl" podUID="8434fd7d-838b-4cf2-96a3-e4d613633871"
	Mar 28 01:22:00 embed-certs-808809 kubelet[3747]: E0328 01:22:00.505689    3747 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bqbfl" podUID="8434fd7d-838b-4cf2-96a3-e4d613633871"
	Mar 28 01:22:15 embed-certs-808809 kubelet[3747]: E0328 01:22:15.504993    3747 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bqbfl" podUID="8434fd7d-838b-4cf2-96a3-e4d613633871"
	Mar 28 01:22:30 embed-certs-808809 kubelet[3747]: E0328 01:22:30.505623    3747 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bqbfl" podUID="8434fd7d-838b-4cf2-96a3-e4d613633871"
	Mar 28 01:22:40 embed-certs-808809 kubelet[3747]: E0328 01:22:40.560880    3747 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 01:22:40 embed-certs-808809 kubelet[3747]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 01:22:40 embed-certs-808809 kubelet[3747]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 01:22:40 embed-certs-808809 kubelet[3747]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 01:22:40 embed-certs-808809 kubelet[3747]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 01:22:45 embed-certs-808809 kubelet[3747]: E0328 01:22:45.506227    3747 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bqbfl" podUID="8434fd7d-838b-4cf2-96a3-e4d613633871"
	Mar 28 01:22:57 embed-certs-808809 kubelet[3747]: E0328 01:22:57.505775    3747 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bqbfl" podUID="8434fd7d-838b-4cf2-96a3-e4d613633871"
	Mar 28 01:23:11 embed-certs-808809 kubelet[3747]: E0328 01:23:11.505888    3747 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bqbfl" podUID="8434fd7d-838b-4cf2-96a3-e4d613633871"
	Mar 28 01:23:24 embed-certs-808809 kubelet[3747]: E0328 01:23:24.507112    3747 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bqbfl" podUID="8434fd7d-838b-4cf2-96a3-e4d613633871"
	Mar 28 01:23:38 embed-certs-808809 kubelet[3747]: E0328 01:23:38.530308    3747 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 28 01:23:38 embed-certs-808809 kubelet[3747]: E0328 01:23:38.530394    3747 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 28 01:23:38 embed-certs-808809 kubelet[3747]: E0328 01:23:38.530733    3747 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4rgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pr
obeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-bqbfl_kube-system(8434fd7d-838b-4cf2-96a3-e4d613633871): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Mar 28 01:23:38 embed-certs-808809 kubelet[3747]: E0328 01:23:38.530787    3747 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-bqbfl" podUID="8434fd7d-838b-4cf2-96a3-e4d613633871"
	Mar 28 01:23:40 embed-certs-808809 kubelet[3747]: E0328 01:23:40.560298    3747 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 01:23:40 embed-certs-808809 kubelet[3747]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 01:23:40 embed-certs-808809 kubelet[3747]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 01:23:40 embed-certs-808809 kubelet[3747]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 01:23:40 embed-certs-808809 kubelet[3747]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 01:23:50 embed-certs-808809 kubelet[3747]: E0328 01:23:50.505701    3747 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bqbfl" podUID="8434fd7d-838b-4cf2-96a3-e4d613633871"
	Mar 28 01:24:01 embed-certs-808809 kubelet[3747]: E0328 01:24:01.507018    3747 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bqbfl" podUID="8434fd7d-838b-4cf2-96a3-e4d613633871"
	
	
	==> storage-provisioner [2e8068bfab6214b30b06e6ea1061ef0a1fdb69e653672c29bb9d353b14e1fc56] <==
	I0328 01:07:54.678794       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0328 01:07:54.697804       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0328 01:07:54.698080       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0328 01:07:54.712597       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0328 01:07:54.717570       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"17a20dd2-996d-46f6-a17d-3df61e572ba7", APIVersion:"v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-808809_12b76f71-59a0-4979-a1f1-79806ef62186 became leader
	I0328 01:07:54.728148       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-808809_12b76f71-59a0-4979-a1f1-79806ef62186!
	I0328 01:07:54.833978       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-808809_12b76f71-59a0-4979-a1f1-79806ef62186!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-808809 -n embed-certs-808809
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-808809 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-bqbfl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-808809 describe pod metrics-server-57f55c9bc5-bqbfl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-808809 describe pod metrics-server-57f55c9bc5-bqbfl: exit status 1 (64.736195ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-bqbfl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-808809 describe pod metrics-server-57f55c9bc5-bqbfl: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (421.57s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (520s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-283961 -n default-k8s-diff-port-283961
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-28 01:26:22.053192231 +0000 UTC m=+6813.304671162
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-283961 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-283961 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.038µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-283961 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-283961 -n default-k8s-diff-port-283961
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-283961 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-283961 logs -n 25: (2.031354249s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| addons  | enable metrics-server -p newest-cni-013642             | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p newest-cni-013642                  | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p newest-cni-013642 --memory=2200 --alsologtostderr   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:56 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |                |                     |                     |
	| image   | newest-cni-013642 image list                           | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	|         | --format=json                                          |                              |         |                |                     |                     |
	| pause   | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| unpause | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| delete  | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	| delete  | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	| start   | -p                                                     | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:57 UTC |
	|         | default-k8s-diff-port-283961                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-986088        | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-248059                  | no-preload-248059            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-283961  | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC | 28 Mar 24 00:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | default-k8s-diff-port-283961                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| start   | -p no-preload-248059 --memory=2200                     | no-preload-248059            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC | 28 Mar 24 01:09 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-808809                 | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-808809                                  | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC | 28 Mar 24 01:07 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-986088                              | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC | 28 Mar 24 00:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-986088             | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC | 28 Mar 24 00:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-986088                              | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-283961       | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 01:00 UTC | 28 Mar 24 01:08 UTC |
	|         | default-k8s-diff-port-283961                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| delete  | -p old-k8s-version-986088                              | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 01:22 UTC | 28 Mar 24 01:22 UTC |
	| delete  | -p no-preload-248059                                   | no-preload-248059            | jenkins | v1.33.0-beta.0 | 28 Mar 24 01:22 UTC | 28 Mar 24 01:22 UTC |
	| delete  | -p embed-certs-808809                                  | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 01:24 UTC | 28 Mar 24 01:24 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 01:00:05
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 01:00:05.675380 1131600 out.go:291] Setting OutFile to fd 1 ...
	I0328 01:00:05.675675 1131600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 01:00:05.675710 1131600 out.go:304] Setting ErrFile to fd 2...
	I0328 01:00:05.675718 1131600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 01:00:05.676017 1131600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 01:00:05.676919 1131600 out.go:298] Setting JSON to false
	I0328 01:00:05.678046 1131600 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":31303,"bootTime":1711556303,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0328 01:00:05.678129 1131600 start.go:139] virtualization: kvm guest
	I0328 01:00:05.681128 1131600 out.go:177] * [default-k8s-diff-port-283961] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0328 01:00:05.683139 1131600 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 01:00:05.683129 1131600 notify.go:220] Checking for updates...
	I0328 01:00:05.685082 1131600 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 01:00:05.686765 1131600 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:00:05.688389 1131600 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	I0328 01:00:05.690187 1131600 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0328 01:00:05.691887 1131600 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 01:00:05.693775 1131600 config.go:182] Loaded profile config "default-k8s-diff-port-283961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:00:05.694270 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:00:05.694323 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:00:05.709757 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35333
	I0328 01:00:05.710275 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:00:05.710875 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:00:05.710900 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:00:05.711323 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:00:05.711531 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:00:05.711893 1131600 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 01:00:05.712342 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:00:05.712392 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:00:05.727583 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44249
	I0328 01:00:05.728107 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:00:05.728595 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:00:05.728625 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:00:05.728945 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:00:05.729170 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:00:05.763895 1131600 out.go:177] * Using the kvm2 driver based on existing profile
	I0328 01:00:05.765397 1131600 start.go:297] selected driver: kvm2
	I0328 01:00:05.765431 1131600 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:00:05.765564 1131600 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 01:00:05.766282 1131600 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 01:00:05.766391 1131600 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18485-1069254/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0328 01:00:05.783130 1131600 install.go:137] /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0328 01:00:05.783602 1131600 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:00:05.783724 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:00:05.783745 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:00:05.783795 1131600 start.go:340] cluster config:
	{Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:00:05.783949 1131600 iso.go:125] acquiring lock: {Name:mk3da1fa7d63a581c817f327d86a827697458cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 01:00:05.785871 1131600 out.go:177] * Starting "default-k8s-diff-port-283961" primary control-plane node in "default-k8s-diff-port-283961" cluster
	I0328 01:00:02.570474 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:05.787210 1131600 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 01:00:05.787259 1131600 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0328 01:00:05.787272 1131600 cache.go:56] Caching tarball of preloaded images
	I0328 01:00:05.787364 1131600 preload.go:173] Found /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0328 01:00:05.787376 1131600 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0328 01:00:05.787509 1131600 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/config.json ...
	I0328 01:00:05.787742 1131600 start.go:360] acquireMachinesLock for default-k8s-diff-port-283961: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
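The acquireMachinesLock line above shows a named lock taken with a 500ms poll delay and a 13m timeout before the machine is touched. The following is a minimal, illustrative sketch of that pattern (poll a lock with a delay until a deadline); the lock-file path and the tryLock helper are hypothetical, not minikube's implementation.

```go
// Sketch of acquiring a named machines lock with a poll delay and an overall
// timeout, as suggested by "acquireMachinesLock ... Delay:500ms Timeout:13m0s".
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// tryLock creates a lock file exclusively; it fails if the file already exists.
func tryLock(path string) (func(), error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
	if err != nil {
		return nil, err
	}
	f.Close()
	return func() { os.Remove(path) }, nil
}

// acquireWithTimeout polls tryLock every delay until the timeout expires.
func acquireWithTimeout(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		if release, err := tryLock(path); err == nil {
			return release, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out waiting for machines lock")
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireWithTimeout("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		fmt.Println("lock error:", err)
		return
	}
	defer release()
	fmt.Println("holding machines lock; safe to provision")
}
```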
	I0328 01:00:08.650481 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:11.722571 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:17.802536 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:20.874568 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:26.954473 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:30.026674 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:36.106489 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:39.178555 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:45.258539 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:48.330581 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:54.410577 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:57.482545 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:03.562558 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:06.634602 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:12.714559 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:15.786597 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:21.866544 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:24.938619 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:31.018631 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:34.090562 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:40.170864 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:43.242565 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:49.322492 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:52.394572 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:58.474562 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:01.546621 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:07.626510 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:10.698534 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:13.703348 1130949 start.go:364] duration metric: took 4m25.677777198s to acquireMachinesLock for "embed-certs-808809"
	I0328 01:02:13.703416 1130949 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:02:13.703429 1130949 fix.go:54] fixHost starting: 
	I0328 01:02:13.703888 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:02:13.703923 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:02:13.719480 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44835
	I0328 01:02:13.719968 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:02:13.720450 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:02:13.720475 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:02:13.720774 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:02:13.721011 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:13.721182 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:02:13.722796 1130949 fix.go:112] recreateIfNeeded on embed-certs-808809: state=Stopped err=<nil>
	I0328 01:02:13.722828 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	W0328 01:02:13.722972 1130949 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:02:13.724895 1130949 out.go:177] * Restarting existing kvm2 VM for "embed-certs-808809" ...
	I0328 01:02:13.700647 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:02:13.700689 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:02:13.701054 1130827 buildroot.go:166] provisioning hostname "no-preload-248059"
	I0328 01:02:13.701085 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:02:13.701344 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:02:13.703200 1130827 machine.go:97] duration metric: took 4m37.399616994s to provisionDockerMachine
	I0328 01:02:13.703243 1130827 fix.go:56] duration metric: took 4m37.42352766s for fixHost
	I0328 01:02:13.703249 1130827 start.go:83] releasing machines lock for "no-preload-248059", held for 4m37.423563163s
	W0328 01:02:13.703274 1130827 start.go:713] error starting host: provision: host is not running
	W0328 01:02:13.703400 1130827 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0328 01:02:13.703411 1130827 start.go:728] Will try again in 5 seconds ...
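The warning and "Will try again in 5 seconds" lines above show the start path giving up on a host that never came back ("provision: host is not running") and scheduling a retry. A small sketch of that retry wrapper follows; startHost and its error are placeholders, not minikube's actual API.

```go
// Sketch of "StartHost failed, but will try again": retry a start function a
// bounded number of times with a fixed pause between attempts.
package main

import (
	"errors"
	"fmt"
	"time"
)

func startHost(name string) error {
	// Placeholder: pretend provisioning fails because the host is not running.
	return errors.New("provision: host is not running")
}

func startWithRetry(name string, attempts int, pause time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = startHost(name); err == nil {
			return nil
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(pause)
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	if err := startWithRetry("no-preload-248059", 2, 5*time.Second); err != nil {
		fmt.Println(err)
	}
}
```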
	I0328 01:02:13.726437 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Start
	I0328 01:02:13.726574 1130949 main.go:141] libmachine: (embed-certs-808809) Ensuring networks are active...
	I0328 01:02:13.727407 1130949 main.go:141] libmachine: (embed-certs-808809) Ensuring network default is active
	I0328 01:02:13.727667 1130949 main.go:141] libmachine: (embed-certs-808809) Ensuring network mk-embed-certs-808809 is active
	I0328 01:02:13.728050 1130949 main.go:141] libmachine: (embed-certs-808809) Getting domain xml...
	I0328 01:02:13.728836 1130949 main.go:141] libmachine: (embed-certs-808809) Creating domain...
	I0328 01:02:14.931757 1130949 main.go:141] libmachine: (embed-certs-808809) Waiting to get IP...
	I0328 01:02:14.932921 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:14.933298 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:14.933396 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:14.933294 1131950 retry.go:31] will retry after 279.257708ms: waiting for machine to come up
	I0328 01:02:15.213830 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:15.214439 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:15.214472 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:15.214415 1131950 retry.go:31] will retry after 387.406107ms: waiting for machine to come up
	I0328 01:02:15.603078 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:15.603464 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:15.603497 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:15.603431 1131950 retry.go:31] will retry after 466.553599ms: waiting for machine to come up
	I0328 01:02:16.072165 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:16.072702 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:16.072732 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:16.072643 1131950 retry.go:31] will retry after 375.428381ms: waiting for machine to come up
	I0328 01:02:16.449155 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:16.449614 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:16.449652 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:16.449553 1131950 retry.go:31] will retry after 466.238903ms: waiting for machine to come up
	I0328 01:02:16.917246 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:16.917697 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:16.917723 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:16.917633 1131950 retry.go:31] will retry after 772.819544ms: waiting for machine to come up
	I0328 01:02:17.691645 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:17.692121 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:17.692151 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:17.692071 1131950 retry.go:31] will retry after 1.19065976s: waiting for machine to come up
	I0328 01:02:18.704949 1130827 start.go:360] acquireMachinesLock for no-preload-248059: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 01:02:18.884525 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:18.885019 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:18.885044 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:18.884980 1131950 retry.go:31] will retry after 1.434726863s: waiting for machine to come up
	I0328 01:02:20.321473 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:20.322009 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:20.322035 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:20.321951 1131950 retry.go:31] will retry after 1.275277555s: waiting for machine to come up
	I0328 01:02:21.599454 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:21.600049 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:21.600074 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:21.599982 1131950 retry.go:31] will retry after 1.852516502s: waiting for machine to come up
	I0328 01:02:23.455282 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:23.455760 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:23.455830 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:23.455746 1131950 retry.go:31] will retry after 2.056736141s: waiting for machine to come up
	I0328 01:02:25.514112 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:25.514538 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:25.514569 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:25.514492 1131950 retry.go:31] will retry after 2.711520437s: waiting for machine to come up
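The repeated "will retry after ...: waiting for machine to come up" lines above are the driver polling for the VM's IP with intervals that grow (and jitter) between attempts. Below is a hedged sketch of such a backoff wait; lookupIP is a stand-in for inspecting the libvirt DHCP leases, not the call the kvm2 driver actually uses.

```go
// Sketch of waiting for a machine IP with growing, jittered retry intervals.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func lookupIP(domain string) (string, bool) {
	// Placeholder: the real flow inspects the libvirt network's DHCP leases.
	return "", false
}

func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	wait := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := lookupIP(domain); ok {
			return ip, nil
		}
		// Add jitter and grow the interval, mirroring the increasing delays in the log.
		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if wait < 4*time.Second {
			wait *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	if ip, err := waitForIP("embed-certs-808809", 3*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("machine IP:", ip)
	}
}
```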
	I0328 01:02:32.751719 1131323 start.go:364] duration metric: took 3m27.302408957s to acquireMachinesLock for "old-k8s-version-986088"
	I0328 01:02:32.751823 1131323 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:02:32.751833 1131323 fix.go:54] fixHost starting: 
	I0328 01:02:32.752289 1131323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:02:32.752326 1131323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:02:32.770119 1131323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45807
	I0328 01:02:32.770723 1131323 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:02:32.771352 1131323 main.go:141] libmachine: Using API Version  1
	I0328 01:02:32.771380 1131323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:02:32.771790 1131323 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:02:32.772020 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:32.772206 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetState
	I0328 01:02:32.773947 1131323 fix.go:112] recreateIfNeeded on old-k8s-version-986088: state=Stopped err=<nil>
	I0328 01:02:32.773980 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	W0328 01:02:32.774166 1131323 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:02:32.776416 1131323 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-986088" ...
	I0328 01:02:28.229576 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:28.229970 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:28.230000 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:28.229920 1131950 retry.go:31] will retry after 3.231405371s: waiting for machine to come up
	I0328 01:02:31.463477 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.463884 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has current primary IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.463902 1130949 main.go:141] libmachine: (embed-certs-808809) Found IP for machine: 192.168.72.210
	I0328 01:02:31.463915 1130949 main.go:141] libmachine: (embed-certs-808809) Reserving static IP address...
	I0328 01:02:31.464394 1130949 main.go:141] libmachine: (embed-certs-808809) Reserved static IP address: 192.168.72.210
	I0328 01:02:31.464413 1130949 main.go:141] libmachine: (embed-certs-808809) Waiting for SSH to be available...
	I0328 01:02:31.464439 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "embed-certs-808809", mac: "52:54:00:60:d4:d2", ip: "192.168.72.210"} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.464464 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | skip adding static IP to network mk-embed-certs-808809 - found existing host DHCP lease matching {name: "embed-certs-808809", mac: "52:54:00:60:d4:d2", ip: "192.168.72.210"}
	I0328 01:02:31.464480 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Getting to WaitForSSH function...
	I0328 01:02:31.466488 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.466876 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.466916 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.467054 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Using SSH client type: external
	I0328 01:02:31.467085 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa (-rw-------)
	I0328 01:02:31.467124 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:02:31.467138 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | About to run SSH command:
	I0328 01:02:31.467155 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | exit 0
	I0328 01:02:31.590708 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | SSH cmd err, output: <nil>: 
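The WaitForSSH step above shells out to the external ssh client and runs "exit 0" against the guest until the command succeeds. A minimal sketch of that probe is below; the address and key path are taken from the log, while the helper itself is illustrative rather than the code minikube runs.

```go
// Probe SSH availability by running "exit 0" over the system ssh client.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshReady(addr, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+addr,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	addr := "192.168.72.210"
	key := "/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa"
	for i := 0; i < 30; i++ {
		if sshReady(addr, key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
```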
	I0328 01:02:31.591111 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetConfigRaw
	I0328 01:02:31.591959 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:31.594592 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.595075 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.595114 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.595364 1130949 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/config.json ...
	I0328 01:02:31.595634 1130949 machine.go:94] provisionDockerMachine start ...
	I0328 01:02:31.595656 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:31.595901 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.598184 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.598529 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.598556 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.598681 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:31.598851 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.599012 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.599163 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:31.599333 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:31.599604 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:31.599619 1130949 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:02:31.703241 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:02:31.703272 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetMachineName
	I0328 01:02:31.703575 1130949 buildroot.go:166] provisioning hostname "embed-certs-808809"
	I0328 01:02:31.703602 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetMachineName
	I0328 01:02:31.703779 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.706495 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.706777 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.706799 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.706978 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:31.707146 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.707334 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.707580 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:31.707765 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:31.707985 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:31.708004 1130949 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-808809 && echo "embed-certs-808809" | sudo tee /etc/hostname
	I0328 01:02:31.821578 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-808809
	
	I0328 01:02:31.821608 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.824412 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.824791 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.824825 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.825030 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:31.825253 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.825432 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.825589 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:31.825758 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:31.825950 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:31.825976 1130949 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-808809' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-808809/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-808809' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:02:31.937655 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
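The shell fragment a few lines above makes sure /etc/hosts maps 127.0.1.1 to the new hostname: rewrite an existing 127.0.1.1 line if present, otherwise append one. The sketch below applies the same transformation to the file contents in Go; it works on a string rather than editing /etc/hosts in place, and is only an illustration of the logic.

```go
// Ensure an /etc/hosts-style document maps 127.0.1.1 to the given hostname.
package main

import (
	"fmt"
	"strings"
)

func ensureHostname(hosts, name string) string {
	if strings.Contains(hosts, name) {
		return hosts // already mapped
	}
	entry := "127.0.1.1 " + name
	lines := strings.Split(hosts, "\n")
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = entry // replace the existing 127.0.1.1 mapping
			return strings.Join(lines, "\n")
		}
	}
	return strings.TrimRight(hosts, "\n") + "\n" + entry + "\n"
}

func main() {
	in := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(ensureHostname(in, "embed-certs-808809"))
}
```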
	I0328 01:02:31.937701 1130949 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:02:31.937728 1130949 buildroot.go:174] setting up certificates
	I0328 01:02:31.937742 1130949 provision.go:84] configureAuth start
	I0328 01:02:31.937754 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetMachineName
	I0328 01:02:31.938093 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:31.940874 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.941328 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.941360 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.941580 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.944250 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.944580 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.944610 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.944828 1130949 provision.go:143] copyHostCerts
	I0328 01:02:31.944910 1130949 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:02:31.944926 1130949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:02:31.945006 1130949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:02:31.945151 1130949 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:02:31.945162 1130949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:02:31.945205 1130949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:02:31.945285 1130949 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:02:31.945294 1130949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:02:31.945330 1130949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:02:31.945400 1130949 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.embed-certs-808809 san=[127.0.0.1 192.168.72.210 embed-certs-808809 localhost minikube]
	I0328 01:02:32.070925 1130949 provision.go:177] copyRemoteCerts
	I0328 01:02:32.071007 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:02:32.071067 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.073876 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.074295 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.074339 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.074541 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.074754 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.074931 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.075091 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.158945 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:02:32.184903 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0328 01:02:32.210411 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:02:32.235788 1130949 provision.go:87] duration metric: took 298.03126ms to configureAuth
	I0328 01:02:32.235827 1130949 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:02:32.236116 1130949 config.go:182] Loaded profile config "embed-certs-808809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:02:32.236336 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.239186 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.239520 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.239555 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.239782 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.240036 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.240257 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.240431 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.240633 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:32.240836 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:32.240862 1130949 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:02:32.513263 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:02:32.513298 1130949 machine.go:97] duration metric: took 917.647337ms to provisionDockerMachine
	I0328 01:02:32.513314 1130949 start.go:293] postStartSetup for "embed-certs-808809" (driver="kvm2")
	I0328 01:02:32.513326 1130949 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:02:32.513365 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.513727 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:02:32.513770 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.516906 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.517382 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.517425 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.517603 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.517831 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.517989 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.518115 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.600013 1130949 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:02:32.604953 1130949 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:02:32.604983 1130949 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:02:32.605057 1130949 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:02:32.605148 1130949 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:02:32.605265 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:02:32.617685 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:02:32.646415 1130949 start.go:296] duration metric: took 133.084551ms for postStartSetup
	I0328 01:02:32.646462 1130949 fix.go:56] duration metric: took 18.943034019s for fixHost
	I0328 01:02:32.646490 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.649346 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.649686 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.649717 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.649864 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.650191 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.650444 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.650637 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.650844 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:32.651036 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:32.651069 1130949 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 01:02:32.751522 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587752.718800758
	
	I0328 01:02:32.751547 1130949 fix.go:216] guest clock: 1711587752.718800758
	I0328 01:02:32.751556 1130949 fix.go:229] Guest: 2024-03-28 01:02:32.718800758 +0000 UTC Remote: 2024-03-28 01:02:32.646466137 +0000 UTC m=+284.780134501 (delta=72.334621ms)
	I0328 01:02:32.751598 1130949 fix.go:200] guest clock delta is within tolerance: 72.334621ms
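The fix.go lines above read the guest clock (via date with second.nanosecond output), compare it with the host clock, and accept the machine when the delta is inside a tolerance (here 72ms). A small sketch of that comparison follows; the 2s tolerance is an assumption for illustration, not necessarily minikube's value.

```go
// Compare a guest unix timestamp against the local clock and check tolerance.
package main

import (
	"fmt"
	"math"
	"time"
)

func clockDeltaOK(guestUnix float64, tolerance time.Duration) (time.Duration, bool) {
	guest := time.Unix(0, int64(guestUnix*float64(time.Second)))
	delta := time.Since(guest)
	return delta, math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	// Simulate a guest clock roughly 72ms behind the host, as in the log.
	guest := float64(time.Now().UnixNano())/1e9 - 0.072
	delta, ok := clockDeltaOK(guest, 2*time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
}
```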
	I0328 01:02:32.751610 1130949 start.go:83] releasing machines lock for "embed-certs-808809", held for 19.048217918s
	I0328 01:02:32.751638 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.751953 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:32.754795 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.755205 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.755240 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.755454 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.756111 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.756320 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.756412 1130949 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:02:32.756475 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.756612 1130949 ssh_runner.go:195] Run: cat /version.json
	I0328 01:02:32.756646 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.759337 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.759468 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.759788 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.759808 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.759845 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.759866 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.760009 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.760018 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.760214 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.760222 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.760364 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.760532 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.760639 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.760698 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.840137 1130949 ssh_runner.go:195] Run: systemctl --version
	I0328 01:02:32.874039 1130949 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:02:33.020534 1130949 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:02:33.027141 1130949 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:02:33.027213 1130949 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:02:33.043738 1130949 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:02:33.043767 1130949 start.go:494] detecting cgroup driver to use...
	I0328 01:02:33.043840 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:02:33.064332 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:02:33.081926 1130949 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:02:33.082016 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:02:33.097179 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:02:33.113157 1130949 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:02:33.233183 1130949 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:02:33.374061 1130949 docker.go:233] disabling docker service ...
	I0328 01:02:33.374145 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:02:33.389813 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:02:33.403439 1130949 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:02:33.546146 1130949 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:02:33.706968 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:02:33.722279 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:02:33.742578 1130949 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 01:02:33.742652 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.754966 1130949 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:02:33.755027 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.767170 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.779960 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.792448 1130949 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:02:33.804912 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.818038 1130949 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.838794 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
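The run of "sed -i" commands above rewrites /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch the cgroup manager to cgroupfs, set conmon_cgroup, and add the unprivileged-port sysctl. The sketch below performs equivalent rewrites on the config text in Go (regex-based, on a string, not over SSH), as an illustration of what those edits do rather than a reproduction of the exact commands.

```go
// Patch a crio.conf-style document the way the sed commands in the log do.
package main

import (
	"fmt"
	"regexp"
)

func patchCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.5\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(patchCrioConf(in))
}
```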
	I0328 01:02:33.852157 1130949 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:02:33.862921 1130949 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:02:33.862981 1130949 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:02:33.880973 1130949 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:02:33.892698 1130949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:02:34.029903 1130949 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 01:02:34.170977 1130949 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:02:34.171074 1130949 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:02:34.176652 1130949 start.go:562] Will wait 60s for crictl version
	I0328 01:02:34.176736 1130949 ssh_runner.go:195] Run: which crictl
	I0328 01:02:34.180993 1130949 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:02:34.224564 1130949 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:02:34.224675 1130949 ssh_runner.go:195] Run: crio --version
	I0328 01:02:34.254457 1130949 ssh_runner.go:195] Run: crio --version
	I0328 01:02:34.287281 1130949 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0328 01:02:32.778280 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .Start
	I0328 01:02:32.778470 1131323 main.go:141] libmachine: (old-k8s-version-986088) Ensuring networks are active...
	I0328 01:02:32.779179 1131323 main.go:141] libmachine: (old-k8s-version-986088) Ensuring network default is active
	I0328 01:02:32.779577 1131323 main.go:141] libmachine: (old-k8s-version-986088) Ensuring network mk-old-k8s-version-986088 is active
	I0328 01:02:32.779982 1131323 main.go:141] libmachine: (old-k8s-version-986088) Getting domain xml...
	I0328 01:02:32.780732 1131323 main.go:141] libmachine: (old-k8s-version-986088) Creating domain...
	I0328 01:02:34.066287 1131323 main.go:141] libmachine: (old-k8s-version-986088) Waiting to get IP...
	I0328 01:02:34.067193 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.067618 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.067684 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.067586 1132067 retry.go:31] will retry after 291.270379ms: waiting for machine to come up
	I0328 01:02:34.360203 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.360690 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.360721 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.360638 1132067 retry.go:31] will retry after 234.968456ms: waiting for machine to come up
	I0328 01:02:34.597291 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.597818 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.597849 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.597750 1132067 retry.go:31] will retry after 382.522593ms: waiting for machine to come up
	I0328 01:02:34.982502 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.983176 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.983205 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.983133 1132067 retry.go:31] will retry after 436.332635ms: waiting for machine to come up
	I0328 01:02:34.288748 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:34.292122 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:34.292516 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:34.292556 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:34.292869 1130949 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0328 01:02:34.298738 1130949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:02:34.313529 1130949 kubeadm.go:877] updating cluster {Name:embed-certs-808809 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-808809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:02:34.313698 1130949 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 01:02:34.313762 1130949 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:02:34.356518 1130949 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0328 01:02:34.356614 1130949 ssh_runner.go:195] Run: which lz4
	I0328 01:02:34.361492 1130949 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0328 01:02:34.366053 1130949 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 01:02:34.366090 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0328 01:02:36.024197 1130949 crio.go:462] duration metric: took 1.662731937s to copy over tarball
	I0328 01:02:36.024287 1130949 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 01:02:35.421623 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:35.422164 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:35.422198 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:35.422135 1132067 retry.go:31] will retry after 700.861268ms: waiting for machine to come up
	I0328 01:02:36.124589 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:36.125001 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:36.125031 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:36.124948 1132067 retry.go:31] will retry after 932.342478ms: waiting for machine to come up
	I0328 01:02:37.058954 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:37.059390 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:37.059424 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:37.059332 1132067 retry.go:31] will retry after 1.163248691s: waiting for machine to come up
	I0328 01:02:38.224574 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:38.225019 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:38.225053 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:38.224959 1132067 retry.go:31] will retry after 1.13372539s: waiting for machine to come up
	I0328 01:02:39.360393 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:39.360953 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:39.360984 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:39.360906 1132067 retry.go:31] will retry after 1.793272671s: waiting for machine to come up
	I0328 01:02:38.420741 1130949 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.396415089s)
	I0328 01:02:38.420788 1130949 crio.go:469] duration metric: took 2.39655808s to extract the tarball
	I0328 01:02:38.420797 1130949 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0328 01:02:38.459869 1130949 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:02:38.505999 1130949 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 01:02:38.506030 1130949 cache_images.go:84] Images are preloaded, skipping loading
	I0328 01:02:38.506039 1130949 kubeadm.go:928] updating node { 192.168.72.210 8443 v1.29.3 crio true true} ...
	I0328 01:02:38.506185 1130949 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-808809 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-808809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:02:38.506301 1130949 ssh_runner.go:195] Run: crio config
	I0328 01:02:38.551608 1130949 cni.go:84] Creating CNI manager for ""
	I0328 01:02:38.551633 1130949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:02:38.551646 1130949 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:02:38.551673 1130949 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.210 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-808809 NodeName:embed-certs-808809 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 01:02:38.551813 1130949 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-808809"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 01:02:38.551881 1130949 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 01:02:38.562640 1130949 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:02:38.562732 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:02:38.572870 1130949 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0328 01:02:38.590866 1130949 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:02:38.608302 1130949 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0328 01:02:38.626925 1130949 ssh_runner.go:195] Run: grep 192.168.72.210	control-plane.minikube.internal$ /etc/hosts
	I0328 01:02:38.631111 1130949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.210	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:02:38.644528 1130949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:02:38.785485 1130949 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:02:38.804087 1130949 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809 for IP: 192.168.72.210
	I0328 01:02:38.804113 1130949 certs.go:194] generating shared ca certs ...
	I0328 01:02:38.804132 1130949 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:02:38.804285 1130949 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:02:38.804326 1130949 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:02:38.804363 1130949 certs.go:256] generating profile certs ...
	I0328 01:02:38.804505 1130949 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/client.key
	I0328 01:02:38.804588 1130949 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/apiserver.key.bdc16448
	I0328 01:02:38.804638 1130949 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/proxy-client.key
	I0328 01:02:38.804798 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:02:38.804829 1130949 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:02:38.804836 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:02:38.804860 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:02:38.804882 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:02:38.804902 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:02:38.804943 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:02:38.805829 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:02:38.864847 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:02:38.899197 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:02:38.926734 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:02:38.958277 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0328 01:02:38.997201 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:02:39.023136 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:02:39.048459 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:02:39.074052 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:02:39.099326 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:02:39.124775 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:02:39.149638 1130949 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:02:39.169169 1130949 ssh_runner.go:195] Run: openssl version
	I0328 01:02:39.175948 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:02:39.188255 1130949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:02:39.194296 1130949 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:02:39.194374 1130949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:02:39.201138 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:02:39.213554 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:02:39.226474 1130949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:02:39.232074 1130949 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:02:39.232149 1130949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:02:39.238733 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:02:39.250983 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:02:39.263746 1130949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:02:39.268967 1130949 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:02:39.269038 1130949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:02:39.275589 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
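The symlink names used in the three commands above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash convention: when scanning /etc/ssl/certs, OpenSSL looks for files named <subject-hash>.0. A minimal sketch of the same idiom, using the minikubeCA certificate from this run:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")    # prints the subject hash, e.g. b5213941
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"   # the name OpenSSL expects when scanning the directory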
	I0328 01:02:39.287731 1130949 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:02:39.292985 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:02:39.300366 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:02:39.307241 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:02:39.314522 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:02:39.321070 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:02:39.327777 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
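Each -checkend 86400 call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; openssl exits 0 if so and non-zero otherwise, which is why these runs produce no output when the certificates are healthy. For example:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "valid for at least another day" \
      || echo "expires within 24h"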
	I0328 01:02:39.334174 1130949 kubeadm.go:391] StartCluster: {Name:embed-certs-808809 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.3 ClusterName:embed-certs-808809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:02:39.334310 1130949 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:02:39.334367 1130949 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:02:39.376035 1130949 cri.go:89] found id: ""
	I0328 01:02:39.376145 1130949 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:02:39.387349 1130949 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:02:39.387377 1130949 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:02:39.387385 1130949 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:02:39.387469 1130949 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:02:39.397918 1130949 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:02:39.399122 1130949 kubeconfig.go:125] found "embed-certs-808809" server: "https://192.168.72.210:8443"
	I0328 01:02:39.401219 1130949 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:02:39.411475 1130949 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.210
	I0328 01:02:39.411562 1130949 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:02:39.411583 1130949 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:02:39.411650 1130949 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:02:39.449529 1130949 cri.go:89] found id: ""
	I0328 01:02:39.449638 1130949 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:02:39.468553 1130949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:02:39.479489 1130949 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:02:39.479522 1130949 kubeadm.go:156] found existing configuration files:
	
	I0328 01:02:39.479589 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:02:39.489619 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:02:39.489689 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:02:39.499726 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:02:39.509362 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:02:39.509447 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:02:39.519262 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:02:39.528858 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:02:39.528920 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:02:39.538784 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:02:39.548517 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:02:39.548593 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:02:39.559931 1130949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:02:39.574178 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:39.706243 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.342144 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.559108 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.636713 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.743171 1130949 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:02:40.743269 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:02:41.243401 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:02:41.743363 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:02:41.776504 1130949 api_server.go:72] duration metric: took 1.033329844s to wait for apiserver process to appear ...
	I0328 01:02:41.776547 1130949 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:02:41.776574 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:41.777140 1130949 api_server.go:269] stopped: https://192.168.72.210:8443/healthz: Get "https://192.168.72.210:8443/healthz": dial tcp 192.168.72.210:8443: connect: connection refused
	I0328 01:02:42.276690 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:41.156898 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:41.157309 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:41.157336 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:41.157263 1132067 retry.go:31] will retry after 1.863775673s: waiting for machine to come up
	I0328 01:02:43.023074 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:43.023470 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:43.023507 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:43.023419 1132067 retry.go:31] will retry after 2.73600503s: waiting for machine to come up
	I0328 01:02:44.743286 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:02:44.743383 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:02:44.743412 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:44.822370 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:02:44.822416 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:02:44.822436 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:44.847406 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:02:44.847462 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:02:45.276899 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:45.281884 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:02:45.281919 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:02:45.777495 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:45.783673 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:02:45.783704 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:02:46.277372 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:46.282281 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 200:
	ok
	I0328 01:02:46.291242 1130949 api_server.go:141] control plane version: v1.29.3
	I0328 01:02:46.291287 1130949 api_server.go:131] duration metric: took 4.514730698s to wait for apiserver health ...
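The sequence above (connection refused, then 403 from an anonymous probe, then 500 while poststarthooks finish, then 200 "ok") is the normal progression of an apiserver restart. A rough shell approximation of the same polling loop, with the caveat that minikube's client presents cluster credentials rather than probing anonymously:

    url=https://192.168.72.210:8443/healthz
    until curl -ksf "$url" >/dev/null; do   # -k: self-signed CA, -f: treat 403/500 as failure
      curl -ks "$url"                       # show the failing body, as the log does
      sleep 0.5
    done
    echo "apiserver healthy"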
	I0328 01:02:46.291301 1130949 cni.go:84] Creating CNI manager for ""
	I0328 01:02:46.291310 1130949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:02:46.293461 1130949 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:02:46.294971 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:02:46.312955 1130949 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:02:46.345653 1130949 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:02:46.355470 1130949 system_pods.go:59] 8 kube-system pods found
	I0328 01:02:46.355506 1130949 system_pods.go:61] "coredns-76f75df574-pr5d8" [90a6f3d5-6f33-4c41-804b-4b20c518aa23] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:02:46.355512 1130949 system_pods.go:61] "etcd-embed-certs-808809" [93b6b8ee-f83f-4848-b2c5-912ec07acd52] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 01:02:46.355519 1130949 system_pods.go:61] "kube-apiserver-embed-certs-808809" [22eb788f-4647-4a07-b5bf-ecdd54c28fcd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 01:02:46.355530 1130949 system_pods.go:61] "kube-controller-manager-embed-certs-808809" [83fecd9f-c0de-4afe-b5b5-7c04bd3adc20] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 01:02:46.355545 1130949 system_pods.go:61] "kube-proxy-qwzpg" [57a814c6-54c8-4fa7-b7d7-bcdd4bbc91d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:02:46.355553 1130949 system_pods.go:61] "kube-scheduler-embed-certs-808809" [0b229d84-43fb-45ee-8d49-39204812d490] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 01:02:46.355568 1130949 system_pods.go:61] "metrics-server-57f55c9bc5-swsxp" [4b20e133-3054-4806-9b7f-44d8c8c35a4d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:02:46.355580 1130949 system_pods.go:61] "storage-provisioner" [59303061-19e3-4aed-8753-804988a2a44e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:02:46.355590 1130949 system_pods.go:74] duration metric: took 9.908316ms to wait for pod list to return data ...
	I0328 01:02:46.355603 1130949 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:02:46.358936 1130949 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:02:46.358987 1130949 node_conditions.go:123] node cpu capacity is 2
	I0328 01:02:46.359006 1130949 node_conditions.go:105] duration metric: took 3.394695ms to run NodePressure ...
	I0328 01:02:46.359054 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:46.686479 1130949 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 01:02:46.692502 1130949 kubeadm.go:733] kubelet initialised
	I0328 01:02:46.692526 1130949 kubeadm.go:734] duration metric: took 6.022393ms waiting for restarted kubelet to initialise ...
	I0328 01:02:46.692534 1130949 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:02:46.699146 1130949 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-pr5d8" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:45.762440 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:45.762891 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:45.762915 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:45.762845 1132067 retry.go:31] will retry after 2.201941476s: waiting for machine to come up
	I0328 01:02:47.966601 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:47.967196 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:47.967237 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:47.967144 1132067 retry.go:31] will retry after 4.122216816s: waiting for machine to come up
	I0328 01:02:48.709890 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:51.207697 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:53.391471 1131600 start.go:364] duration metric: took 2m47.603687739s to acquireMachinesLock for "default-k8s-diff-port-283961"
	I0328 01:02:53.391553 1131600 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:02:53.391565 1131600 fix.go:54] fixHost starting: 
	I0328 01:02:53.391980 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:02:53.392031 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:02:53.409035 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34073
	I0328 01:02:53.409556 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:02:53.410105 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:02:53.410136 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:02:53.410492 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:02:53.410734 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:02:53.410903 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:02:53.412710 1131600 fix.go:112] recreateIfNeeded on default-k8s-diff-port-283961: state=Stopped err=<nil>
	I0328 01:02:53.412739 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	W0328 01:02:53.412927 1131600 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:02:53.414773 1131600 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-283961" ...
	I0328 01:02:52.091210 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.091759 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has current primary IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.091794 1131323 main.go:141] libmachine: (old-k8s-version-986088) Found IP for machine: 192.168.50.174
	I0328 01:02:52.091841 1131323 main.go:141] libmachine: (old-k8s-version-986088) Reserving static IP address...
	I0328 01:02:52.092295 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "old-k8s-version-986088", mac: "52:54:00:f6:94:40", ip: "192.168.50.174"} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.092321 1131323 main.go:141] libmachine: (old-k8s-version-986088) Reserved static IP address: 192.168.50.174
	I0328 01:02:52.092343 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | skip adding static IP to network mk-old-k8s-version-986088 - found existing host DHCP lease matching {name: "old-k8s-version-986088", mac: "52:54:00:f6:94:40", ip: "192.168.50.174"}
	I0328 01:02:52.092356 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | Getting to WaitForSSH function...
	I0328 01:02:52.092373 1131323 main.go:141] libmachine: (old-k8s-version-986088) Waiting for SSH to be available...
	I0328 01:02:52.094682 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.095012 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.095033 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.095158 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | Using SSH client type: external
	I0328 01:02:52.095180 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa (-rw-------)
	I0328 01:02:52.095208 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:02:52.095218 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | About to run SSH command:
	I0328 01:02:52.095232 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | exit 0
	I0328 01:02:52.218494 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | SSH cmd err, output: <nil>: 
	I0328 01:02:52.218983 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetConfigRaw
	I0328 01:02:52.219663 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:52.222349 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.222791 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.222823 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.223191 1131323 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/config.json ...
	I0328 01:02:52.223388 1131323 machine.go:94] provisionDockerMachine start ...
	I0328 01:02:52.223409 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:52.223605 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.225686 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.225999 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.226038 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.226131 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.226341 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.226507 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.226633 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.226802 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.227078 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.227095 1131323 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:02:52.327218 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:02:52.327249 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 01:02:52.327515 1131323 buildroot.go:166] provisioning hostname "old-k8s-version-986088"
	I0328 01:02:52.327542 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 01:02:52.327754 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.330253 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.330661 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.330691 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.330827 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.331048 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.331258 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.331406 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.331593 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.331772 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.331783 1131323 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-986088 && echo "old-k8s-version-986088" | sudo tee /etc/hostname
	I0328 01:02:52.445910 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-986088
	
	I0328 01:02:52.445943 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.449023 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.449320 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.449358 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.449595 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.449810 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.449970 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.450116 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.450310 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.450572 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.450640 1131323 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-986088' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-986088/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-986088' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:02:52.567493 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:02:52.567529 1131323 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:02:52.567559 1131323 buildroot.go:174] setting up certificates
	I0328 01:02:52.567573 1131323 provision.go:84] configureAuth start
	I0328 01:02:52.567587 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 01:02:52.567944 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:52.570860 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.571363 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.571400 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.571547 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.574052 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.574483 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.574517 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.574619 1131323 provision.go:143] copyHostCerts
	I0328 01:02:52.574698 1131323 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:02:52.574710 1131323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:02:52.574778 1131323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:02:52.574894 1131323 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:02:52.574908 1131323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:02:52.574985 1131323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:02:52.575086 1131323 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:02:52.575095 1131323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:02:52.575117 1131323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:02:52.575194 1131323 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-986088 san=[127.0.0.1 192.168.50.174 localhost minikube old-k8s-version-986088]
	I0328 01:02:52.688709 1131323 provision.go:177] copyRemoteCerts
	I0328 01:02:52.688776 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:02:52.688809 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.691529 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.691977 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.692024 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.692188 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.692425 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.692620 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.692774 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:52.777200 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0328 01:02:52.808740 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:02:52.836646 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:02:52.862627 1131323 provision.go:87] duration metric: took 295.032419ms to configureAuth
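
The configureAuth step above generates a server certificate whose SANs are the loopback address, the VM's DHCP address and the machine's host names, then ships it to /etc/docker on the guest. Below is a minimal Go sketch of just the certificate template, assuming the org and 26280h expiry shown in the log; serverCertTemplate is an illustrative name and the CA signing step (ca.pem / ca-key.pem) is omitted, so this is not minikube's actual provision.go code.

package main

import (
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// serverCertTemplate builds an x509 template with the SAN set visible in the
// log (127.0.0.1, the VM IP, localhost, minikube and the machine name).
// Signing it with the CA key pair and writing server.pem/server-key.pem is
// omitted; this is only a sketch of the template, not minikube's code.
func serverCertTemplate(vmIP, machineName string) *x509.Certificate {
	return &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins." + machineName}},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP(vmIP)},
		DNSNames:     []string{"localhost", "minikube", machineName},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
}

func main() {
	t := serverCertTemplate("192.168.50.174", "old-k8s-version-986088")
	fmt.Println("server cert SANs:", t.DNSNames, t.IPAddresses)
}
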
	I0328 01:02:52.862668 1131323 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:02:52.862908 1131323 config.go:182] Loaded profile config "old-k8s-version-986088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0328 01:02:52.863019 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.865838 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.866585 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.866630 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.867271 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.867521 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.867687 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.867826 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.867961 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.868176 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.868194 1131323 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:02:53.154903 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:02:53.154936 1131323 machine.go:97] duration metric: took 931.534047ms to provisionDockerMachine
	I0328 01:02:53.154949 1131323 start.go:293] postStartSetup for "old-k8s-version-986088" (driver="kvm2")
	I0328 01:02:53.154961 1131323 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:02:53.154997 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.155353 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:02:53.155386 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.158072 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.158448 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.158482 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.158612 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.158825 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.158974 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.159102 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:53.243411 1131323 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:02:53.247745 1131323 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:02:53.247769 1131323 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:02:53.247830 1131323 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:02:53.247903 1131323 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:02:53.247990 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:02:53.258574 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:02:53.284249 1131323 start.go:296] duration metric: took 129.2844ms for postStartSetup
	I0328 01:02:53.284300 1131323 fix.go:56] duration metric: took 20.532468979s for fixHost
	I0328 01:02:53.284324 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.287097 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.287505 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.287534 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.287642 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.287874 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.288039 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.288225 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.288439 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:53.288601 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:53.288612 1131323 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0328 01:02:53.391262 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587773.373998758
	
	I0328 01:02:53.391292 1131323 fix.go:216] guest clock: 1711587773.373998758
	I0328 01:02:53.391299 1131323 fix.go:229] Guest: 2024-03-28 01:02:53.373998758 +0000 UTC Remote: 2024-03-28 01:02:53.284304642 +0000 UTC m=+227.998260980 (delta=89.694116ms)
	I0328 01:02:53.391341 1131323 fix.go:200] guest clock delta is within tolerance: 89.694116ms
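
The fix.go lines above read the guest clock over SSH (date +%s.%N), compare it with the time recorded on the host side and only resynchronise when the skew exceeds a tolerance. Here is a small hedged Go sketch of that comparison; checkClockSkew and the one-second tolerance are assumptions, not minikube's real constants.

package main

import (
	"fmt"
	"math"
	"time"
)

// clockSkewTolerance is an assumed threshold; the real value lives in fix.go.
const clockSkewTolerance = time.Second

// checkClockSkew takes the guest's clock as seconds since the epoch (the
// output of `date +%s.%N` over SSH) and reports the delta against the local
// clock plus whether it is small enough to skip resynchronisation.
func checkClockSkew(guestEpochSeconds float64) (time.Duration, bool) {
	guest := time.Unix(0, int64(guestEpochSeconds*float64(time.Second)))
	delta := time.Since(guest)
	return delta, math.Abs(float64(delta)) <= float64(clockSkewTolerance)
}

func main() {
	// 1711587773.373998758 is the guest timestamp printed in the log above.
	delta, ok := checkClockSkew(1711587773.373998758)
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
}
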
	I0328 01:02:53.391346 1131323 start.go:83] releasing machines lock for "old-k8s-version-986088", held for 20.639550927s
	I0328 01:02:53.391377 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.391728 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:53.394421 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.394780 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.394811 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.394932 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.395449 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.395729 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.395828 1131323 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:02:53.395883 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.395985 1131323 ssh_runner.go:195] Run: cat /version.json
	I0328 01:02:53.396014 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.398819 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399010 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399281 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.399320 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399451 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.399550 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.399620 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399640 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.399880 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.399902 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.400065 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.400081 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:53.400245 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.400445 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:53.514453 1131323 ssh_runner.go:195] Run: systemctl --version
	I0328 01:02:53.521123 1131323 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:02:53.678366 1131323 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:02:53.685402 1131323 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:02:53.685473 1131323 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:02:53.702781 1131323 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:02:53.702816 1131323 start.go:494] detecting cgroup driver to use...
	I0328 01:02:53.702900 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:02:53.720343 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:02:53.736749 1131323 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:02:53.736824 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:02:53.761087 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:02:53.779008 1131323 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:02:53.895064 1131323 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:02:54.060741 1131323 docker.go:233] disabling docker service ...
	I0328 01:02:54.060825 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:02:54.079139 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:02:54.093523 1131323 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:02:54.247544 1131323 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:02:54.396392 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:02:54.422612 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:02:54.443759 1131323 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0328 01:02:54.443817 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.459794 1131323 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:02:54.459875 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.472784 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.484963 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.496654 1131323 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:02:54.508382 1131323 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:02:54.518607 1131323 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:02:54.518687 1131323 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:02:54.532356 1131323 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:02:54.544424 1131323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:02:54.685782 1131323 ssh_runner.go:195] Run: sudo systemctl restart crio
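
The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup), removes the stale CNI config, enables IPv4 forwarding and restarts CRI-O. A condensed Go sketch of the same commands follows; the local exec runner here is only a stand-in for minikube's SSH runner, so treat it as an illustration rather than the real ssh_runner API.

package main

import (
	"fmt"
	"os/exec"
)

// crioSetupCommands mirrors the remote commands visible in the log above:
// point CRI-O at the pause image, switch it to the cgroupfs driver, drop any
// stale CNI config, enable IP forwarding and restart the service.
var crioSetupCommands = []string{
	`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo rm -rf /etc/cni/net.mk`,
	`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
	`sudo systemctl daemon-reload`,
	`sudo systemctl restart crio`,
}

// runCrioSetup executes the commands locally for illustration; minikube runs
// them on the guest through its SSH runner instead.
func runCrioSetup() error {
	for _, c := range crioSetupCommands {
		if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
			return fmt.Errorf("%q failed: %v\n%s", c, err, out)
		}
	}
	return nil
}

func main() {
	if err := runCrioSetup(); err != nil {
		fmt.Println(err)
	}
}
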
	I0328 01:02:54.847233 1131323 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:02:54.847314 1131323 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:02:54.853148 1131323 start.go:562] Will wait 60s for crictl version
	I0328 01:02:54.853248 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:02:54.857536 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:02:54.901937 1131323 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:02:54.902082 1131323 ssh_runner.go:195] Run: crio --version
	I0328 01:02:54.935571 1131323 ssh_runner.go:195] Run: crio --version
	I0328 01:02:54.971452 1131323 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0328 01:02:54.972964 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:54.976523 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:54.976985 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:54.977017 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:54.977369 1131323 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0328 01:02:54.982326 1131323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
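
The grep/rewrite pair above is the idempotent way the guest's /etc/hosts is pinned: when the entry is missing, any stale line for the name is filtered out and the fresh IP<tab>name mapping appended via a temp file. A sketch of building that one-liner in Go for an arbitrary pair; ensureHostsEntry is a hypothetical helper, not a minikube function.

package main

import "fmt"

// ensureHostsEntry builds the shell one-liner seen in the log: drop any stale
// line for the name, append the fresh "IP<tab>name" mapping, and copy the
// result back over /etc/hosts via a temp file.
func ensureHostsEntry(ip, name string) string {
	line := ip + "\t" + name // literal tab between IP and hostname
	return fmt.Sprintf(`{ grep -v $'\t%s$' "/etc/hosts"; echo "%s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`, name, line)
}

func main() {
	// Printed, not executed: the same pattern is used for host.minikube.internal
	// here and for control-plane.minikube.internal further down the log.
	fmt.Println(ensureHostsEntry("192.168.50.1", "host.minikube.internal"))
}
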
	I0328 01:02:54.996239 1131323 kubeadm.go:877] updating cluster {Name:old-k8s-version-986088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:02:54.996371 1131323 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0328 01:02:54.996433 1131323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:02:55.045404 1131323 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0328 01:02:55.045483 1131323 ssh_runner.go:195] Run: which lz4
	I0328 01:02:55.050226 1131323 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0328 01:02:55.055182 1131323 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 01:02:55.055221 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0328 01:02:53.416101 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Start
	I0328 01:02:53.416332 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Ensuring networks are active...
	I0328 01:02:53.417021 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Ensuring network default is active
	I0328 01:02:53.417446 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Ensuring network mk-default-k8s-diff-port-283961 is active
	I0328 01:02:53.417857 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Getting domain xml...
	I0328 01:02:53.418555 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Creating domain...
	I0328 01:02:54.777201 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting to get IP...
	I0328 01:02:54.778055 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:54.778563 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:54.778705 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:54.778537 1132240 retry.go:31] will retry after 259.031702ms: waiting for machine to come up
	I0328 01:02:55.039365 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.039926 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.039963 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:55.039860 1132240 retry.go:31] will retry after 254.124553ms: waiting for machine to come up
	I0328 01:02:55.295658 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.296218 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.296265 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:55.296174 1132240 retry.go:31] will retry after 349.637234ms: waiting for machine to come up
	I0328 01:02:55.647590 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.648356 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.648392 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:55.648298 1132240 retry.go:31] will retry after 446.471208ms: waiting for machine to come up
	I0328 01:02:53.707811 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:55.708380 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:57.213059 1130949 pod_ready.go:92] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:57.213097 1130949 pod_ready.go:81] duration metric: took 10.513921238s for pod "coredns-76f75df574-pr5d8" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.213113 1130949 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.222308 1130949 pod_ready.go:92] pod "etcd-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:57.222344 1130949 pod_ready.go:81] duration metric: took 9.214056ms for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.222357 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.231530 1130949 pod_ready.go:92] pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:57.231558 1130949 pod_ready.go:81] duration metric: took 9.192864ms for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.231568 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:56.994163 1131323 crio.go:462] duration metric: took 1.943992561s to copy over tarball
	I0328 01:02:56.994252 1131323 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 01:03:00.215115 1131323 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.220825311s)
	I0328 01:03:00.215159 1131323 crio.go:469] duration metric: took 3.22095583s to extract the tarball
	I0328 01:03:00.215171 1131323 ssh_runner.go:146] rm: /preloaded.tar.lz4
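
The preload path above stats /preloaded.tar.lz4, copies the ~473 MB preloaded-images tarball over when it is absent, unpacks it into /var with xattrs preserved and then deletes it, so a restart does not have to pull every image. A rough Go sketch of that flow follows, under the assumption that a generic command runner stands in for the SSH runner; run and preloadImages are illustrative names.

package main

import (
	"fmt"
	"os/exec"
)

const preloadTarball = "/preloaded.tar.lz4"

// run is a stand-in for minikube's SSH runner; here it just runs locally.
func run(cmd string) error {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q: %v\n%s", cmd, err, out)
	}
	return nil
}

// preloadImages mirrors the log: check for the tarball, unpack it into /var
// (preserving xattrs so image layers keep their capabilities), then remove it.
func preloadImages() error {
	// The log shows this stat failing, after which the ~473 MB tarball is
	// scp'd from the host cache; that copy step is elided in this sketch.
	if err := run("stat " + preloadTarball); err != nil {
		return fmt.Errorf("tarball missing, copy it from the host cache first: %w", err)
	}
	if err := run("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + preloadTarball); err != nil {
		return err
	}
	// Free the disk space once the images are in place.
	return run("sudo rm -f " + preloadTarball)
}

func main() {
	if err := preloadImages(); err != nil {
		fmt.Println(err)
	}
}
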
	I0328 01:03:00.259151 1131323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:00.298446 1131323 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0328 01:03:00.298492 1131323 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0328 01:03:00.298585 1131323 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:00.298601 1131323 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.298613 1131323 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.298644 1131323 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.298662 1131323 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.298698 1131323 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0328 01:03:00.298593 1131323 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.298585 1131323 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.300347 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.300424 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.300470 1131323 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.300474 1131323 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.300637 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.300652 1131323 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0328 01:03:00.300723 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.300793 1131323 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:02:56.095939 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.096463 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.096501 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:56.096412 1132240 retry.go:31] will retry after 490.029649ms: waiting for machine to come up
	I0328 01:02:56.588298 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.588835 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.588868 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:56.588796 1132240 retry.go:31] will retry after 831.356628ms: waiting for machine to come up
	I0328 01:02:57.421917 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:57.422410 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:57.422443 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:57.422353 1132240 retry.go:31] will retry after 1.164764985s: waiting for machine to come up
	I0328 01:02:58.588827 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:58.589183 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:58.589225 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:58.589119 1132240 retry.go:31] will retry after 1.307248783s: waiting for machine to come up
	I0328 01:02:59.897607 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:59.897976 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:59.898008 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:59.897926 1132240 retry.go:31] will retry after 1.560958271s: waiting for machine to come up
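
The interleaved default-k8s-diff-port-283961 lines show libmachine polling libvirt for the VM's DHCP lease and sleeping a little longer after each miss (259ms, 254ms, 349ms, ... 1.56s). A hedged Go sketch of that wait-for-IP loop; lookupIP and the jittered, growing delay are stand-ins for the retry.go helper seen in the log, not its actual implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("unable to find current IP address of domain")

// lookupIP stands in for querying libvirt's DHCP leases for the VM's MAC;
// here it simply succeeds after a few attempts.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoLease
	}
	return "192.168.39.x", nil // placeholder address
}

// waitForIP retries with a small, slowly growing, jittered delay, much like
// the "will retry after ..." lines emitted by retry.go in the log above.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for attempt := 0; time.Now().Before(deadline); attempt++ {
		if ip, err := lookupIP(attempt); err == nil {
			return ip, nil
		}
		wait := time.Duration(200+rand.Intn(200)) * time.Millisecond * time.Duration(attempt+1)
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
	}
	return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
}

func main() {
	if ip, err := waitForIP(30 * time.Second); err == nil {
		fmt.Println("machine is up at", ip)
	}
}
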
	I0328 01:02:58.241179 1130949 pod_ready.go:92] pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:58.241216 1130949 pod_ready.go:81] duration metric: took 1.00963904s for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.241245 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qwzpg" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.249787 1130949 pod_ready.go:92] pod "kube-proxy-qwzpg" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:58.249826 1130949 pod_ready.go:81] duration metric: took 8.571225ms for pod "kube-proxy-qwzpg" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.249840 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.405101 1130949 pod_ready.go:92] pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:58.405130 1130949 pod_ready.go:81] duration metric: took 155.281142ms for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.405141 1130949 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:00.412202 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:02.412688 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:00.499788 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0328 01:03:00.539135 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.541462 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.544184 1131323 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0328 01:03:00.544227 1131323 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0328 01:03:00.544261 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.555720 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.560189 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.562639 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.574105 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.681717 1131323 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0328 01:03:00.681742 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0328 01:03:00.681765 1131323 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.681803 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.682033 1131323 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0328 01:03:00.682076 1131323 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.682115 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.732868 1131323 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0328 01:03:00.732922 1131323 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.732988 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.742680 1131323 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0328 01:03:00.742730 1131323 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0328 01:03:00.742762 1131323 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.742777 1131323 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0328 01:03:00.742805 1131323 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.742770 1131323 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.742817 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.742851 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.742865 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.770435 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.770472 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0328 01:03:00.770567 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.770588 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.770727 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.770760 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.770728 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.882338 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0328 01:03:00.896602 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0328 01:03:00.918814 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0328 01:03:00.918869 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0328 01:03:00.918919 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0328 01:03:00.918968 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0328 01:03:01.186124 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:01.334547 1131323 cache_images.go:92] duration metric: took 1.036031169s to LoadCachedImages
	W0328 01:03:01.334676 1131323 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
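
The warning above is the tail of the image-cache fallback: crictl reported none of the v1.20.0 control-plane images as present, the stale tags were removed, and loading per-image tarballs from the local cache failed because those cache files do not exist (the images are then pulled from the registry during kubeadm init instead). Below is a simplified Go sketch of that decision, with imageInCRI, cacheFile and loadFromCache as hypothetical helpers and an illustrative cache directory.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// imageInCRI asks the container runtime whether an image is already present.
func imageInCRI(image string) bool {
	return exec.Command("sudo", "podman", "image", "inspect", image).Run() == nil
}

// cacheFile maps an image name to the on-disk cache name seen in the log
// (the ':' tag separator becomes '_', e.g. .../registry.k8s.io/pause_3.2).
func cacheFile(cacheDir, image string) string {
	return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
}

// loadFromCache only checks that the cached tarball exists; real code would
// then stream it into the runtime. The missing-file case is exactly what the
// warning above reports.
func loadFromCache(cacheDir, image string) error {
	if _, err := os.Stat(cacheFile(cacheDir, image)); err != nil {
		return fmt.Errorf("loading cached images: %w", err)
	}
	return nil
}

func main() {
	// cacheDir is illustrative; the job's real cache lives under the Jenkins workspace.
	cacheDir := "/home/jenkins/.minikube/cache/images/amd64"
	for _, img := range []string{"registry.k8s.io/pause:3.2", "registry.k8s.io/etcd:3.4.13-0"} {
		if imageInCRI(img) {
			continue // runtime already has it, nothing to load
		}
		if err := loadFromCache(cacheDir, img); err != nil {
			fmt.Println("X Unable to load cached images:", err)
		}
	}
}
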
	I0328 01:03:01.334694 1131323 kubeadm.go:928] updating node { 192.168.50.174 8443 v1.20.0 crio true true} ...
	I0328 01:03:01.334827 1131323 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-986088 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:03:01.334926 1131323 ssh_runner.go:195] Run: crio config
	I0328 01:03:01.391004 1131323 cni.go:84] Creating CNI manager for ""
	I0328 01:03:01.391034 1131323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:01.391054 1131323 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:03:01.391081 1131323 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.174 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-986088 NodeName:old-k8s-version-986088 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0328 01:03:01.391265 1131323 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-986088"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
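
The block above is the kubeadm config that minikube renders from the options struct logged at 01:03:01.391081 and later writes to /var/tmp/minikube/kubeadm.yaml.new. As a rough illustration of that render step (not minikube's actual generator, which lives in its bootstrapper package), the Go sketch below fills a cut-down template with the values visible in the log; the kubeadmParams struct and the template text are assumptions made for this example.

    package main

    import (
        "os"
        "text/template"
    )

    // kubeadmParams is a hypothetical, trimmed-down parameter set; the real
    // generator carries many more fields, as the kubeadm options dump above shows.
    type kubeadmParams struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
        PodSubnet        string
        ServiceSubnet    string
        K8sVersion       string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: /var/run/crio/crio.sock
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: {{.K8sVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        // Values taken from the log above.
        p := kubeadmParams{
            AdvertiseAddress: "192.168.50.174",
            BindPort:         8443,
            NodeName:         "old-k8s-version-986088",
            PodSubnet:        "10.244.0.0/16",
            ServiceSubnet:    "10.96.0.0/12",
            K8sVersion:       "v1.20.0",
        }
        if err := t.Execute(os.Stdout, p); err != nil {
            panic(err)
        }
    }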
	
	I0328 01:03:01.391347 1131323 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0328 01:03:01.403684 1131323 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:03:01.403779 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:03:01.415168 1131323 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0328 01:03:01.434329 1131323 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:03:01.456280 1131323 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0328 01:03:01.476625 1131323 ssh_runner.go:195] Run: grep 192.168.50.174	control-plane.minikube.internal$ /etc/hosts
	I0328 01:03:01.480867 1131323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:01.493833 1131323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:01.642273 1131323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:03:01.661857 1131323 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088 for IP: 192.168.50.174
	I0328 01:03:01.661887 1131323 certs.go:194] generating shared ca certs ...
	I0328 01:03:01.661909 1131323 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:01.662115 1131323 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:03:01.662174 1131323 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:03:01.662188 1131323 certs.go:256] generating profile certs ...
	I0328 01:03:01.662324 1131323 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/client.key
	I0328 01:03:01.662399 1131323 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.key.b88fbc7e
	I0328 01:03:01.662447 1131323 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.key
	I0328 01:03:01.662600 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:03:01.662656 1131323 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:03:01.662672 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:03:01.662703 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:03:01.662738 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:03:01.662774 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:03:01.662826 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:01.663831 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:03:01.697171 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:03:01.742118 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:03:01.783263 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:03:01.831682 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0328 01:03:01.878051 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:03:01.915626 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:03:01.942247 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:03:01.969054 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:03:01.998651 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:03:02.024881 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:03:02.051284 1131323 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:03:02.070414 1131323 ssh_runner.go:195] Run: openssl version
	I0328 01:03:02.076635 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:03:02.089288 1131323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:03:02.094260 1131323 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:03:02.094322 1131323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:03:02.100846 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:03:02.114474 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:03:02.126467 1131323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:02.131240 1131323 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:02.131293 1131323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:02.137496 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 01:03:02.150863 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:03:02.163536 1131323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:03:02.168767 1131323 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:03:02.168850 1131323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:03:02.175218 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
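
The command pairs above follow the usual OpenSSL trust-store convention: hash each CA certificate's subject and expose it as /etc/ssl/certs/<hash>.0 so TLS clients can find it by hash lookup. Below is a minimal Go sketch of that hash-and-link step; it shells out to the same openssl invocation seen in the log, and the helper name linkByHash is made up for illustration.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkByHash computes the OpenSSL subject hash of a CA certificate and
    // points <certsDir>/<hash>.0 at it, mirroring the test/ln pattern in the log.
    func linkByHash(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        // ln -fs behavior: drop any stale link, then point it at the certificate.
        _ = os.Remove(link)
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }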
	I0328 01:03:02.188272 1131323 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:03:02.193348 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:03:02.199969 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:03:02.206424 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:03:02.213530 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:03:02.220136 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:03:02.226502 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
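
Each `-checkend 86400` run above asks OpenSSL whether the certificate will expire within the next 86400 seconds (24 hours), presumably so that soon-to-expire certs get regenerated on restart. The same test can be expressed directly against the parsed certificate, as in this sketch; expiresWithin is an illustrative name and the path is just one of those probed above.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin mirrors `openssl x509 -checkend`: it reports whether the
    // certificate's NotAfter falls inside the next duration d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", soon)
    }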
	I0328 01:03:02.232708 1131323 kubeadm.go:391] StartCluster: {Name:old-k8s-version-986088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:03:02.232831 1131323 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:03:02.232890 1131323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:02.280062 1131323 cri.go:89] found id: ""
	I0328 01:03:02.280160 1131323 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:03:02.291968 1131323 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:03:02.292003 1131323 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:03:02.292011 1131323 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:03:02.292072 1131323 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:03:02.304006 1131323 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:03:02.305105 1131323 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-986088" does not appear in /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:03:02.305785 1131323 kubeconfig.go:62] /home/jenkins/minikube-integration/18485-1069254/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-986088" cluster setting kubeconfig missing "old-k8s-version-986088" context setting]
	I0328 01:03:02.306728 1131323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:02.308610 1131323 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:03:02.320212 1131323 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.174
	I0328 01:03:02.320265 1131323 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:03:02.320283 1131323 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:03:02.320356 1131323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:02.366411 1131323 cri.go:89] found id: ""
	I0328 01:03:02.366500 1131323 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:03:02.388351 1131323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:03:02.402621 1131323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:03:02.402652 1131323 kubeadm.go:156] found existing configuration files:
	
	I0328 01:03:02.402718 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:03:02.415559 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:03:02.415633 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:03:02.426666 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:03:02.439497 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:03:02.439558 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:03:02.451040 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:03:02.461780 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:03:02.461876 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:03:02.473295 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:03:02.484762 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:03:02.484841 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:03:02.496304 1131323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:03:02.507634 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:02.641980 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:03.598106 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:03.840026 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:03.970336 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
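
Rather than a full `kubeadm init`, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config. A compact sketch of that sequence follows; it calls the kubeadm binary directly and omits the sudo/PATH wrapping shown in the log.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The same restart sequence seen in the log: individual `kubeadm init`
        // phases run against the generated config instead of a full init.
        const (
            kubeadm = "/var/lib/minikube/binaries/v1.20.0/kubeadm"
            cfg     = "/var/tmp/minikube/kubeadm.yaml"
        )
        phases := [][]string{
            {"init", "phase", "certs", "all", "--config", cfg},
            {"init", "phase", "kubeconfig", "all", "--config", cfg},
            {"init", "phase", "kubelet-start", "--config", cfg},
            {"init", "phase", "control-plane", "all", "--config", cfg},
            {"init", "phase", "etcd", "local", "--config", cfg},
        }
        for _, args := range phases {
            if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
                fmt.Printf("phase %v failed: %v\n%s\n", args, err, out)
                return
            }
        }
        fmt.Println("all phases completed")
    }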
	I0328 01:03:04.067774 1131323 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:03:04.067911 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:04.568260 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:05.068794 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
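
The repeated pgrep runs that follow are a simple poll: the bootstrapper keeps checking for a kube-apiserver process at roughly 500 ms intervals until one appears. A sketch of such a wait loop is below; waitForAPIServer and the 4-minute timeout are assumptions for illustration, not minikube's exact values.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls for a kube-apiserver process the same way the log
    // does: a pgrep every 500ms until it appears or the deadline passes.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 only when a matching process exists.
            if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServer(4 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }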
	I0328 01:03:01.460535 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:01.461008 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:01.461039 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:01.460962 1132240 retry.go:31] will retry after 1.839531745s: waiting for machine to come up
	I0328 01:03:03.302965 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:03.303445 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:03.303479 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:03.303387 1132240 retry.go:31] will retry after 2.461748315s: waiting for machine to come up
	I0328 01:03:04.413898 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:06.913608 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:05.568716 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:06.068362 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:06.568235 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:07.068696 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:07.567976 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:08.068032 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:08.568586 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:09.068046 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:09.568699 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:10.067967 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:05.767795 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:05.768329 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:05.768360 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:05.768279 1132240 retry.go:31] will retry after 2.321291255s: waiting for machine to come up
	I0328 01:03:08.092644 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:08.093094 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:08.093131 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:08.093046 1132240 retry.go:31] will retry after 4.151205276s: waiting for machine to come up
	I0328 01:03:09.413199 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:11.912234 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:13.671756 1130827 start.go:364] duration metric: took 54.966750689s to acquireMachinesLock for "no-preload-248059"
	I0328 01:03:13.671815 1130827 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:03:13.671823 1130827 fix.go:54] fixHost starting: 
	I0328 01:03:13.672255 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:03:13.672292 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:03:13.689811 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42017
	I0328 01:03:13.690364 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:03:13.690817 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:03:13.690843 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:03:13.691213 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:03:13.691395 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:13.691523 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:03:13.693093 1130827 fix.go:112] recreateIfNeeded on no-preload-248059: state=Stopped err=<nil>
	I0328 01:03:13.693123 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	W0328 01:03:13.693280 1130827 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:03:13.695158 1130827 out.go:177] * Restarting existing kvm2 VM for "no-preload-248059" ...
	I0328 01:03:10.568240 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:11.068028 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:11.568146 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:12.068467 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:12.568820 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:13.068031 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:13.568977 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:14.068050 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:14.567938 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:15.068711 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:12.248769 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.249440 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Found IP for machine: 192.168.39.224
	I0328 01:03:12.249467 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Reserving static IP address...
	I0328 01:03:12.249498 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has current primary IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.249832 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-283961", mac: "52:54:00:c4:df:6f", ip: "192.168.39.224"} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.249872 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | skip adding static IP to network mk-default-k8s-diff-port-283961 - found existing host DHCP lease matching {name: "default-k8s-diff-port-283961", mac: "52:54:00:c4:df:6f", ip: "192.168.39.224"}
	I0328 01:03:12.249888 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Reserved static IP address: 192.168.39.224
	I0328 01:03:12.249908 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for SSH to be available...
	I0328 01:03:12.249921 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Getting to WaitForSSH function...
	I0328 01:03:12.252053 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.252487 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.252521 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.252646 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Using SSH client type: external
	I0328 01:03:12.252677 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa (-rw-------)
	I0328 01:03:12.252709 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.224 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:03:12.252731 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | About to run SSH command:
	I0328 01:03:12.252750 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | exit 0
	I0328 01:03:12.378419 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | SSH cmd err, output: <nil>: 
	I0328 01:03:12.378866 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetConfigRaw
	I0328 01:03:12.379659 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:12.382631 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.382997 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.383023 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.383276 1131600 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/config.json ...
	I0328 01:03:12.383534 1131600 machine.go:94] provisionDockerMachine start ...
	I0328 01:03:12.383567 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:12.383805 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.386472 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.386839 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.386870 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.387035 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.387240 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.387399 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.387577 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.387729 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:12.387931 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:12.387943 1131600 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:03:12.499608 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:03:12.499644 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetMachineName
	I0328 01:03:12.499930 1131600 buildroot.go:166] provisioning hostname "default-k8s-diff-port-283961"
	I0328 01:03:12.499962 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetMachineName
	I0328 01:03:12.500154 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.502737 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.503089 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.503120 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.503295 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.503516 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.503725 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.503892 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.504093 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:12.504271 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:12.504285 1131600 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-283961 && echo "default-k8s-diff-port-283961" | sudo tee /etc/hostname
	I0328 01:03:12.625590 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-283961
	
	I0328 01:03:12.625624 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.628570 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.628883 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.628968 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.629143 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.629397 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.629627 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.629825 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.630008 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:12.630191 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:12.630210 1131600 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-283961' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-283961/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-283961' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:03:12.744240 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
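
The SSH script above is the standard idempotent hostname fix-up: if /etc/hosts does not already map the machine name, either rewrite the existing 127.0.1.1 line or append one. The equivalent logic, done locally in Go rather than over SSH (ensureHostsEntry is an illustrative helper, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    // ensureHostsEntry mirrors the SSH script in the log: if the hostname is not
    // already present, rewrite the 127.0.1.1 line or append a new mapping.
    func ensureHostsEntry(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
            return nil // already mapped
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        var out string
        if loopback.Match(data) {
            out = loopback.ReplaceAllString(string(data), "127.0.1.1 "+hostname)
        } else {
            out = strings.TrimRight(string(data), "\n") + "\n127.0.1.1 " + hostname + "\n"
        }
        return os.WriteFile(path, []byte(out), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "default-k8s-diff-port-283961"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }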
	I0328 01:03:12.744280 1131600 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:03:12.744327 1131600 buildroot.go:174] setting up certificates
	I0328 01:03:12.744342 1131600 provision.go:84] configureAuth start
	I0328 01:03:12.744361 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetMachineName
	I0328 01:03:12.744722 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:12.747139 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.747448 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.747478 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.747582 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.749705 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.749964 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.749995 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.750125 1131600 provision.go:143] copyHostCerts
	I0328 01:03:12.750203 1131600 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:03:12.750217 1131600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:03:12.750323 1131600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:03:12.750435 1131600 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:03:12.750446 1131600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:03:12.750479 1131600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:03:12.750557 1131600 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:03:12.750567 1131600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:03:12.750599 1131600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:03:12.750670 1131600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-283961 san=[127.0.0.1 192.168.39.224 default-k8s-diff-port-283961 localhost minikube]
	I0328 01:03:12.963182 1131600 provision.go:177] copyRemoteCerts
	I0328 01:03:12.963265 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:03:12.963313 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.965946 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.966177 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.966207 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.966347 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.966573 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.966773 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.966934 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.057477 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:03:13.083706 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0328 01:03:13.109167 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:03:13.136835 1131600 provision.go:87] duration metric: took 392.475069ms to configureAuth
	I0328 01:03:13.136867 1131600 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:03:13.137048 1131600 config.go:182] Loaded profile config "default-k8s-diff-port-283961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:03:13.137131 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.139508 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.139761 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.139792 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.139959 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.140170 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.140343 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.140502 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.140685 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:13.140873 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:13.140897 1131600 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:03:13.422372 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:03:13.422405 1131600 machine.go:97] duration metric: took 1.038857021s to provisionDockerMachine
	I0328 01:03:13.422418 1131600 start.go:293] postStartSetup for "default-k8s-diff-port-283961" (driver="kvm2")
	I0328 01:03:13.422428 1131600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:03:13.422456 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.422788 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:03:13.422819 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.425539 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.425865 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.425894 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.426023 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.426225 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.426407 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.426577 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.511874 1131600 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:03:13.516643 1131600 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:03:13.516673 1131600 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:03:13.516749 1131600 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:03:13.516846 1131600 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:03:13.516969 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:03:13.529004 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:13.557244 1131600 start.go:296] duration metric: took 134.810243ms for postStartSetup
	I0328 01:03:13.557289 1131600 fix.go:56] duration metric: took 20.165726422s for fixHost
	I0328 01:03:13.557313 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.560216 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.560585 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.560623 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.560803 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.561050 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.561188 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.561303 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.561552 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:13.561742 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:13.561757 1131600 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 01:03:13.671545 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587793.617322674
	
	I0328 01:03:13.671570 1131600 fix.go:216] guest clock: 1711587793.617322674
	I0328 01:03:13.671578 1131600 fix.go:229] Guest: 2024-03-28 01:03:13.617322674 +0000 UTC Remote: 2024-03-28 01:03:13.55729386 +0000 UTC m=+187.934897846 (delta=60.028814ms)
	I0328 01:03:13.671632 1131600 fix.go:200] guest clock delta is within tolerance: 60.028814ms
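
The two fix.go lines above compare the guest VM clock (read over SSH with `date`) against the host clock and accept the ~60 ms delta as within tolerance, so no time resync is needed. A small sketch of that comparison follows; the 2-second bound is an assumed value for illustration, not the tolerance minikube actually uses.

    package main

    import (
        "fmt"
        "time"
    )

    // withinTolerance reports whether the guest/host clock drift stays inside
    // the given bound; only larger drifts would require a resync.
    func withinTolerance(guest, host time.Time, max time.Duration) bool {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta <= max
    }

    func main() {
        guest := time.Unix(1711587793, 617322674) // value reported by the guest in the log
        host := time.Now()
        fmt.Println("within tolerance:", withinTolerance(guest, host, 2*time.Second))
    }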
	I0328 01:03:13.671642 1131600 start.go:83] releasing machines lock for "default-k8s-diff-port-283961", held for 20.280118311s
	I0328 01:03:13.671673 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.671976 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:13.674978 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.675384 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.675436 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.675562 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.676167 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.676337 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.676436 1131600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:03:13.676501 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.676557 1131600 ssh_runner.go:195] Run: cat /version.json
	I0328 01:03:13.676578 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.679418 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679452 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679758 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.679785 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679813 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.679832 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679986 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.680089 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.680190 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.680255 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.680345 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.680410 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.680517 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.680608 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.759826 1131600 ssh_runner.go:195] Run: systemctl --version
	I0328 01:03:13.796647 1131600 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:03:13.947036 1131600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:03:13.954165 1131600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:03:13.954265 1131600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:03:13.973503 1131600 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:03:13.973538 1131600 start.go:494] detecting cgroup driver to use...
	I0328 01:03:13.973629 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:03:13.997675 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:03:14.015349 1131600 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:03:14.015421 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:03:14.031099 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:03:14.046446 1131600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:03:14.186993 1131600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:03:14.351164 1131600 docker.go:233] disabling docker service ...
	I0328 01:03:14.351232 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:03:14.370629 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:03:14.387837 1131600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:03:14.544060 1131600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:03:14.707699 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:03:14.725658 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:03:14.746063 1131600 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 01:03:14.746141 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.759244 1131600 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:03:14.759317 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.773015 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.786810 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.807101 1131600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:03:14.821013 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.834181 1131600 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.861163 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.874274 1131600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:03:14.885890 1131600 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:03:14.885968 1131600 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:03:14.903142 1131600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:03:14.916364 1131600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:15.073343 1131600 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 01:03:15.218406 1131600 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:03:15.218500 1131600 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:03:15.226299 1131600 start.go:562] Will wait 60s for crictl version
	I0328 01:03:15.226373 1131600 ssh_runner.go:195] Run: which crictl
	I0328 01:03:15.232051 1131600 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:03:15.278793 1131600 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:03:15.278903 1131600 ssh_runner.go:195] Run: crio --version
	I0328 01:03:15.313408 1131600 ssh_runner.go:195] Run: crio --version
	I0328 01:03:15.351613 1131600 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0328 01:03:15.353013 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:15.355924 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:15.356306 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:15.356341 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:15.356555 1131600 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0328 01:03:15.361194 1131600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:15.380926 1131600 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:03:15.381043 1131600 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 01:03:15.381099 1131600 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:15.423322 1131600 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0328 01:03:15.423409 1131600 ssh_runner.go:195] Run: which lz4
	I0328 01:03:15.428123 1131600 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0328 01:03:15.433023 1131600 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 01:03:15.433065 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0328 01:03:13.696314 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Start
	I0328 01:03:13.696506 1130827 main.go:141] libmachine: (no-preload-248059) Ensuring networks are active...
	I0328 01:03:13.697344 1130827 main.go:141] libmachine: (no-preload-248059) Ensuring network default is active
	I0328 01:03:13.697668 1130827 main.go:141] libmachine: (no-preload-248059) Ensuring network mk-no-preload-248059 is active
	I0328 01:03:13.698009 1130827 main.go:141] libmachine: (no-preload-248059) Getting domain xml...
	I0328 01:03:13.698805 1130827 main.go:141] libmachine: (no-preload-248059) Creating domain...
	I0328 01:03:14.955922 1130827 main.go:141] libmachine: (no-preload-248059) Waiting to get IP...
	I0328 01:03:14.957088 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:14.957534 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:14.957660 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:14.957533 1132389 retry.go:31] will retry after 222.894093ms: waiting for machine to come up
	I0328 01:03:15.182078 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:15.182541 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:15.182580 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:15.182528 1132389 retry.go:31] will retry after 263.74163ms: waiting for machine to come up
	I0328 01:03:15.448081 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:15.448653 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:15.448684 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:15.448586 1132389 retry.go:31] will retry after 444.066222ms: waiting for machine to come up
	I0328 01:03:15.894141 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:15.894695 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:15.894732 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:15.894650 1132389 retry.go:31] will retry after 469.421771ms: waiting for machine to come up
	I0328 01:03:14.413443 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:16.418789 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:15.568507 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:16.068210 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:16.568761 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:17.067929 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:17.568403 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:18.068454 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:18.568086 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:19.068049 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:19.569020 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:20.068068 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:17.139682 1131600 crio.go:462] duration metric: took 1.71160157s to copy over tarball
	I0328 01:03:17.139764 1131600 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 01:03:19.581198 1131600 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.441406061s)
	I0328 01:03:19.581229 1131600 crio.go:469] duration metric: took 2.441510253s to extract the tarball
	I0328 01:03:19.581241 1131600 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0328 01:03:19.620964 1131600 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:19.666765 1131600 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 01:03:19.666791 1131600 cache_images.go:84] Images are preloaded, skipping loading
	I0328 01:03:19.666802 1131600 kubeadm.go:928] updating node { 192.168.39.224 8444 v1.29.3 crio true true} ...
	I0328 01:03:19.666921 1131600 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-283961 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.224
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:03:19.666987 1131600 ssh_runner.go:195] Run: crio config
	I0328 01:03:19.716082 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:03:19.716106 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:19.716115 1131600 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:03:19.716139 1131600 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.224 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-283961 NodeName:default-k8s-diff-port-283961 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.224"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.224 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 01:03:19.716323 1131600 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.224
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-283961"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.224
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.224"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 01:03:19.716399 1131600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 01:03:19.727826 1131600 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:03:19.727913 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:03:19.738525 1131600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0328 01:03:19.756732 1131600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:03:19.776665 1131600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0328 01:03:19.795756 1131600 ssh_runner.go:195] Run: grep 192.168.39.224	control-plane.minikube.internal$ /etc/hosts
	I0328 01:03:19.800097 1131600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.224	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:19.813019 1131600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:19.946740 1131600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:03:19.964216 1131600 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961 for IP: 192.168.39.224
	I0328 01:03:19.964244 1131600 certs.go:194] generating shared ca certs ...
	I0328 01:03:19.964262 1131600 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:19.964448 1131600 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:03:19.964524 1131600 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:03:19.964538 1131600 certs.go:256] generating profile certs ...
	I0328 01:03:19.964648 1131600 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/client.key
	I0328 01:03:19.964735 1131600 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/apiserver.key.22bfb146
	I0328 01:03:19.964810 1131600 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/proxy-client.key
	I0328 01:03:19.964956 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:03:19.965008 1131600 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:03:19.965021 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:03:19.965058 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:03:19.965091 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:03:19.965113 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:03:19.965154 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:19.966026 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:03:19.998578 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:03:20.042666 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:03:20.075405 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:03:20.117888 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0328 01:03:20.145160 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:03:20.178207 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:03:20.208610 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:03:20.235356 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:03:20.262434 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:03:20.291315 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:03:20.318034 1131600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:03:20.337627 1131600 ssh_runner.go:195] Run: openssl version
	I0328 01:03:20.344242 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:03:20.360732 1131600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:20.365858 1131600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:20.365926 1131600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:20.372120 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 01:03:20.384554 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:03:20.401731 1131600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:03:20.406945 1131600 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:03:20.407024 1131600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:03:20.414661 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:03:20.427573 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:03:20.439807 1131600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:03:20.445064 1131600 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:03:20.445138 1131600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:03:20.451754 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:03:20.464988 1131600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:03:20.470461 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:03:20.477200 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:03:20.484238 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:03:20.491125 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:03:20.497888 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:03:20.504680 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
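	Each openssl x509 -noout -checkend 86400 run above asks whether a control-plane certificate will expire within the next 86400 seconds (24 hours); a failing check would force the certificate to be regenerated before the restart. A rough standard-library equivalent in Go is sketched below; the certificate path is taken from the log, but the function itself is illustrative and not minikube's code.

	// Illustrative sketch of what `openssl x509 -noout -checkend 86400` verifies:
	// that a PEM certificate does not expire within the given window.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// True if "now + window" lands past the certificate's NotAfter.
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		if soon {
			fmt.Println("certificate will expire within 24h")
		} else {
			fmt.Println("certificate is valid for at least 24h")
		}
	}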
	I0328 01:03:20.511372 1131600 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:03:20.511477 1131600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:03:20.511542 1131600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:20.552247 1131600 cri.go:89] found id: ""
	I0328 01:03:20.552345 1131600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:03:20.564906 1131600 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:03:20.564937 1131600 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:03:20.564944 1131600 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:03:20.565002 1131600 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:03:20.576394 1131600 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:03:20.593699 1131600 kubeconfig.go:125] found "default-k8s-diff-port-283961" server: "https://192.168.39.224:8444"
	I0328 01:03:20.595978 1131600 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:03:20.609519 1131600 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.224
	I0328 01:03:20.609565 1131600 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:03:20.609583 1131600 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:03:20.609651 1131600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:20.651892 1131600 cri.go:89] found id: ""
	I0328 01:03:20.651967 1131600 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:03:20.671895 1131600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:03:16.365505 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:16.366404 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:16.366435 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:16.366360 1132389 retry.go:31] will retry after 488.383898ms: waiting for machine to come up
	I0328 01:03:16.856125 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:16.856727 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:16.856761 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:16.856626 1132389 retry.go:31] will retry after 617.77144ms: waiting for machine to come up
	I0328 01:03:17.476749 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:17.477351 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:17.477386 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:17.477282 1132389 retry.go:31] will retry after 835.951988ms: waiting for machine to come up
	I0328 01:03:18.315387 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:18.315894 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:18.315925 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:18.315848 1132389 retry.go:31] will retry after 1.405695765s: waiting for machine to come up
	I0328 01:03:19.723053 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:19.723559 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:19.723591 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:19.723473 1132389 retry.go:31] will retry after 1.555358462s: waiting for machine to come up
	I0328 01:03:18.913403 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:21.599662 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:20.568464 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:21.068983 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:21.568470 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:22.068772 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:22.568940 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:23.068907 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:23.568272 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.068055 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.568056 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:25.068006 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:20.685320 1131600 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:03:21.187521 1131600 kubeadm.go:156] found existing configuration files:
	
	I0328 01:03:21.187587 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0328 01:03:21.200463 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:03:21.200533 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:03:21.212763 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0328 01:03:21.224344 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:03:21.224419 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:03:21.235869 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0328 01:03:21.245970 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:03:21.246045 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:03:21.258589 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0328 01:03:21.270651 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:03:21.270724 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:03:21.283074 1131600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:03:21.295811 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:21.668224 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.046357 1131600 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.378083996s)
	I0328 01:03:23.046401 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.271959 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.353976 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.501611 1131600 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:03:23.501734 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.002619 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.502614 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.547383 1131600 api_server.go:72] duration metric: took 1.045771287s to wait for apiserver process to appear ...
	I0328 01:03:24.547419 1131600 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:03:24.547447 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:24.548081 1131600 api_server.go:269] stopped: https://192.168.39.224:8444/healthz: Get "https://192.168.39.224:8444/healthz": dial tcp 192.168.39.224:8444: connect: connection refused
	I0328 01:03:25.047885 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:21.279945 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:21.590947 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:21.590967 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:21.280358 1132389 retry.go:31] will retry after 1.905587467s: waiting for machine to come up
	I0328 01:03:23.187571 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:23.188214 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:23.188248 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:23.188159 1132389 retry.go:31] will retry after 2.68043246s: waiting for machine to come up
	I0328 01:03:25.871414 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:25.871997 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:25.872030 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:25.871956 1132389 retry.go:31] will retry after 2.689404788s: waiting for machine to come up
	I0328 01:03:23.913816 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:26.413616 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:27.352533 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:03:27.352570 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:03:27.352589 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:27.453408 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:27.453448 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:27.547781 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:27.552703 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:27.552738 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:28.048135 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:28.053291 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:28.053322 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:28.548374 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:28.553141 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:28.553178 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:29.047609 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:29.053027 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 200:
	ok
	I0328 01:03:29.060710 1131600 api_server.go:141] control plane version: v1.29.3
	I0328 01:03:29.060747 1131600 api_server.go:131] duration metric: took 4.513320481s to wait for apiserver health ...
	I0328 01:03:29.060757 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:03:29.060764 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:29.062763 1131600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
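(Editor's note) The loop above is minikube repeatedly probing the apiserver's /healthz endpoint until the post-start hooks (the "[-]" entries such as rbac/bootstrap-roles) finish and the endpoint returns 200. A minimal Go sketch of such a probe follows; the URL comes from the log, while the interval, timeout, and the decision to skip TLS verification are illustrative assumptions, not minikube's exact settings.

// Sketch only: poll an apiserver /healthz URL until it returns 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// The apiserver presents a self-signed cert here, so verification is
	// skipped for this probe only (assumption for the sketch).
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "ok": control plane is healthy
			}
			// A 500 with failed post-start hooks means startup is still in
			// progress; fall through and retry.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.224:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}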
	I0328 01:03:25.568927 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:26.068371 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:26.568107 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:27.068037 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:27.567985 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:28.068036 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:28.568843 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:29.068483 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:29.568942 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:30.068849 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:29.064492 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:03:29.089164 1131600 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
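(Editor's note) The 457-byte conflist that is scp'd to /etc/cni/net.d/1-k8s.conflist is not shown in the log. The Go sketch below writes a generic bridge+portmap conflist of the same shape; every field value here is an illustrative assumption, not minikube's actual template.

// Sketch only: write a bridge CNI configuration file like the one placed on the node.
package main

import "os"

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	// On the real node this lands in /etc/cni/net.d/1-k8s.conflist via scp.
	if err := os.WriteFile("1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}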
	I0328 01:03:29.115071 1131600 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:03:29.126819 1131600 system_pods.go:59] 8 kube-system pods found
	I0328 01:03:29.126871 1131600 system_pods.go:61] "coredns-76f75df574-79cdj" [48ffe344-a386-4904-a73e-56e3ce0a8bef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:03:29.126885 1131600 system_pods.go:61] "etcd-default-k8s-diff-port-283961" [1d8fc768-e39c-4c96-bd65-2ae76fc9c6ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 01:03:29.126898 1131600 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-283961" [7c5c9f85-f16f-4248-8d2d-73c1ed2b0128] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 01:03:29.126912 1131600 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-283961" [2e943e7b-5506-4797-9e77-4a33e06056fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 01:03:29.126931 1131600 system_pods.go:61] "kube-proxy-d776v" [c1c86f61-b074-4a51-89e6-17c7b1076748] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:03:29.126944 1131600 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-283961" [8a840579-4145-4b68-ab3f-b1ebd3d63e81] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 01:03:29.126956 1131600 system_pods.go:61] "metrics-server-57f55c9bc5-w4ww4" [6d60f9e6-8ac7-4fad-91dc-61520586666c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:03:29.126968 1131600 system_pods.go:61] "storage-provisioner" [2b5e2e68-7e7c-46ec-bcec-ff9b01cbb8d9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:03:29.126979 1131600 system_pods.go:74] duration metric: took 11.875076ms to wait for pod list to return data ...
	I0328 01:03:29.126992 1131600 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:03:29.130927 1131600 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:03:29.130971 1131600 node_conditions.go:123] node cpu capacity is 2
	I0328 01:03:29.130986 1131600 node_conditions.go:105] duration metric: took 3.984383ms to run NodePressure ...
	I0328 01:03:29.131011 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:29.421513 1131600 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 01:03:29.426043 1131600 kubeadm.go:733] kubelet initialised
	I0328 01:03:29.426104 1131600 kubeadm.go:734] duration metric: took 4.524275ms waiting for restarted kubelet to initialise ...
	I0328 01:03:29.426114 1131600 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:03:29.432378 1131600 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-79cdj" in "kube-system" namespace to be "Ready" ...
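(Editor's note) The pod_ready wait that starts here simply polls the pod's PodReady condition until it is True or the 4m0s budget runs out. A hedged client-go sketch follows; namespace, pod name, and the timeout are taken from the log, while the kubeconfig path and polling interval are assumptions.

// Sketch only: poll a kube-system pod until its PodReady condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumption
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-76f75df574-79cdj", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}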
	I0328 01:03:28.563249 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:28.563778 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:28.563808 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:28.563718 1132389 retry.go:31] will retry after 2.919225956s: waiting for machine to come up
	I0328 01:03:28.913653 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:30.914379 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:31.484584 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.485027 1130827 main.go:141] libmachine: (no-preload-248059) Found IP for machine: 192.168.61.107
	I0328 01:03:31.485048 1130827 main.go:141] libmachine: (no-preload-248059) Reserving static IP address...
	I0328 01:03:31.485065 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has current primary IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.485584 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "no-preload-248059", mac: "52:54:00:58:33:e2", ip: "192.168.61.107"} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.485617 1130827 main.go:141] libmachine: (no-preload-248059) Reserved static IP address: 192.168.61.107
	I0328 01:03:31.485638 1130827 main.go:141] libmachine: (no-preload-248059) DBG | skip adding static IP to network mk-no-preload-248059 - found existing host DHCP lease matching {name: "no-preload-248059", mac: "52:54:00:58:33:e2", ip: "192.168.61.107"}
	I0328 01:03:31.485651 1130827 main.go:141] libmachine: (no-preload-248059) Waiting for SSH to be available...
	I0328 01:03:31.485671 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Getting to WaitForSSH function...
	I0328 01:03:31.487909 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.488293 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.488322 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.488469 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Using SSH client type: external
	I0328 01:03:31.488506 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa (-rw-------)
	I0328 01:03:31.488531 1130827 main.go:141] libmachine: (no-preload-248059) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:03:31.488541 1130827 main.go:141] libmachine: (no-preload-248059) DBG | About to run SSH command:
	I0328 01:03:31.488555 1130827 main.go:141] libmachine: (no-preload-248059) DBG | exit 0
	I0328 01:03:31.618358 1130827 main.go:141] libmachine: (no-preload-248059) DBG | SSH cmd err, output: <nil>: 
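(Editor's note) The WaitForSSH step above keeps dialing the machine and running `exit 0` with the machine key until the command succeeds. A minimal Go sketch using golang.org/x/crypto/ssh follows; the address, user, and key path come from the log, and the retry policy is an illustrative assumption.

// Sketch only: probe a node for SSH availability by running `exit 0`.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func sshAvailable(addr, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}

func main() {
	for i := 0; i < 30; i++ {
		if err := sshAvailable("192.168.61.107:22",
			"/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa"); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}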
	I0328 01:03:31.618786 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetConfigRaw
	I0328 01:03:31.619494 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:31.622183 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.622555 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.622584 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.622889 1130827 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/config.json ...
	I0328 01:03:31.623120 1130827 machine.go:94] provisionDockerMachine start ...
	I0328 01:03:31.623147 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:31.623400 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:31.626078 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.626432 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.626458 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.626663 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:31.626864 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.627031 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.627179 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:31.627380 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:31.627595 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:31.627611 1130827 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:03:31.739662 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:03:31.739699 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:03:31.740049 1130827 buildroot.go:166] provisioning hostname "no-preload-248059"
	I0328 01:03:31.740086 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:03:31.740421 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:31.743410 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.743776 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.743811 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.744001 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:31.744212 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.744394 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.744515 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:31.744669 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:31.744846 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:31.744860 1130827 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-248059 && echo "no-preload-248059" | sudo tee /etc/hostname
	I0328 01:03:31.869330 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-248059
	
	I0328 01:03:31.869368 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:31.872451 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.872817 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.872868 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.873159 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:31.873405 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.873632 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.873803 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:31.873982 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:31.874220 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:31.874268 1130827 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-248059' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-248059/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-248059' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:03:31.997509 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:03:31.997543 1130827 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:03:31.997565 1130827 buildroot.go:174] setting up certificates
	I0328 01:03:31.997573 1130827 provision.go:84] configureAuth start
	I0328 01:03:31.997583 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:03:31.997870 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:32.000739 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.001127 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.001162 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.001306 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.003571 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.003958 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.003988 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.004162 1130827 provision.go:143] copyHostCerts
	I0328 01:03:32.004246 1130827 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:03:32.004261 1130827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:03:32.004329 1130827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:03:32.004442 1130827 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:03:32.004454 1130827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:03:32.004486 1130827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:03:32.004562 1130827 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:03:32.004572 1130827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:03:32.004602 1130827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:03:32.004667 1130827 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.no-preload-248059 san=[127.0.0.1 192.168.61.107 localhost minikube no-preload-248059]
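(Editor's note) The "generating server cert" step issues a certificate whose SANs match the list shown above (127.0.0.1, 192.168.61.107, localhost, minikube, no-preload-248059), signed by the machine CA. The Go sketch below shows how such a cert could be produced with crypto/x509; loading ca.pem/ca-key.pem is elided, and the key size and validity period are assumptions rather than minikube's values.

// Sketch only: issue a CA-signed server certificate with the SANs from the log.
package provision

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048) // key size is an assumption
	if err != nil {
		return err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-248059"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // validity is an assumption
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the provision log line above.
		DNSNames:    []string{"localhost", "minikube", "no-preload-248059"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.107")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return err
	}
	return os.WriteFile("server.pem",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
}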
	I0328 01:03:32.206585 1130827 provision.go:177] copyRemoteCerts
	I0328 01:03:32.206657 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:03:32.206691 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.210170 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.210636 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.210676 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.210979 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.211187 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.211364 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.211564 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:32.305858 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:03:32.337654 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0328 01:03:32.368942 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0328 01:03:32.401639 1130827 provision.go:87] duration metric: took 404.051415ms to configureAuth
	I0328 01:03:32.401669 1130827 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:03:32.401936 1130827 config.go:182] Loaded profile config "no-preload-248059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0328 01:03:32.402025 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.404890 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.405352 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.405387 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.405588 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.405858 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.406091 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.406303 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.406510 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:32.406731 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:32.406759 1130827 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:03:32.697738 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:03:32.697768 1130827 machine.go:97] duration metric: took 1.074632092s to provisionDockerMachine
	I0328 01:03:32.697781 1130827 start.go:293] postStartSetup for "no-preload-248059" (driver="kvm2")
	I0328 01:03:32.697795 1130827 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:03:32.697812 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.698263 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:03:32.698298 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.701020 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.701421 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.701450 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.701609 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.701837 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.702010 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.702188 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:32.790670 1130827 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:03:32.795098 1130827 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:03:32.795131 1130827 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:03:32.795222 1130827 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:03:32.795297 1130827 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:03:32.795402 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:03:32.806276 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:32.832753 1130827 start.go:296] duration metric: took 134.954685ms for postStartSetup
	I0328 01:03:32.832801 1130827 fix.go:56] duration metric: took 19.16097847s for fixHost
	I0328 01:03:32.832825 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.835830 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.836199 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.836237 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.836472 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.836707 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.836949 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.837104 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.837339 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:32.837551 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:32.837563 1130827 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 01:03:32.947440 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587812.922631180
	
	I0328 01:03:32.947477 1130827 fix.go:216] guest clock: 1711587812.922631180
	I0328 01:03:32.947486 1130827 fix.go:229] Guest: 2024-03-28 01:03:32.92263118 +0000 UTC Remote: 2024-03-28 01:03:32.832804811 +0000 UTC m=+356.715929719 (delta=89.826369ms)
	I0328 01:03:32.947507 1130827 fix.go:200] guest clock delta is within tolerance: 89.826369ms
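(Editor's note) The guest-clock check runs `date` on the guest over SSH, parses the seconds.nanoseconds reply, and compares the difference against the host clock. A small Go sketch of that comparison follows; the tolerance value is illustrative, since the actual threshold is not shown in the log.

// Sketch only: compare a guest `date +%s.%N` reading against the local clock.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func clockDeltaWithinTolerance(guestOutput string, tolerance time.Duration) (time.Duration, bool) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return 0, false
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	return delta, math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	// Output captured from the SSH command in the log above.
	delta, ok := clockDeltaWithinTolerance("1711587812.922631180", time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}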
	I0328 01:03:32.947512 1130827 start.go:83] releasing machines lock for "no-preload-248059", held for 19.275724068s
	I0328 01:03:32.947531 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.947805 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:32.950439 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.950814 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.950844 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.950992 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.951517 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.951707 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.951809 1130827 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:03:32.951852 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.951938 1130827 ssh_runner.go:195] Run: cat /version.json
	I0328 01:03:32.951964 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.954721 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955058 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955135 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.955165 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955314 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.955473 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.955512 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.955538 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955622 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.955698 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.955809 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:32.955859 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.956001 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.956134 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:33.079381 1130827 ssh_runner.go:195] Run: systemctl --version
	I0328 01:03:33.086184 1130827 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:03:33.241799 1130827 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:03:33.248779 1130827 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:03:33.248893 1130827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:03:33.267944 1130827 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
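(Editor's note) The `find ... -exec mv` above disables any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so they cannot conflict with the conflist minikube installs. The Go sketch below mirrors that rename loop, run locally for illustration rather than over SSH.

// Sketch only: rename bridge/podman CNI configs out of the way.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return disabled, err
		}
		disabled = append(disabled, src)
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println("disabled:", disabled)
}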
	I0328 01:03:33.267977 1130827 start.go:494] detecting cgroup driver to use...
	I0328 01:03:33.268082 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:03:33.286132 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:03:33.301676 1130827 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:03:33.301762 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:03:33.317202 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:03:33.333162 1130827 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:03:33.458738 1130827 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:03:33.608509 1130827 docker.go:233] disabling docker service ...
	I0328 01:03:33.608623 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:03:33.626616 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:03:33.641798 1130827 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:03:33.808865 1130827 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:03:33.962636 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:03:33.978138 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:03:34.002323 1130827 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 01:03:34.002404 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.014483 1130827 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:03:34.014589 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.028647 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.041601 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.054993 1130827 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:03:34.066671 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.079389 1130827 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.099660 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.112379 1130827 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:03:34.123050 1130827 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:03:34.123109 1130827 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:03:34.137132 1130827 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
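(Editor's note) The sequence above is a fallback: the bridge-nf-call-iptables sysctl fails with status 255 because /proc/sys/net/bridge does not exist yet, so the br_netfilter module is loaded and IPv4 forwarding is enabled before CRI-O is restarted. A Go sketch of that check-then-fallback logic follows; error handling is simplified and the command set mirrors the log.

// Sketch only: ensure bridge netfilter and IP forwarding are available.
package main

import (
	"fmt"
	"os/exec"
)

func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// The sysctl is missing until the module is loaded, so load it.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	// echo 1 > /proc/sys/net/ipv4/ip_forward, as in the log.
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("error:", err)
	}
}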
	I0328 01:03:34.147092 1130827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:34.282367 1130827 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 01:03:34.436510 1130827 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:03:34.436599 1130827 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:03:34.443019 1130827 start.go:562] Will wait 60s for crictl version
	I0328 01:03:34.443092 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:34.447740 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:03:34.488366 1130827 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:03:34.488469 1130827 ssh_runner.go:195] Run: crio --version
	I0328 01:03:34.520940 1130827 ssh_runner.go:195] Run: crio --version
	I0328 01:03:34.557953 1130827 out.go:177] * Preparing Kubernetes v1.30.0-beta.0 on CRI-O 1.29.1 ...
	I0328 01:03:30.568918 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:31.068097 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:31.568306 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:32.068345 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:32.568773 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:33.068072 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:33.568377 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:34.068141 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:34.568574 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:35.067986 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:31.439199 1131600 pod_ready.go:102] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:33.439575 1131600 pod_ready.go:102] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:34.559624 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:34.563089 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:34.563549 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:34.563583 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:34.563943 1130827 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0328 01:03:34.570153 1130827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:34.584566 1130827 kubeadm.go:877] updating cluster {Name:no-preload-248059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-248059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:03:34.584723 1130827 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0328 01:03:34.584786 1130827 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:34.620182 1130827 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-beta.0". assuming images are not preloaded.
	I0328 01:03:34.620215 1130827 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-beta.0 registry.k8s.io/kube-controller-manager:v1.30.0-beta.0 registry.k8s.io/kube-scheduler:v1.30.0-beta.0 registry.k8s.io/kube-proxy:v1.30.0-beta.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0328 01:03:34.620297 1130827 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:34.620312 1130827 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:34.620333 1130827 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:34.620301 1130827 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.620374 1130827 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:34.620401 1130827 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0328 01:03:34.620481 1130827 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:34.620319 1130827 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:34.621997 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.622009 1130827 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:34.622009 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:34.622052 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:34.621997 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:34.622115 1130827 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:34.621996 1130827 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:34.622438 1130827 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0328 01:03:34.832761 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.849045 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0328 01:03:34.868049 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:34.883941 1130827 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-beta.0" does not exist at hash "3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8" in container runtime
	I0328 01:03:34.883988 1130827 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.884047 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:34.884972 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:34.887551 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:34.899677 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:34.904772 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:35.045850 1130827 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0328 01:03:35.045906 1130827 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:35.045944 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.045959 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:35.064862 1130827 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" does not exist at hash "c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa" in container runtime
	I0328 01:03:35.064908 1130827 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:35.064959 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.066700 1130827 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" does not exist at hash "f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841" in container runtime
	I0328 01:03:35.066753 1130827 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:35.066820 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.097425 1130827 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" does not exist at hash "746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac" in container runtime
	I0328 01:03:35.097479 1130827 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:35.097546 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.097619 1130827 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0328 01:03:35.097667 1130827 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:35.097715 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.126977 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:35.126980 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.127020 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:35.127084 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:35.127090 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.127082 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:35.127161 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:35.264395 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:35.264499 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0 (exists)
	I0328 01:03:35.264534 1130827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:35.264543 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.264506 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0328 01:03:35.264590 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.264631 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:35.264652 1130827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0328 01:03:35.264516 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:35.264584 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0328 01:03:35.264717 1130827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:35.264728 1130827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:35.264768 1130827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0328 01:03:35.269734 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0328 01:03:35.277344 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0 (exists)
	I0328 01:03:35.277580 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0328 01:03:35.279792 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0 (exists)
	I0328 01:03:35.280423 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0 (exists)
	I0328 01:03:35.535980 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:33.413451 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:35.414017 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:37.913609 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:35.568345 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:36.068227 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:36.568528 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:37.068834 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:37.568407 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:38.068142 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:38.568732 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:39.068094 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:39.568799 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:40.068973 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:35.940767 1131600 pod_ready.go:102] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:37.440919 1131600 pod_ready.go:92] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:37.440949 1131600 pod_ready.go:81] duration metric: took 8.008542386s for pod "coredns-76f75df574-79cdj" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:37.440963 1131600 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:39.452822 1131600 pod_ready.go:102] pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:40.467937 1131600 pod_ready.go:92] pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.467973 1131600 pod_ready.go:81] duration metric: took 3.027001179s for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.467987 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.491342 1131600 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.491373 1131600 pod_ready.go:81] duration metric: took 23.375914ms for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.491387 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.511379 1131600 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.511414 1131600 pod_ready.go:81] duration metric: took 20.018124ms for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.511430 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d776v" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.526689 1131600 pod_ready.go:92] pod "kube-proxy-d776v" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.526724 1131600 pod_ready.go:81] duration metric: took 15.28424ms for pod "kube-proxy-d776v" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.526738 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:37.431690 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0: (2.167073369s)
	I0328 01:03:37.431729 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 from cache
	I0328 01:03:37.431755 1130827 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0328 01:03:37.431764 1130827 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.895749302s)
	I0328 01:03:37.431805 1130827 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0328 01:03:37.431811 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0328 01:03:37.431837 1130827 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:37.431870 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:39.913936 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:42.412656 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:40.568441 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:41.068790 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:41.568919 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:42.068166 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:42.568012 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:43.068027 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:43.568916 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:44.067940 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:44.568074 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:45.068786 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:42.535179 1131600 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:44.034128 1131600 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:44.034164 1131600 pod_ready.go:81] duration metric: took 3.507415677s for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:44.034175 1131600 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:41.523268 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.091420228s)
	I0328 01:03:41.523305 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0328 01:03:41.523330 1130827 ssh_runner.go:235] Completed: which crictl: (4.091431875s)
	I0328 01:03:41.523345 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:41.523412 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:41.523445 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:41.567312 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0328 01:03:41.567455 1130827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0328 01:03:44.336954 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0: (2.813479223s)
	I0328 01:03:44.336991 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 from cache
	I0328 01:03:44.336994 1130827 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.769509386s)
	I0328 01:03:44.337020 1130827 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0328 01:03:44.337035 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0328 01:03:44.337080 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0328 01:03:44.414767 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:46.415110 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:45.568662 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:46.068299 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:46.568793 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:47.068929 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:47.568250 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:48.068910 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:48.568138 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:49.068128 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:49.568153 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:50.068075 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:46.042489 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:48.541049 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:50.547355 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:46.297705 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.960592772s)
	I0328 01:03:46.297744 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0328 01:03:46.297776 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:46.297828 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:47.769522 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0: (1.471661236s)
	I0328 01:03:47.769569 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 from cache
	I0328 01:03:47.769602 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:47.769656 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:50.231843 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0: (2.462162757s)
	I0328 01:03:50.231876 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 from cache
	I0328 01:03:50.231902 1130827 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0328 01:03:50.231956 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0328 01:03:48.913184 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:51.412474 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:50.568929 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:51.068812 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:51.568899 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:52.068890 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:52.568751 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:53.068406 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:53.568466 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.068039 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.568745 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:55.068690 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:53.041197 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:55.041977 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:51.188382 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0328 01:03:51.188441 1130827 cache_images.go:123] Successfully loaded all cached images
	I0328 01:03:51.188448 1130827 cache_images.go:92] duration metric: took 16.568214969s to LoadCachedImages
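	The cache_images/cri/crio lines above trace a check, remove, reload cycle for each image: the runtime copy is inspected by ID with podman, a missing or mismatching copy is deleted with crictl, and the cached tarball under /var/lib/minikube/images is loaded with podman load. The following is a minimal sketch of that flow, assuming only the podman and crictl invocations visible in the log; the helper names are illustrative, not minikube's actual cache_images.go API.

// Illustrative sketch of the check -> remove -> reload cycle traced above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageMatches reports whether the container runtime already holds the image at the expected ID.
func imageMatches(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	return err == nil && strings.TrimSpace(string(out)) == wantID
}

// ensureImage removes a stale copy and loads the cached tarball when the IDs differ.
func ensureImage(image, wantID, tarball string) error {
	if imageMatches(image, wantID) {
		return nil // already present at the right hash, nothing to transfer
	}
	// Stale or missing: drop whatever the runtime has, then load from the cache directory.
	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
	if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
		return fmt.Errorf("loading %s from %s: %w", image, tarball, err)
	}
	return nil
}

func main() {
	// Example pairing taken from the log above (etcd image, expected ID, cached tarball).
	err := ensureImage("registry.k8s.io/etcd:3.5.12-0",
		"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
		"/var/lib/minikube/images/etcd_3.5.12-0")
	fmt.Println(err)
}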
	I0328 01:03:51.188464 1130827 kubeadm.go:928] updating node { 192.168.61.107 8443 v1.30.0-beta.0 crio true true} ...
	I0328 01:03:51.188628 1130827 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-248059 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-248059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:03:51.188710 1130827 ssh_runner.go:195] Run: crio config
	I0328 01:03:51.237071 1130827 cni.go:84] Creating CNI manager for ""
	I0328 01:03:51.237099 1130827 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:51.237109 1130827 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:03:51.237131 1130827 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.107 APIServerPort:8443 KubernetesVersion:v1.30.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-248059 NodeName:no-preload-248059 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 01:03:51.237263 1130827 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-248059"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 01:03:51.237330 1130827 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-beta.0
	I0328 01:03:51.248044 1130827 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:03:51.248113 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:03:51.257854 1130827 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0328 01:03:51.276307 1130827 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0328 01:03:51.294698 1130827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0328 01:03:51.313297 1130827 ssh_runner.go:195] Run: grep 192.168.61.107	control-plane.minikube.internal$ /etc/hosts
	I0328 01:03:51.317668 1130827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:51.330478 1130827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:51.457500 1130827 ssh_runner.go:195] Run: sudo systemctl start kubelet
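	The grep and bash one-liner above make the control-plane.minikube.internal mapping in /etc/hosts idempotent: any stale line for the hostname is filtered out and the current node IP is appended, then kubelet is restarted. A rough Go equivalent of that rewrite, assuming the path and hostname shown in the log; ensureHostsEntry is a hypothetical helper, not minikube code.

// Sketch of an idempotent /etc/hosts rewrite for a single hostname.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any previous mapping for the hostname so the entry is never duplicated.
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	// Trim trailing blank lines, then append the fresh mapping.
	for len(kept) > 0 && kept[len(kept)-1] == "" {
		kept = kept[:len(kept)-1]
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(ensureHostsEntry("/etc/hosts", "192.168.61.107", "control-plane.minikube.internal"))
}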
	I0328 01:03:51.484463 1130827 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059 for IP: 192.168.61.107
	I0328 01:03:51.484493 1130827 certs.go:194] generating shared ca certs ...
	I0328 01:03:51.484518 1130827 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:51.484718 1130827 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:03:51.484768 1130827 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:03:51.484781 1130827 certs.go:256] generating profile certs ...
	I0328 01:03:51.484910 1130827 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/client.key
	I0328 01:03:51.484989 1130827 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/apiserver.key.85d037b2
	I0328 01:03:51.485040 1130827 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/proxy-client.key
	I0328 01:03:51.485196 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:03:51.485243 1130827 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:03:51.485257 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:03:51.485292 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:03:51.485327 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:03:51.485357 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:03:51.485416 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:51.486614 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:03:51.537554 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:03:51.587256 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:03:51.620264 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:03:51.652100 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0328 01:03:51.694388 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:03:51.720913 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:03:51.747141 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0328 01:03:51.776370 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:03:51.803168 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:03:51.831138 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:03:51.857272 1130827 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:03:51.876070 1130827 ssh_runner.go:195] Run: openssl version
	I0328 01:03:51.882197 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:03:51.893560 1130827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:03:51.898293 1130827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:03:51.898361 1130827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:03:51.904549 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:03:51.918175 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:03:51.930387 1130827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:03:51.935610 1130827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:03:51.935691 1130827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:03:51.942127 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:03:51.954252 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:03:51.966727 1130827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:51.971742 1130827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:51.971810 1130827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:51.978082 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
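	The three openssl/ln sequences above install each CA into the guest's OpenSSL trust store: the certificate's subject hash is computed with `openssl x509 -hash` and the PEM file is linked as /etc/ssl/certs/<hash>.0, the name OpenSSL looks up at verification time. A small sketch of that wiring, assuming the same openssl invocation; installCA is a made-up name and the sketch needs the same root privileges the log acquires via sudo.

// Sketch: expose a CA under /etc/ssl/certs/<subject-hash>.0 so TLS clients can find it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func installCA(pemPath string) error {
	// openssl prints the subject hash used for trust-store lookups.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // replace any stale link, mirroring `ln -fs`
	return os.Symlink(pemPath, link)
}

func main() {
	fmt.Println(installCA("/usr/share/ca-certificates/minikubeCA.pem"))
}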
	I0328 01:03:51.992233 1130827 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:03:51.997556 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:03:52.004178 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:03:52.010666 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:03:52.017076 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:03:52.023334 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:03:52.029980 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
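	The `-checkend 86400` probes above ask whether each control-plane certificate will still be valid 24 hours from now, which is what decides whether certificates get regenerated during the restart. A rough Go analogue using crypto/x509; the file paths are copied from the log and expiresWithin is a hypothetical helper, not the openssl binary minikube actually shells out to.

// Sketch: report whether a PEM certificate expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when NotAfter falls before now+window, matching openssl's -checkend semantics.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		fmt.Println(p, soon, err)
	}
}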
	I0328 01:03:52.036395 1130827 kubeadm.go:391] StartCluster: {Name:no-preload-248059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-248059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:03:52.036483 1130827 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:03:52.036539 1130827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:52.080486 1130827 cri.go:89] found id: ""
	I0328 01:03:52.080580 1130827 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:03:52.094552 1130827 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:03:52.094583 1130827 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:03:52.094599 1130827 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:03:52.094650 1130827 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:03:52.107008 1130827 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:03:52.108200 1130827 kubeconfig.go:125] found "no-preload-248059" server: "https://192.168.61.107:8443"
	I0328 01:03:52.110536 1130827 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:03:52.122998 1130827 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.107
	I0328 01:03:52.123044 1130827 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:03:52.123090 1130827 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:03:52.123170 1130827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:52.165568 1130827 cri.go:89] found id: ""
	I0328 01:03:52.165666 1130827 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:03:52.183930 1130827 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:03:52.195188 1130827 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:03:52.195215 1130827 kubeadm.go:156] found existing configuration files:
	
	I0328 01:03:52.195271 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:03:52.205872 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:03:52.205932 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:03:52.216481 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:03:52.226719 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:03:52.226787 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:03:52.238852 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:03:52.250272 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:03:52.250341 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:03:52.262474 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:03:52.273981 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:03:52.274059 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:03:52.286028 1130827 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:03:52.297016 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:52.406981 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.521529 1130827 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.114505514s)
	I0328 01:03:53.521569 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.735728 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.808590 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
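	Rather than a full `kubeadm init`, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing data directory. A compact sketch of that sequencing, assuming the binary and config paths from the log; the wrapper itself is illustrative and omits the `env PATH=...` prefix minikube adds.

// Sketch: replay the kubeadm init phases used for a control-plane restart, in order.
package main

import (
	"fmt"
	"os/exec"
)

func runPhases() error {
	kubeadm := "/var/lib/minikube/binaries/v1.30.0-beta.0/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all"},         // regenerate any missing certificates
		{"init", "phase", "kubeconfig", "all"},    // admin/kubelet/controller/scheduler kubeconfigs
		{"init", "phase", "kubelet-start"},        // write kubelet config and (re)start it
		{"init", "phase", "control-plane", "all"}, // static pod manifests for the control plane
		{"init", "phase", "etcd", "local"},        // local etcd static pod manifest
	}
	for _, p := range phases {
		args := append(p, "--config", cfg)
		if out, err := exec.Command("sudo", append([]string{kubeadm}, args...)...).CombinedOutput(); err != nil {
			return fmt.Errorf("kubeadm %v: %v\n%s", p, err, out)
		}
	}
	return nil
}

func main() { fmt.Println(runPhases()) }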
	I0328 01:03:53.931165 1130827 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:03:53.931281 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.432358 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.931653 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.948811 1130827 api_server.go:72] duration metric: took 1.017647613s to wait for apiserver process to appear ...
	I0328 01:03:54.948843 1130827 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:03:54.948871 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:54.949490 1130827 api_server.go:269] stopped: https://192.168.61.107:8443/healthz: Get "https://192.168.61.107:8443/healthz": dial tcp 192.168.61.107:8443: connect: connection refused
	I0328 01:03:55.449050 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:53.413775 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:55.914095 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:57.515811 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:03:57.515852 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:03:57.515872 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:57.564527 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:03:57.564560 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:03:57.949780 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:57.955515 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0328 01:03:57.955565 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0328 01:03:58.449103 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:58.456345 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0328 01:03:58.456384 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0328 01:03:58.949575 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:58.954466 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0328 01:03:58.961213 1130827 api_server.go:141] control plane version: v1.30.0-beta.0
	I0328 01:03:58.961244 1130827 api_server.go:131] duration metric: took 4.012391589s to wait for apiserver health ...
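	The healthz wait above tolerates the 403 responses (the anonymous probe before RBAC bootstrap roles exist) and the 500s (post-start hooks still running) until /healthz finally returns 200 "ok". A hedged sketch of that polling loop; the endpoint and the roughly half-second cadence are taken from the log, the rest is an assumption rather than minikube's api_server.go.

// Sketch: poll the apiserver /healthz endpoint until it reports 200 OK or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver's serving cert is not in the host trust store here,
		// so the probe skips verification, as a liveness probe would.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is simply "ok"
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen in the log
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.61.107:8443/healthz", 4*time.Minute))
}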
	I0328 01:03:58.961256 1130827 cni.go:84] Creating CNI manager for ""
	I0328 01:03:58.961265 1130827 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:58.963147 1130827 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:03:55.568378 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:56.068253 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:56.568989 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:57.068709 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:57.569038 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:58.068236 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:58.568386 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:59.068971 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:59.568858 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:00.067964 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:57.043266 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:59.541626 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:58.964446 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:03:58.979425 1130827 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:03:59.042826 1130827 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:03:59.060388 1130827 system_pods.go:59] 8 kube-system pods found
	I0328 01:03:59.060429 1130827 system_pods.go:61] "coredns-7db6d8ff4d-86n4s" [71402ca8-dfa7-4caf-a422-6de9f24bf9dc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:03:59.060439 1130827 system_pods.go:61] "etcd-no-preload-248059" [954b6886-b84f-4d94-bbce-7e520142eb4b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 01:03:59.060451 1130827 system_pods.go:61] "kube-apiserver-no-preload-248059" [2d3caabe-27c2-44e7-8f52-76e03f262e2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 01:03:59.060462 1130827 system_pods.go:61] "kube-controller-manager-no-preload-248059" [30b9f4aa-c9a7-4d91-8e4d-35ad32f40425] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 01:03:59.060472 1130827 system_pods.go:61] "kube-proxy-b9qpb" [7ab4cca8-0ba2-4177-84cd-c6ac045930fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:03:59.060481 1130827 system_pods.go:61] "kube-scheduler-no-preload-248059" [4d9e45e3-d990-40d4-a4be-8384c39eb9b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 01:03:59.060493 1130827 system_pods.go:61] "metrics-server-569cc877fc-cvnrj" [063a47ac-9ceb-4521-9dde-aca02ec5e0d8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:03:59.060508 1130827 system_pods.go:61] "storage-provisioner" [0a0eb2d3-a426-4b76-8009-1a0a0e0312bb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:03:59.060518 1130827 system_pods.go:74] duration metric: took 17.666067ms to wait for pod list to return data ...
	I0328 01:03:59.060533 1130827 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:03:59.065018 1130827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:03:59.065054 1130827 node_conditions.go:123] node cpu capacity is 2
	I0328 01:03:59.065071 1130827 node_conditions.go:105] duration metric: took 4.531253ms to run NodePressure ...
	I0328 01:03:59.065097 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:59.454609 1130827 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 01:03:59.459707 1130827 kubeadm.go:733] kubelet initialised
	I0328 01:03:59.459730 1130827 kubeadm.go:734] duration metric: took 5.09757ms waiting for restarted kubelet to initialise ...
	I0328 01:03:59.459739 1130827 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:03:59.465352 1130827 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.471020 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.471054 1130827 pod_ready.go:81] duration metric: took 5.676291ms for pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.471067 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.471075 1130827 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.476393 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "etcd-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.476421 1130827 pod_ready.go:81] duration metric: took 5.333391ms for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.476430 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "etcd-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.476436 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.485889 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "kube-apiserver-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.485924 1130827 pod_ready.go:81] duration metric: took 9.481204ms for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.485937 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "kube-apiserver-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.485957 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.491064 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.491095 1130827 pod_ready.go:81] duration metric: took 5.125981ms for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.491107 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.491116 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-b9qpb" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.858724 1130827 pod_ready.go:92] pod "kube-proxy-b9qpb" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:59.858753 1130827 pod_ready.go:81] duration metric: took 367.628034ms for pod "kube-proxy-b9qpb" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.858764 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:58.413911 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:00.913297 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:02.913414 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:00.568622 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:01.067943 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:01.567964 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:02.068537 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:02.568772 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:03.068458 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:03.568943 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:04.068085 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:04.068176 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:04.112601 1131323 cri.go:89] found id: ""
	I0328 01:04:04.112631 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.112642 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:04.112650 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:04.112726 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:04.151837 1131323 cri.go:89] found id: ""
	I0328 01:04:04.151873 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.151885 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:04.151894 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:04.151965 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:04.193411 1131323 cri.go:89] found id: ""
	I0328 01:04:04.193451 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.193463 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:04.193473 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:04.193545 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:04.239623 1131323 cri.go:89] found id: ""
	I0328 01:04:04.239652 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.239662 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:04.239673 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:04.239732 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:04.279561 1131323 cri.go:89] found id: ""
	I0328 01:04:04.279600 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.279615 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:04.279627 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:04.279708 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:04.318680 1131323 cri.go:89] found id: ""
	I0328 01:04:04.318710 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.318722 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:04.318731 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:04.318797 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:04.356486 1131323 cri.go:89] found id: ""
	I0328 01:04:04.356514 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.356523 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:04.356530 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:04.356586 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:04.394281 1131323 cri.go:89] found id: ""
	I0328 01:04:04.394319 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.394334 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:04.394348 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:04.394364 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:04.458688 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:04.458729 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:04.501399 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:04.501440 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:04.556183 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:04.556225 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:04.571392 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:04.571427 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:04.709967 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:02.041555 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:04.541464 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:01.866183 1130827 pod_ready.go:102] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:03.868706 1130827 pod_ready.go:102] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:04.915667 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:07.412548 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:07.210550 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:07.224274 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:07.224345 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:07.262604 1131323 cri.go:89] found id: ""
	I0328 01:04:07.262640 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.262665 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:07.262674 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:07.262763 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:07.296868 1131323 cri.go:89] found id: ""
	I0328 01:04:07.296907 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.296918 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:07.296926 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:07.296992 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:07.333110 1131323 cri.go:89] found id: ""
	I0328 01:04:07.333149 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.333162 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:07.333171 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:07.333240 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:07.371138 1131323 cri.go:89] found id: ""
	I0328 01:04:07.371168 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.371186 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:07.371195 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:07.371259 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:07.412197 1131323 cri.go:89] found id: ""
	I0328 01:04:07.412230 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.412242 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:07.412251 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:07.412331 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:07.457021 1131323 cri.go:89] found id: ""
	I0328 01:04:07.457052 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.457070 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:07.457080 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:07.457153 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:07.517996 1131323 cri.go:89] found id: ""
	I0328 01:04:07.518026 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.518034 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:07.518040 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:07.518111 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:07.556829 1131323 cri.go:89] found id: ""
	I0328 01:04:07.556856 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.556865 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:07.556875 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:07.556890 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:07.572234 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:07.572270 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:07.648615 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:07.648641 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:07.648658 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:07.719617 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:07.719665 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:07.764053 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:07.764097 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:10.319480 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:06.542160 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:08.550725 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:06.366150 1130827 pod_ready.go:102] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:07.365200 1130827 pod_ready.go:92] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:04:07.365233 1130827 pod_ready.go:81] duration metric: took 7.506461201s for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:04:07.365256 1130827 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace to be "Ready" ...
	I0328 01:04:09.373694 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:09.413378 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:11.913400 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:10.334347 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:10.335893 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:10.375231 1131323 cri.go:89] found id: ""
	I0328 01:04:10.375263 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.375274 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:10.375281 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:10.375353 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:10.413652 1131323 cri.go:89] found id: ""
	I0328 01:04:10.413706 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.413726 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:10.413736 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:10.413805 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:10.449546 1131323 cri.go:89] found id: ""
	I0328 01:04:10.449588 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.449597 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:10.449604 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:10.449658 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:10.487518 1131323 cri.go:89] found id: ""
	I0328 01:04:10.487556 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.487570 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:10.487579 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:10.487663 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:10.525088 1131323 cri.go:89] found id: ""
	I0328 01:04:10.525124 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.525137 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:10.525146 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:10.525213 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:10.567177 1131323 cri.go:89] found id: ""
	I0328 01:04:10.567209 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.567221 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:10.567231 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:10.567302 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:10.609440 1131323 cri.go:89] found id: ""
	I0328 01:04:10.609474 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.609485 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:10.609492 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:10.609549 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:10.652466 1131323 cri.go:89] found id: ""
	I0328 01:04:10.652502 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.652516 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:10.652529 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:10.652546 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:10.737406 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:10.737451 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:10.786955 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:10.786991 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:10.843072 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:10.843114 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:10.857209 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:10.857244 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:10.950885 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:13.451542 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:13.465833 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:13.465924 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:13.503353 1131323 cri.go:89] found id: ""
	I0328 01:04:13.503386 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.503398 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:13.503407 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:13.503474 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:13.543175 1131323 cri.go:89] found id: ""
	I0328 01:04:13.543208 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.543220 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:13.543229 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:13.543287 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:13.580796 1131323 cri.go:89] found id: ""
	I0328 01:04:13.580829 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.580840 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:13.580848 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:13.580900 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:13.619483 1131323 cri.go:89] found id: ""
	I0328 01:04:13.619516 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.619529 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:13.619539 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:13.619596 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:13.654651 1131323 cri.go:89] found id: ""
	I0328 01:04:13.654683 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.654697 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:13.654705 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:13.654774 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:13.691763 1131323 cri.go:89] found id: ""
	I0328 01:04:13.691794 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.691805 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:13.691813 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:13.691881 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:13.730580 1131323 cri.go:89] found id: ""
	I0328 01:04:13.730614 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.730627 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:13.730635 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:13.730694 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:13.767802 1131323 cri.go:89] found id: ""
	I0328 01:04:13.767834 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.767848 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:13.767860 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:13.767876 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:13.815612 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:13.815653 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:13.870945 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:13.870991 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:13.891456 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:13.891506 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:14.022124 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:14.022163 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:14.022187 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:11.041196 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:13.044490 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:15.541942 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:11.873574 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:13.875251 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:14.412081 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:16.412837 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:16.604087 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:16.618872 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:16.618971 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:16.665628 1131323 cri.go:89] found id: ""
	I0328 01:04:16.665661 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.665675 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:16.665683 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:16.665780 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:16.703727 1131323 cri.go:89] found id: ""
	I0328 01:04:16.703758 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.703768 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:16.703775 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:16.703835 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:16.741425 1131323 cri.go:89] found id: ""
	I0328 01:04:16.741455 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.741464 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:16.741470 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:16.741524 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:16.782333 1131323 cri.go:89] found id: ""
	I0328 01:04:16.782373 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.782387 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:16.782398 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:16.782469 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:16.820321 1131323 cri.go:89] found id: ""
	I0328 01:04:16.820355 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.820364 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:16.820372 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:16.820429 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:16.861091 1131323 cri.go:89] found id: ""
	I0328 01:04:16.861130 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.861144 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:16.861154 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:16.861226 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:16.901347 1131323 cri.go:89] found id: ""
	I0328 01:04:16.901394 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.901408 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:16.901418 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:16.901491 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:16.944027 1131323 cri.go:89] found id: ""
	I0328 01:04:16.944067 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.944080 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:16.944093 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:16.944110 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:16.959104 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:16.959151 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:17.035432 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:17.035464 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:17.035480 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:17.116236 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:17.116276 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:17.159321 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:17.159370 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:19.711326 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:19.726016 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:19.726094 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:19.776639 1131323 cri.go:89] found id: ""
	I0328 01:04:19.776676 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.776690 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:19.776700 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:19.776782 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:19.817849 1131323 cri.go:89] found id: ""
	I0328 01:04:19.817887 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.817897 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:19.817904 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:19.817981 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:19.855055 1131323 cri.go:89] found id: ""
	I0328 01:04:19.855089 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.855102 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:19.855110 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:19.855177 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:19.895296 1131323 cri.go:89] found id: ""
	I0328 01:04:19.895332 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.895346 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:19.895354 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:19.895414 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:19.930936 1131323 cri.go:89] found id: ""
	I0328 01:04:19.930968 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.930980 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:19.930989 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:19.931067 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:19.968573 1131323 cri.go:89] found id: ""
	I0328 01:04:19.968610 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.968623 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:19.968632 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:19.968693 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:20.006130 1131323 cri.go:89] found id: ""
	I0328 01:04:20.006180 1131323 logs.go:276] 0 containers: []
	W0328 01:04:20.006195 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:20.006203 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:20.006304 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:20.043646 1131323 cri.go:89] found id: ""
	I0328 01:04:20.043678 1131323 logs.go:276] 0 containers: []
	W0328 01:04:20.043689 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:20.043701 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:20.043717 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:20.058728 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:20.058761 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:20.136392 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:20.136417 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:20.136431 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:20.214971 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:20.215015 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:20.255002 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:20.255047 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:18.041868 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:20.542175 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:16.372600 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:18.373203 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:20.374228 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:18.913596 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:20.913978 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:22.914777 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:22.810078 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:22.824083 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:22.824169 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:22.862037 1131323 cri.go:89] found id: ""
	I0328 01:04:22.862066 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.862074 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:22.862081 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:22.862141 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:22.901625 1131323 cri.go:89] found id: ""
	I0328 01:04:22.901658 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.901670 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:22.901679 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:22.901752 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:22.938858 1131323 cri.go:89] found id: ""
	I0328 01:04:22.938891 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.938903 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:22.938912 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:22.938983 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:22.978781 1131323 cri.go:89] found id: ""
	I0328 01:04:22.978818 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.978829 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:22.978837 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:22.978910 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:23.016844 1131323 cri.go:89] found id: ""
	I0328 01:04:23.016882 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.016895 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:23.016904 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:23.016975 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:23.058456 1131323 cri.go:89] found id: ""
	I0328 01:04:23.058508 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.058522 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:23.058531 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:23.058604 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:23.099368 1131323 cri.go:89] found id: ""
	I0328 01:04:23.099399 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.099408 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:23.099420 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:23.099492 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:23.135593 1131323 cri.go:89] found id: ""
	I0328 01:04:23.135634 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.135653 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:23.135665 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:23.135679 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:23.191215 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:23.191260 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:23.206849 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:23.206884 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:23.289566 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:23.289596 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:23.289618 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:23.365429 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:23.365480 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:23.042312 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.541788 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:22.872233 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.373908 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.413591 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:27.912983 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.914883 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:25.929336 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:25.929415 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:25.969452 1131323 cri.go:89] found id: ""
	I0328 01:04:25.969485 1131323 logs.go:276] 0 containers: []
	W0328 01:04:25.969497 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:25.969506 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:25.969573 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:26.008978 1131323 cri.go:89] found id: ""
	I0328 01:04:26.009006 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.009015 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:26.009022 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:26.009075 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:26.051110 1131323 cri.go:89] found id: ""
	I0328 01:04:26.051138 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.051146 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:26.051153 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:26.051213 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:26.088231 1131323 cri.go:89] found id: ""
	I0328 01:04:26.088262 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.088271 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:26.088277 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:26.088342 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:26.125741 1131323 cri.go:89] found id: ""
	I0328 01:04:26.125782 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.125794 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:26.125800 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:26.125867 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:26.163367 1131323 cri.go:89] found id: ""
	I0328 01:04:26.163406 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.163417 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:26.163426 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:26.163503 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:26.202302 1131323 cri.go:89] found id: ""
	I0328 01:04:26.202340 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.202355 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:26.202364 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:26.202422 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:26.240880 1131323 cri.go:89] found id: ""
	I0328 01:04:26.240911 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.240921 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:26.240931 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:26.240943 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:26.283151 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:26.283180 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:26.341313 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:26.341350 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:26.356762 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:26.356791 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:26.428033 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:26.428054 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:26.428066 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:29.006332 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:29.020634 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:29.020745 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:29.060812 1131323 cri.go:89] found id: ""
	I0328 01:04:29.060843 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.060852 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:29.060859 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:29.060916 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:29.100110 1131323 cri.go:89] found id: ""
	I0328 01:04:29.100139 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.100149 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:29.100155 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:29.100212 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:29.140345 1131323 cri.go:89] found id: ""
	I0328 01:04:29.140384 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.140396 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:29.140404 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:29.140479 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:29.182415 1131323 cri.go:89] found id: ""
	I0328 01:04:29.182449 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.182459 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:29.182465 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:29.182533 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:29.225177 1131323 cri.go:89] found id: ""
	I0328 01:04:29.225214 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.225225 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:29.225233 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:29.225310 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:29.265437 1131323 cri.go:89] found id: ""
	I0328 01:04:29.265471 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.265485 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:29.265493 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:29.265556 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:29.301578 1131323 cri.go:89] found id: ""
	I0328 01:04:29.301617 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.301630 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:29.301639 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:29.301719 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:29.340816 1131323 cri.go:89] found id: ""
	I0328 01:04:29.340847 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.340856 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:29.340867 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:29.340880 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:29.384658 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:29.384687 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:29.439243 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:29.439285 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:29.456179 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:29.456211 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:29.534878 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:29.534906 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:29.534927 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:28.041463 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:30.042506 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:27.872489 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:30.371109 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:29.913856 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:32.415699 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:32.115798 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:32.130464 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:32.130560 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:32.168846 1131323 cri.go:89] found id: ""
	I0328 01:04:32.168877 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.168887 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:32.168894 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:32.168952 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:32.208590 1131323 cri.go:89] found id: ""
	I0328 01:04:32.208622 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.208632 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:32.208638 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:32.208694 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:32.247323 1131323 cri.go:89] found id: ""
	I0328 01:04:32.247362 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.247375 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:32.247384 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:32.247507 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:32.285260 1131323 cri.go:89] found id: ""
	I0328 01:04:32.285293 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.285312 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:32.285319 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:32.285395 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:32.326678 1131323 cri.go:89] found id: ""
	I0328 01:04:32.326712 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.326725 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:32.326740 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:32.326823 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:32.363375 1131323 cri.go:89] found id: ""
	I0328 01:04:32.363403 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.363412 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:32.363419 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:32.363473 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:32.401410 1131323 cri.go:89] found id: ""
	I0328 01:04:32.401449 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.401462 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:32.401470 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:32.401558 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:32.438645 1131323 cri.go:89] found id: ""
	I0328 01:04:32.438680 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.438691 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:32.438703 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:32.438718 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:32.488743 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:32.488786 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:32.503908 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:32.503944 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:32.577307 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:32.577333 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:32.577350 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:32.657787 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:32.657832 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:35.201151 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:35.215313 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:35.215383 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:35.253467 1131323 cri.go:89] found id: ""
	I0328 01:04:35.253504 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.253515 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:35.253522 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:35.253593 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:35.290218 1131323 cri.go:89] found id: ""
	I0328 01:04:35.290280 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.290292 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:35.290300 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:35.290378 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:35.330714 1131323 cri.go:89] found id: ""
	I0328 01:04:35.330749 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.330757 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:35.330764 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:35.330831 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:32.542071 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:34.544163 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:32.372100 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:34.872293 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:34.913212 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:37.411734 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:35.371524 1131323 cri.go:89] found id: ""
	I0328 01:04:35.371553 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.371570 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:35.371577 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:35.371630 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:35.411610 1131323 cri.go:89] found id: ""
	I0328 01:04:35.411638 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.411646 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:35.411652 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:35.411711 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:35.456709 1131323 cri.go:89] found id: ""
	I0328 01:04:35.456745 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.456758 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:35.456766 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:35.456836 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:35.492688 1131323 cri.go:89] found id: ""
	I0328 01:04:35.492719 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.492729 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:35.492755 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:35.492811 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:35.531205 1131323 cri.go:89] found id: ""
	I0328 01:04:35.531234 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.531243 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:35.531254 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:35.531266 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:35.611803 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:35.611845 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:35.653513 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:35.653551 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:35.708030 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:35.708075 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:35.724542 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:35.724576 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:35.798624 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:38.299312 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:38.314128 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:38.314213 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:38.357728 1131323 cri.go:89] found id: ""
	I0328 01:04:38.357761 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.357779 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:38.357786 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:38.357848 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:38.394512 1131323 cri.go:89] found id: ""
	I0328 01:04:38.394541 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.394549 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:38.394558 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:38.394618 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:38.434353 1131323 cri.go:89] found id: ""
	I0328 01:04:38.434380 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.434391 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:38.434399 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:38.434466 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:38.477662 1131323 cri.go:89] found id: ""
	I0328 01:04:38.477693 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.477703 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:38.477710 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:38.477763 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:38.515014 1131323 cri.go:89] found id: ""
	I0328 01:04:38.515044 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.515053 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:38.515060 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:38.515117 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:38.558865 1131323 cri.go:89] found id: ""
	I0328 01:04:38.558899 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.558911 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:38.558920 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:38.558982 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:38.600261 1131323 cri.go:89] found id: ""
	I0328 01:04:38.600290 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.600299 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:38.600306 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:38.600366 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:38.637131 1131323 cri.go:89] found id: ""
	I0328 01:04:38.637167 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.637179 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:38.637194 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:38.637218 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:38.716032 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:38.716058 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:38.716079 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:38.804534 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:38.804578 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:38.851781 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:38.851820 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:38.910091 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:38.910125 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:37.041273 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:39.541843 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:37.372262 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:39.372555 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:39.912953 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:42.412667 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:41.425801 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:41.441072 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:41.441168 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:41.482934 1131323 cri.go:89] found id: ""
	I0328 01:04:41.482962 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.482974 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:41.482983 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:41.483063 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:41.521762 1131323 cri.go:89] found id: ""
	I0328 01:04:41.521796 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.521810 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:41.521819 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:41.521931 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:41.560814 1131323 cri.go:89] found id: ""
	I0328 01:04:41.560847 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.560857 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:41.560864 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:41.560928 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:41.601158 1131323 cri.go:89] found id: ""
	I0328 01:04:41.601189 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.601199 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:41.601206 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:41.601271 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:41.638760 1131323 cri.go:89] found id: ""
	I0328 01:04:41.638789 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.638799 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:41.638806 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:41.638861 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:41.675235 1131323 cri.go:89] found id: ""
	I0328 01:04:41.675268 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.675278 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:41.675285 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:41.675341 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:41.712918 1131323 cri.go:89] found id: ""
	I0328 01:04:41.712957 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.712972 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:41.712983 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:41.713078 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:41.750552 1131323 cri.go:89] found id: ""
	I0328 01:04:41.750582 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.750591 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:41.750601 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:41.750617 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:41.811163 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:41.811204 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:41.826502 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:41.826547 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:41.900727 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:41.900759 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:41.900777 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:41.981731 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:41.981783 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:44.525845 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:44.542301 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:44.542389 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:44.584907 1131323 cri.go:89] found id: ""
	I0328 01:04:44.584936 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.584945 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:44.584952 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:44.585007 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:44.630465 1131323 cri.go:89] found id: ""
	I0328 01:04:44.630499 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.630511 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:44.630520 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:44.630588 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:44.669095 1131323 cri.go:89] found id: ""
	I0328 01:04:44.669131 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.669143 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:44.669152 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:44.669235 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:44.708445 1131323 cri.go:89] found id: ""
	I0328 01:04:44.708484 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.708495 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:44.708502 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:44.708570 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:44.747706 1131323 cri.go:89] found id: ""
	I0328 01:04:44.747744 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.747755 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:44.747762 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:44.747822 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:44.787768 1131323 cri.go:89] found id: ""
	I0328 01:04:44.787807 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.787821 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:44.787830 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:44.787899 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:44.829018 1131323 cri.go:89] found id: ""
	I0328 01:04:44.829049 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.829059 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:44.829066 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:44.829123 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:44.874334 1131323 cri.go:89] found id: ""
	I0328 01:04:44.874374 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.874383 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:44.874393 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:44.874405 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:44.921577 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:44.921619 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:44.976660 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:44.976713 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:44.991365 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:44.991400 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:45.067595 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:45.067630 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:45.067651 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:42.042736 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:44.543288 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:41.372902 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:43.872925 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:45.873163 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:44.913827 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:47.412342 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:47.647634 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:47.663581 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:47.663687 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:47.702889 1131323 cri.go:89] found id: ""
	I0328 01:04:47.702940 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.702954 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:47.702966 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:47.703043 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:47.744995 1131323 cri.go:89] found id: ""
	I0328 01:04:47.745027 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.745037 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:47.745044 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:47.745103 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:47.785518 1131323 cri.go:89] found id: ""
	I0328 01:04:47.785550 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.785562 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:47.785572 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:47.785645 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:47.831739 1131323 cri.go:89] found id: ""
	I0328 01:04:47.831771 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.831786 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:47.831794 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:47.831867 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:47.871864 1131323 cri.go:89] found id: ""
	I0328 01:04:47.871906 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.871918 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:47.871929 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:47.872008 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:47.907899 1131323 cri.go:89] found id: ""
	I0328 01:04:47.907934 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.907946 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:47.907955 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:47.908022 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:47.946073 1131323 cri.go:89] found id: ""
	I0328 01:04:47.946107 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.946118 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:47.946127 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:47.946223 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:47.986122 1131323 cri.go:89] found id: ""
	I0328 01:04:47.986154 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.986168 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:47.986182 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:47.986198 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:48.057234 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:48.057271 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:48.109881 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:48.109926 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:48.125154 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:48.125189 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:48.208295 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:48.208327 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:48.208345 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:47.041447 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:49.542203 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:48.371275 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:50.372057 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:49.413451 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:51.414465 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:50.785126 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:50.800000 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:50.800078 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:50.839883 1131323 cri.go:89] found id: ""
	I0328 01:04:50.839911 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.839920 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:50.839927 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:50.839983 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:50.879627 1131323 cri.go:89] found id: ""
	I0328 01:04:50.879654 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.879661 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:50.879668 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:50.879734 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:50.918392 1131323 cri.go:89] found id: ""
	I0328 01:04:50.918434 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.918446 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:50.918454 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:50.918517 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:50.957198 1131323 cri.go:89] found id: ""
	I0328 01:04:50.957234 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.957248 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:50.957257 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:50.957328 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:50.997389 1131323 cri.go:89] found id: ""
	I0328 01:04:50.997424 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.997438 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:50.997446 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:50.997513 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:51.040259 1131323 cri.go:89] found id: ""
	I0328 01:04:51.040296 1131323 logs.go:276] 0 containers: []
	W0328 01:04:51.040309 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:51.040318 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:51.040389 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:51.081824 1131323 cri.go:89] found id: ""
	I0328 01:04:51.081858 1131323 logs.go:276] 0 containers: []
	W0328 01:04:51.081868 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:51.081875 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:51.081942 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:51.119742 1131323 cri.go:89] found id: ""
	I0328 01:04:51.119783 1131323 logs.go:276] 0 containers: []
	W0328 01:04:51.119796 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:51.119810 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:51.119836 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:51.173486 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:51.173529 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:51.188532 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:51.188568 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:51.269181 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:51.269207 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:51.269226 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:51.349882 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:51.349936 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:53.893562 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:53.910104 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:53.910186 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:53.951333 1131323 cri.go:89] found id: ""
	I0328 01:04:53.951375 1131323 logs.go:276] 0 containers: []
	W0328 01:04:53.951388 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:53.951397 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:53.951472 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:53.992438 1131323 cri.go:89] found id: ""
	I0328 01:04:53.992474 1131323 logs.go:276] 0 containers: []
	W0328 01:04:53.992486 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:53.992493 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:53.992561 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:54.032934 1131323 cri.go:89] found id: ""
	I0328 01:04:54.032969 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.032982 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:54.032992 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:54.033061 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:54.074670 1131323 cri.go:89] found id: ""
	I0328 01:04:54.074707 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.074777 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:54.074801 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:54.074875 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:54.111527 1131323 cri.go:89] found id: ""
	I0328 01:04:54.111555 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.111566 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:54.111573 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:54.111658 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:54.151401 1131323 cri.go:89] found id: ""
	I0328 01:04:54.151428 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.151437 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:54.151443 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:54.151494 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:54.197997 1131323 cri.go:89] found id: ""
	I0328 01:04:54.198036 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.198048 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:54.198058 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:54.198135 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:54.234016 1131323 cri.go:89] found id: ""
	I0328 01:04:54.234049 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.234058 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:54.234068 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:54.234081 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:54.286118 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:54.286161 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:54.300489 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:54.300541 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:54.376949 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:54.376972 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:54.376988 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:54.463857 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:54.463901 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:52.041517 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:54.042088 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:52.875923 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:55.371823 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:53.912140 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:55.912329 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:57.026395 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:57.041270 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:57.041358 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:57.082380 1131323 cri.go:89] found id: ""
	I0328 01:04:57.082416 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.082428 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:57.082436 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:57.082503 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:57.121835 1131323 cri.go:89] found id: ""
	I0328 01:04:57.121870 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.121885 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:57.121894 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:57.121969 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:57.163688 1131323 cri.go:89] found id: ""
	I0328 01:04:57.163725 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.163737 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:57.163745 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:57.163819 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:57.212628 1131323 cri.go:89] found id: ""
	I0328 01:04:57.212666 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.212693 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:57.212703 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:57.212788 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:57.249196 1131323 cri.go:89] found id: ""
	I0328 01:04:57.249231 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.249244 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:57.249253 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:57.249318 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:57.286996 1131323 cri.go:89] found id: ""
	I0328 01:04:57.287031 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.287040 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:57.287047 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:57.287101 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:57.324523 1131323 cri.go:89] found id: ""
	I0328 01:04:57.324551 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.324560 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:57.324566 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:57.324627 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:57.363946 1131323 cri.go:89] found id: ""
	I0328 01:04:57.363984 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.363998 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:57.364012 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:57.364034 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:57.418300 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:57.418337 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:57.433214 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:57.433242 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:57.508623 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:57.508651 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:57.508665 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:57.586336 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:57.586377 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:00.129903 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:00.146829 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:00.146920 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:00.197823 1131323 cri.go:89] found id: ""
	I0328 01:05:00.197856 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.197865 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:00.197872 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:00.197930 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:00.257523 1131323 cri.go:89] found id: ""
	I0328 01:05:00.257561 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.257575 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:00.257584 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:00.257657 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:00.314511 1131323 cri.go:89] found id: ""
	I0328 01:05:00.314539 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.314549 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:00.314558 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:00.314610 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:56.042295 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:58.541684 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:00.543232 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:57.372451 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:59.372577 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:58.412203 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:00.412880 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:02.913222 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:00.351043 1131323 cri.go:89] found id: ""
	I0328 01:05:00.351076 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.351090 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:00.351098 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:00.351167 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:00.391477 1131323 cri.go:89] found id: ""
	I0328 01:05:00.391507 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.391519 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:00.391525 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:00.391595 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:00.436196 1131323 cri.go:89] found id: ""
	I0328 01:05:00.436230 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.436242 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:00.436249 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:00.436316 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:00.473389 1131323 cri.go:89] found id: ""
	I0328 01:05:00.473428 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.473441 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:00.473450 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:00.473523 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:00.508829 1131323 cri.go:89] found id: ""
	I0328 01:05:00.508866 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.508879 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:00.508901 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:00.508931 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:00.553709 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:00.553741 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:00.612679 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:00.612732 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:00.630908 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:00.630948 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:00.706984 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:00.707016 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:00.707034 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:03.287887 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:03.304679 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:03.304779 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:03.343579 1131323 cri.go:89] found id: ""
	I0328 01:05:03.343608 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.343618 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:03.343625 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:03.343677 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:03.387158 1131323 cri.go:89] found id: ""
	I0328 01:05:03.387192 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.387206 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:03.387224 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:03.387308 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:03.426622 1131323 cri.go:89] found id: ""
	I0328 01:05:03.426653 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.426663 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:03.426670 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:03.426724 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:03.468743 1131323 cri.go:89] found id: ""
	I0328 01:05:03.468780 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.468793 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:03.468801 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:03.468870 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:03.508518 1131323 cri.go:89] found id: ""
	I0328 01:05:03.508554 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.508566 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:03.508575 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:03.508653 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:03.548295 1131323 cri.go:89] found id: ""
	I0328 01:05:03.548331 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.548343 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:03.548353 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:03.548444 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:03.591561 1131323 cri.go:89] found id: ""
	I0328 01:05:03.591594 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.591607 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:03.591615 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:03.591670 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:03.635055 1131323 cri.go:89] found id: ""
	I0328 01:05:03.635086 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.635097 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:03.635109 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:03.635127 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:03.715639 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:03.715683 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:03.755888 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:03.755931 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:03.810128 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:03.810170 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:03.825197 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:03.825227 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:03.908589 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:03.043330 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:05.541544 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:01.372692 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:03.373747 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:05.871945 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:05.413583 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:07.912379 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:06.409060 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:06.424034 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:06.424119 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:06.461827 1131323 cri.go:89] found id: ""
	I0328 01:05:06.461888 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.461902 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:06.461911 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:06.461985 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:06.505006 1131323 cri.go:89] found id: ""
	I0328 01:05:06.505061 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.505078 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:06.505085 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:06.505145 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:06.542000 1131323 cri.go:89] found id: ""
	I0328 01:05:06.542033 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.542044 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:06.542052 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:06.542121 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:06.583725 1131323 cri.go:89] found id: ""
	I0328 01:05:06.583778 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.583800 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:06.583810 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:06.583887 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:06.620457 1131323 cri.go:89] found id: ""
	I0328 01:05:06.620501 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.620516 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:06.620524 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:06.620595 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:06.664380 1131323 cri.go:89] found id: ""
	I0328 01:05:06.664412 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.664425 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:06.664432 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:06.664502 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:06.701799 1131323 cri.go:89] found id: ""
	I0328 01:05:06.701850 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.701862 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:06.701870 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:06.701935 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:06.739899 1131323 cri.go:89] found id: ""
	I0328 01:05:06.739936 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.739948 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:06.739958 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:06.739973 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:06.814373 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:06.814404 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:06.814421 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:06.894331 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:06.894371 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:06.952912 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:06.952979 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:07.011851 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:07.011900 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:09.528068 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:09.545082 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:09.545167 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:09.586944 1131323 cri.go:89] found id: ""
	I0328 01:05:09.586983 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.586996 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:09.587004 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:09.587077 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:09.624153 1131323 cri.go:89] found id: ""
	I0328 01:05:09.624184 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.624192 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:09.624198 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:09.624256 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:09.661125 1131323 cri.go:89] found id: ""
	I0328 01:05:09.661160 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.661172 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:09.661182 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:09.661256 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:09.699865 1131323 cri.go:89] found id: ""
	I0328 01:05:09.699903 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.699916 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:09.699925 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:09.699992 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:09.737925 1131323 cri.go:89] found id: ""
	I0328 01:05:09.737958 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.737967 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:09.737973 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:09.738029 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:09.776906 1131323 cri.go:89] found id: ""
	I0328 01:05:09.776941 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.776950 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:09.776957 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:09.777021 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:09.815767 1131323 cri.go:89] found id: ""
	I0328 01:05:09.815794 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.815803 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:09.815809 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:09.815876 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:09.855880 1131323 cri.go:89] found id: ""
	I0328 01:05:09.855915 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.855928 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:09.855941 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:09.855958 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:09.918339 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:09.918376 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:09.932775 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:09.932810 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:10.011566 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:10.011594 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:10.011610 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:10.096057 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:10.096100 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:08.041230 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:10.041991 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:07.873367 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:10.372311 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:09.913349 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:12.412259 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:12.641999 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:12.655761 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:12.655843 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:12.697335 1131323 cri.go:89] found id: ""
	I0328 01:05:12.697369 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.697381 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:12.697390 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:12.697453 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:12.736482 1131323 cri.go:89] found id: ""
	I0328 01:05:12.736520 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.736534 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:12.736544 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:12.736617 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:12.771992 1131323 cri.go:89] found id: ""
	I0328 01:05:12.772034 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.772046 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:12.772055 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:12.772125 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:12.810738 1131323 cri.go:89] found id: ""
	I0328 01:05:12.810770 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.810779 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:12.810786 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:12.810837 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:12.848172 1131323 cri.go:89] found id: ""
	I0328 01:05:12.848209 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.848222 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:12.848230 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:12.848310 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:12.884660 1131323 cri.go:89] found id: ""
	I0328 01:05:12.884698 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.884710 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:12.884719 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:12.884794 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:12.926180 1131323 cri.go:89] found id: ""
	I0328 01:05:12.926209 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.926218 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:12.926244 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:12.926303 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:12.966938 1131323 cri.go:89] found id: ""
	I0328 01:05:12.966969 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.966983 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:12.966996 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:12.967014 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:13.018501 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:13.018541 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:13.033140 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:13.033175 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:13.108806 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:13.108832 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:13.108858 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:13.189198 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:13.189241 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:12.541088 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:15.041830 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:12.372413 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:14.372804 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:14.414059 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:16.912343 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:15.737415 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:15.752534 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:15.752614 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:15.789941 1131323 cri.go:89] found id: ""
	I0328 01:05:15.789974 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.789986 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:15.789994 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:15.790107 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:15.827688 1131323 cri.go:89] found id: ""
	I0328 01:05:15.827731 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.827745 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:15.827766 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:15.827831 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:15.867005 1131323 cri.go:89] found id: ""
	I0328 01:05:15.867041 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.867054 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:15.867064 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:15.867149 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:15.909924 1131323 cri.go:89] found id: ""
	I0328 01:05:15.910035 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.910055 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:15.910066 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:15.910139 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:15.950571 1131323 cri.go:89] found id: ""
	I0328 01:05:15.950606 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.950619 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:15.950632 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:15.950707 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:15.992557 1131323 cri.go:89] found id: ""
	I0328 01:05:15.992594 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.992605 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:15.992615 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:15.992687 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:16.032417 1131323 cri.go:89] found id: ""
	I0328 01:05:16.032458 1131323 logs.go:276] 0 containers: []
	W0328 01:05:16.032473 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:16.032482 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:16.032559 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:16.071399 1131323 cri.go:89] found id: ""
	I0328 01:05:16.071433 1131323 logs.go:276] 0 containers: []
	W0328 01:05:16.071445 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:16.071459 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:16.071481 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:16.147078 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:16.147113 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:16.147131 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:16.223828 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:16.223870 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:16.269377 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:16.269409 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:16.318545 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:16.318584 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:18.836044 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:18.851138 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:18.851231 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:18.887223 1131323 cri.go:89] found id: ""
	I0328 01:05:18.887260 1131323 logs.go:276] 0 containers: []
	W0328 01:05:18.887273 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:18.887283 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:18.887354 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:18.928652 1131323 cri.go:89] found id: ""
	I0328 01:05:18.928682 1131323 logs.go:276] 0 containers: []
	W0328 01:05:18.928692 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:18.928698 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:18.928756 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:18.968519 1131323 cri.go:89] found id: ""
	I0328 01:05:18.968555 1131323 logs.go:276] 0 containers: []
	W0328 01:05:18.968567 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:18.968575 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:18.968646 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:19.010939 1131323 cri.go:89] found id: ""
	I0328 01:05:19.010977 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.010991 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:19.010999 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:19.011070 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:19.048723 1131323 cri.go:89] found id: ""
	I0328 01:05:19.048748 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.048758 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:19.048769 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:19.048820 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:19.091761 1131323 cri.go:89] found id: ""
	I0328 01:05:19.091794 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.091803 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:19.091810 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:19.091863 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:19.134017 1131323 cri.go:89] found id: ""
	I0328 01:05:19.134049 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.134059 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:19.134065 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:19.134119 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:19.176070 1131323 cri.go:89] found id: ""
	I0328 01:05:19.176106 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.176118 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:19.176131 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:19.176155 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:19.261546 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:19.261584 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:19.261605 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:19.340271 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:19.340314 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:19.383625 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:19.383676 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:19.441635 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:19.441679 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:17.541876 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:20.040841 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:16.872723 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:19.372916 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:19.414384 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:21.912881 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:21.958362 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:21.974427 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:21.974528 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:22.013099 1131323 cri.go:89] found id: ""
	I0328 01:05:22.013139 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.013152 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:22.013160 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:22.013229 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:22.055558 1131323 cri.go:89] found id: ""
	I0328 01:05:22.055594 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.055604 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:22.055611 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:22.055668 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:22.106836 1131323 cri.go:89] found id: ""
	I0328 01:05:22.106870 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.106879 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:22.106886 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:22.106961 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:22.145135 1131323 cri.go:89] found id: ""
	I0328 01:05:22.145175 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.145189 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:22.145197 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:22.145266 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:22.183879 1131323 cri.go:89] found id: ""
	I0328 01:05:22.183909 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.183919 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:22.183926 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:22.183981 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:22.223087 1131323 cri.go:89] found id: ""
	I0328 01:05:22.223115 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.223124 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:22.223131 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:22.223209 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:22.263232 1131323 cri.go:89] found id: ""
	I0328 01:05:22.263262 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.263272 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:22.263279 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:22.263331 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:22.302919 1131323 cri.go:89] found id: ""
	I0328 01:05:22.302954 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.302967 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:22.302980 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:22.302998 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:22.358550 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:22.358596 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:22.374688 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:22.374722 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:22.453584 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:22.453609 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:22.453624 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:22.540983 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:22.541048 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:25.091773 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:25.107412 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:25.107484 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:25.143917 1131323 cri.go:89] found id: ""
	I0328 01:05:25.143944 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.143953 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:25.143960 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:25.144013 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:25.183615 1131323 cri.go:89] found id: ""
	I0328 01:05:25.183650 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.183659 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:25.183666 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:25.183729 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:25.221125 1131323 cri.go:89] found id: ""
	I0328 01:05:25.221158 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.221167 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:25.221174 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:25.221229 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:25.262023 1131323 cri.go:89] found id: ""
	I0328 01:05:25.262056 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.262065 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:25.262072 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:25.262134 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:25.297919 1131323 cri.go:89] found id: ""
	I0328 01:05:25.297948 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.297957 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:25.297964 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:25.298035 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:22.041977 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:24.542416 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:21.872312 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:23.872885 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:23.914459 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:25.916730 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:25.336582 1131323 cri.go:89] found id: ""
	I0328 01:05:25.336610 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.336620 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:25.336627 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:25.336690 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:25.375554 1131323 cri.go:89] found id: ""
	I0328 01:05:25.375589 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.375600 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:25.375609 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:25.375683 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:25.415941 1131323 cri.go:89] found id: ""
	I0328 01:05:25.415973 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.415984 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:25.415996 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:25.416013 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:25.430168 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:25.430196 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:25.507782 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:25.507805 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:25.507862 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:25.588780 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:25.588824 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:25.634958 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:25.634997 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:28.190651 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:28.205714 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:28.205794 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:28.242015 1131323 cri.go:89] found id: ""
	I0328 01:05:28.242056 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.242067 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:28.242077 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:28.242169 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:28.289132 1131323 cri.go:89] found id: ""
	I0328 01:05:28.289169 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.289182 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:28.289189 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:28.289256 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:28.327001 1131323 cri.go:89] found id: ""
	I0328 01:05:28.327031 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.327040 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:28.327052 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:28.327105 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:28.365474 1131323 cri.go:89] found id: ""
	I0328 01:05:28.365507 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.365516 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:28.365523 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:28.365587 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:28.405494 1131323 cri.go:89] found id: ""
	I0328 01:05:28.405553 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.405567 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:28.405576 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:28.405652 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:28.448464 1131323 cri.go:89] found id: ""
	I0328 01:05:28.448502 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.448512 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:28.448521 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:28.448594 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:28.488143 1131323 cri.go:89] found id: ""
	I0328 01:05:28.488172 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.488182 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:28.488189 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:28.488258 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:28.545977 1131323 cri.go:89] found id: ""
	I0328 01:05:28.546012 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.546024 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:28.546036 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:28.546050 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:28.629955 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:28.630001 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:28.670504 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:28.670536 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:28.722021 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:28.722069 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:28.737274 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:28.737310 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:28.824025 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
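	At this point every cycle has ended the same way: no control-plane containers found and "The connection to the server localhost:8443 was refused", so the v1.20.0 API server has not come up during this start attempt. The same probes can be re-run by hand from the host; a minimal sketch, assuming a profile name PROFILE (placeholder — the real profile name is not shown in this excerpt) and using only commands that appear in the log:

	    # re-run the checks minikube performs, from the host, via minikube ssh
	    minikube -p PROFILE ssh "sudo crictl ps -a --quiet --name=kube-apiserver"
	    minikube -p PROFILE ssh "sudo journalctl -u kubelet -n 400"
	    minikube -p PROFILE ssh "sudo journalctl -u crio -n 400"
	    minikube -p PROFILE ssh "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"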
	I0328 01:05:27.041205 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:29.041342 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:26.372037 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:28.373545 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:30.872569 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:28.414921 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:30.912980 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:31.324497 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:31.339715 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:31.339811 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:31.379017 1131323 cri.go:89] found id: ""
	I0328 01:05:31.379050 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.379062 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:31.379072 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:31.379138 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:31.420024 1131323 cri.go:89] found id: ""
	I0328 01:05:31.420055 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.420065 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:31.420071 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:31.420136 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:31.458732 1131323 cri.go:89] found id: ""
	I0328 01:05:31.458764 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.458773 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:31.458779 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:31.458835 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:31.504249 1131323 cri.go:89] found id: ""
	I0328 01:05:31.504280 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.504292 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:31.504300 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:31.504366 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:31.545284 1131323 cri.go:89] found id: ""
	I0328 01:05:31.545316 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.545324 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:31.545331 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:31.545385 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:31.583402 1131323 cri.go:89] found id: ""
	I0328 01:05:31.583434 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.583444 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:31.583453 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:31.583587 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:31.624411 1131323 cri.go:89] found id: ""
	I0328 01:05:31.624449 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.624462 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:31.624471 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:31.624528 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:31.666103 1131323 cri.go:89] found id: ""
	I0328 01:05:31.666144 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.666158 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:31.666173 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:31.666192 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:31.717595 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:31.717636 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:31.731606 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:31.731637 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:31.803302 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:31.803325 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:31.803339 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:31.885552 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:31.885590 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:34.432446 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:34.448002 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:34.448085 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:34.493207 1131323 cri.go:89] found id: ""
	I0328 01:05:34.493246 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.493259 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:34.493268 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:34.493337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:34.541838 1131323 cri.go:89] found id: ""
	I0328 01:05:34.541871 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.541883 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:34.541891 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:34.541956 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:34.582319 1131323 cri.go:89] found id: ""
	I0328 01:05:34.582357 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.582371 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:34.582380 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:34.582458 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:34.618753 1131323 cri.go:89] found id: ""
	I0328 01:05:34.618788 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.618801 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:34.618810 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:34.618882 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:34.656994 1131323 cri.go:89] found id: ""
	I0328 01:05:34.657027 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.657037 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:34.657043 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:34.657114 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:34.695214 1131323 cri.go:89] found id: ""
	I0328 01:05:34.695252 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.695264 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:34.695271 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:34.695337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:34.733688 1131323 cri.go:89] found id: ""
	I0328 01:05:34.733718 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.733731 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:34.733739 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:34.733808 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:34.771697 1131323 cri.go:89] found id: ""
	I0328 01:05:34.771729 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.771744 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:34.771758 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:34.771776 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:34.828190 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:34.828236 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:34.842741 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:34.842776 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:34.918494 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:34.918525 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:34.918541 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:35.012689 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:35.012747 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
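
Every "describe nodes" attempt in this window fails the same way: the bundled kubectl cannot reach the apiserver on localhost:8443, which is consistent with the empty kube-apiserver container list above. A short sketch of how one might confirm from inside the node that nothing is serving on that port (standard tools only, assumed here rather than run by the test):

    # Is anything bound to the apiserver port at all?
    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
    # Does the apiserver answer its health endpoint? (-k: self-signed certs)
    curl -ksf https://localhost:8443/healthz || echo "apiserver not answering"
    # Matches the empty result logged above
    sudo crictl ps -a --name kube-apiserver
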
	I0328 01:05:31.042633 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:33.541295 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:35.541588 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:33.371991 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:35.872753 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:33.412886 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:35.914065 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
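
The pod_ready lines interleaved here come from other test processes running concurrently (log prefixes 1131600, 1130827, 1130949), each polling a metrics-server pod that never reports Ready. An equivalent one-off check with kubectl, assuming the usual k8s-app=metrics-server label and using a placeholder context name:

    # Placeholder context; substitute the profile being inspected.
    kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server
    kubectl --context <profile> -n kube-system wait --for=condition=Ready \
      pod/metrics-server-57f55c9bc5-w4ww4 --timeout=60s
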
	I0328 01:05:37.574759 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:37.590014 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:37.590128 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:37.626883 1131323 cri.go:89] found id: ""
	I0328 01:05:37.626914 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.626926 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:37.626935 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:37.627005 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:37.665171 1131323 cri.go:89] found id: ""
	I0328 01:05:37.665202 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.665215 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:37.665225 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:37.665294 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:37.702923 1131323 cri.go:89] found id: ""
	I0328 01:05:37.702963 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.702976 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:37.702984 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:37.703064 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:37.741148 1131323 cri.go:89] found id: ""
	I0328 01:05:37.741182 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.741191 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:37.741199 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:37.741269 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:37.782298 1131323 cri.go:89] found id: ""
	I0328 01:05:37.782331 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.782341 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:37.782348 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:37.782407 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:37.819056 1131323 cri.go:89] found id: ""
	I0328 01:05:37.819110 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.819124 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:37.819134 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:37.819215 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:37.862372 1131323 cri.go:89] found id: ""
	I0328 01:05:37.862414 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.862427 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:37.862436 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:37.862507 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:37.899639 1131323 cri.go:89] found id: ""
	I0328 01:05:37.899675 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.899689 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:37.899703 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:37.899721 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:37.978962 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:37.978990 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:37.979007 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:38.058972 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:38.059015 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:38.102975 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:38.103016 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:38.157994 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:38.158035 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:38.041091 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.041892 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:38.371787 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.373131 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:38.412214 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.415412 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:42.912341 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.673425 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:40.690969 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:40.691041 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:40.735552 1131323 cri.go:89] found id: ""
	I0328 01:05:40.735585 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.735594 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:40.735602 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:40.735669 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:40.816611 1131323 cri.go:89] found id: ""
	I0328 01:05:40.816648 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.816661 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:40.816669 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:40.816725 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:40.864093 1131323 cri.go:89] found id: ""
	I0328 01:05:40.864125 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.864138 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:40.864147 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:40.864218 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:40.908781 1131323 cri.go:89] found id: ""
	I0328 01:05:40.908817 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.908829 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:40.908846 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:40.908914 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:40.950330 1131323 cri.go:89] found id: ""
	I0328 01:05:40.950369 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.950382 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:40.950390 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:40.950481 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:40.989983 1131323 cri.go:89] found id: ""
	I0328 01:05:40.990041 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.990054 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:40.990063 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:40.990136 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:41.042428 1131323 cri.go:89] found id: ""
	I0328 01:05:41.042470 1131323 logs.go:276] 0 containers: []
	W0328 01:05:41.042481 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:41.042489 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:41.042560 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:41.089309 1131323 cri.go:89] found id: ""
	I0328 01:05:41.089342 1131323 logs.go:276] 0 containers: []
	W0328 01:05:41.089353 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:41.089363 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:41.089377 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:41.148502 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:41.148547 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:41.163889 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:41.163918 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:41.242825 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:41.242848 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:41.242861 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:41.322658 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:41.322702 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:43.865117 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:43.880642 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:43.880729 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:43.919519 1131323 cri.go:89] found id: ""
	I0328 01:05:43.919550 1131323 logs.go:276] 0 containers: []
	W0328 01:05:43.919559 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:43.919565 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:43.919622 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:43.957906 1131323 cri.go:89] found id: ""
	I0328 01:05:43.957936 1131323 logs.go:276] 0 containers: []
	W0328 01:05:43.957945 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:43.957951 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:43.958008 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:44.001448 1131323 cri.go:89] found id: ""
	I0328 01:05:44.001486 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.001497 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:44.001505 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:44.001573 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:44.039767 1131323 cri.go:89] found id: ""
	I0328 01:05:44.039801 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.039812 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:44.039818 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:44.039871 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:44.079441 1131323 cri.go:89] found id: ""
	I0328 01:05:44.079470 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.079480 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:44.079486 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:44.079541 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:44.116534 1131323 cri.go:89] found id: ""
	I0328 01:05:44.116584 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.116596 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:44.116604 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:44.116670 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:44.163335 1131323 cri.go:89] found id: ""
	I0328 01:05:44.163369 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.163381 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:44.163389 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:44.163457 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:44.201367 1131323 cri.go:89] found id: ""
	I0328 01:05:44.201403 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.201413 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:44.201424 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:44.201442 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:44.257485 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:44.257529 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:44.272489 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:44.272534 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:44.354442 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:44.354477 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:44.354498 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:44.436219 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:44.436262 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:42.044443 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:44.541648 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:42.872072 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:44.873552 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:44.913292 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:47.412489 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:46.982131 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:46.998022 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:46.998100 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:47.037167 1131323 cri.go:89] found id: ""
	I0328 01:05:47.037205 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.037217 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:47.037226 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:47.037295 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:47.076175 1131323 cri.go:89] found id: ""
	I0328 01:05:47.076213 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.076226 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:47.076235 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:47.076306 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:47.115193 1131323 cri.go:89] found id: ""
	I0328 01:05:47.115227 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.115237 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:47.115244 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:47.115297 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:47.154942 1131323 cri.go:89] found id: ""
	I0328 01:05:47.154976 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.154989 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:47.154998 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:47.155069 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:47.196571 1131323 cri.go:89] found id: ""
	I0328 01:05:47.196609 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.196622 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:47.196631 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:47.196707 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:47.237572 1131323 cri.go:89] found id: ""
	I0328 01:05:47.237616 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.237625 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:47.237633 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:47.237691 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:47.275208 1131323 cri.go:89] found id: ""
	I0328 01:05:47.275254 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.275265 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:47.275272 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:47.275329 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:47.313515 1131323 cri.go:89] found id: ""
	I0328 01:05:47.313555 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.313568 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:47.313582 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:47.313598 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:47.368993 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:47.369033 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:47.383063 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:47.383097 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:47.460239 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:47.460278 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:47.460298 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:47.538552 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:47.538594 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:50.084960 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:50.101764 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:50.101859 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:50.141457 1131323 cri.go:89] found id: ""
	I0328 01:05:50.141488 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.141497 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:50.141504 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:50.141557 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:50.178184 1131323 cri.go:89] found id: ""
	I0328 01:05:50.178220 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.178254 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:50.178263 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:50.178358 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:50.217908 1131323 cri.go:89] found id: ""
	I0328 01:05:50.217946 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.217959 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:50.217966 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:50.218027 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:50.256029 1131323 cri.go:89] found id: ""
	I0328 01:05:50.256058 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.256067 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:50.256074 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:50.256130 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:50.295054 1131323 cri.go:89] found id: ""
	I0328 01:05:50.295087 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.295100 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:50.295106 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:50.295165 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:47.042338 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:49.542501 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:47.372867 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:49.872948 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:49.913873 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:52.412600 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:50.334695 1131323 cri.go:89] found id: ""
	I0328 01:05:50.336588 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.336605 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:50.336614 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:50.336697 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:50.375968 1131323 cri.go:89] found id: ""
	I0328 01:05:50.376003 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.376013 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:50.376021 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:50.376091 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:50.417146 1131323 cri.go:89] found id: ""
	I0328 01:05:50.417175 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.417184 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:50.417194 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:50.417207 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:50.474090 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:50.474131 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:50.489006 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:50.489040 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:50.566220 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:50.566268 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:50.566286 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:50.645593 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:50.645653 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:53.190872 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:53.205223 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:53.205320 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:53.242396 1131323 cri.go:89] found id: ""
	I0328 01:05:53.242433 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.242445 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:53.242455 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:53.242524 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:53.281237 1131323 cri.go:89] found id: ""
	I0328 01:05:53.281275 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.281288 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:53.281297 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:53.281357 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:53.321239 1131323 cri.go:89] found id: ""
	I0328 01:05:53.321268 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.321287 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:53.321296 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:53.321358 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:53.359240 1131323 cri.go:89] found id: ""
	I0328 01:05:53.359269 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.359278 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:53.359284 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:53.359337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:53.396973 1131323 cri.go:89] found id: ""
	I0328 01:05:53.397008 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.397021 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:53.397030 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:53.397100 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:53.438368 1131323 cri.go:89] found id: ""
	I0328 01:05:53.438400 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.438408 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:53.438415 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:53.438477 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:53.474679 1131323 cri.go:89] found id: ""
	I0328 01:05:53.474708 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.474732 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:53.474742 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:53.474799 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:53.512509 1131323 cri.go:89] found id: ""
	I0328 01:05:53.512547 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.512560 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:53.512579 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:53.512599 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:53.569536 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:53.569580 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:53.584977 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:53.585016 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:53.657865 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:53.657895 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:53.657908 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:53.733158 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:53.733203 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:52.041508 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:54.541663 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:52.373317 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:54.872090 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:54.913464 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:57.413256 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:56.278693 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:56.291870 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:56.291949 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:56.332909 1131323 cri.go:89] found id: ""
	I0328 01:05:56.332943 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.332957 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:56.332965 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:56.333038 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:56.370608 1131323 cri.go:89] found id: ""
	I0328 01:05:56.370638 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.370649 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:56.370657 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:56.370721 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:56.408031 1131323 cri.go:89] found id: ""
	I0328 01:05:56.408068 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.408081 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:56.408100 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:56.408170 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:56.445057 1131323 cri.go:89] found id: ""
	I0328 01:05:56.445092 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.445105 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:56.445113 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:56.445177 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:56.486868 1131323 cri.go:89] found id: ""
	I0328 01:05:56.486898 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.486908 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:56.486914 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:56.486969 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:56.533594 1131323 cri.go:89] found id: ""
	I0328 01:05:56.533622 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.533632 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:56.533638 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:56.533702 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:56.569200 1131323 cri.go:89] found id: ""
	I0328 01:05:56.569237 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.569250 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:56.569258 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:56.569335 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:56.604919 1131323 cri.go:89] found id: ""
	I0328 01:05:56.604955 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.604968 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:56.604982 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:56.605011 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:56.654473 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:56.654513 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:56.671309 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:56.671339 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:56.739516 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:56.739543 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:56.739559 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:56.817445 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:56.817495 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:59.361711 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:59.375672 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:59.375750 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:59.414329 1131323 cri.go:89] found id: ""
	I0328 01:05:59.414360 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.414371 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:59.414379 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:59.414443 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:59.454813 1131323 cri.go:89] found id: ""
	I0328 01:05:59.454846 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.454855 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:59.454862 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:59.454917 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:59.492890 1131323 cri.go:89] found id: ""
	I0328 01:05:59.492924 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.492936 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:59.492946 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:59.493043 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:59.529412 1131323 cri.go:89] found id: ""
	I0328 01:05:59.529443 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.529454 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:59.529464 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:59.529521 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:59.568620 1131323 cri.go:89] found id: ""
	I0328 01:05:59.568655 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.568664 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:59.568671 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:59.568731 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:59.605826 1131323 cri.go:89] found id: ""
	I0328 01:05:59.605861 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.605874 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:59.605883 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:59.605955 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:59.645799 1131323 cri.go:89] found id: ""
	I0328 01:05:59.645833 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.645847 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:59.645856 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:59.645931 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:59.683866 1131323 cri.go:89] found id: ""
	I0328 01:05:59.683903 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.683916 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:59.683929 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:59.683953 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:59.726678 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:59.726711 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:59.779910 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:59.779954 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:59.795743 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:59.795774 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:59.875137 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:59.875162 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:59.875174 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:56.542345 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:58.542599 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:00.543094 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:57.372258 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:59.872483 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:59.912150 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:01.913694 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:02.455212 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:02.468850 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:02.468945 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:02.506347 1131323 cri.go:89] found id: ""
	I0328 01:06:02.506385 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.506397 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:02.506406 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:02.506484 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:02.546056 1131323 cri.go:89] found id: ""
	I0328 01:06:02.546085 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.546096 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:02.546103 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:02.546173 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:02.585343 1131323 cri.go:89] found id: ""
	I0328 01:06:02.585385 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.585398 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:02.585407 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:02.585563 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:02.625380 1131323 cri.go:89] found id: ""
	I0328 01:06:02.625414 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.625423 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:02.625429 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:02.625486 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:02.664653 1131323 cri.go:89] found id: ""
	I0328 01:06:02.664687 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.664701 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:02.664708 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:02.664764 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:02.704468 1131323 cri.go:89] found id: ""
	I0328 01:06:02.704498 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.704511 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:02.704519 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:02.704595 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:02.740969 1131323 cri.go:89] found id: ""
	I0328 01:06:02.740997 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.741007 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:02.741014 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:02.741102 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:02.782113 1131323 cri.go:89] found id: ""
	I0328 01:06:02.782150 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.782163 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:02.782185 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:02.782203 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:02.836804 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:02.836848 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:02.852266 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:02.852299 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:02.929441 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:02.929467 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:02.929484 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:03.008114 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:03.008156 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:03.041919 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:05.542209 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:02.372332 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:04.871689 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:04.413251 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:06.912348 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
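The interleaved pod_ready lines come from the parallel StartStop runs (processes 1131600, 1130827 and 1130949), each polling a metrics-server pod that never reports Ready. As an illustration only (assuming a kubeconfig that points at the cluster under test; the pod name and namespace are the ones from this run), the same Ready condition and the reason it is failing can be inspected directly:

    # Show the Ready condition and the events for the pod the poller is waiting on.
    kubectl -n kube-system get pod metrics-server-57f55c9bc5-w4ww4 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
    kubectl -n kube-system describe pod metrics-server-57f55c9bc5-w4ww4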
	I0328 01:06:05.554291 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:05.570208 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:05.570304 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:05.610887 1131323 cri.go:89] found id: ""
	I0328 01:06:05.610916 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.610926 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:05.610932 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:05.610991 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:05.651561 1131323 cri.go:89] found id: ""
	I0328 01:06:05.651600 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.651610 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:05.651616 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:05.651681 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:05.690801 1131323 cri.go:89] found id: ""
	I0328 01:06:05.690830 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.690843 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:05.690851 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:05.690920 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:05.729098 1131323 cri.go:89] found id: ""
	I0328 01:06:05.729136 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.729146 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:05.729153 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:05.729225 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:05.774461 1131323 cri.go:89] found id: ""
	I0328 01:06:05.774499 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.774520 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:05.774530 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:05.774602 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:05.812135 1131323 cri.go:89] found id: ""
	I0328 01:06:05.812166 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.812180 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:05.812188 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:05.812255 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:05.847744 1131323 cri.go:89] found id: ""
	I0328 01:06:05.847775 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.847786 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:05.847796 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:05.847863 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:05.885600 1131323 cri.go:89] found id: ""
	I0328 01:06:05.885641 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.885656 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:05.885669 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:05.885684 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:05.963837 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:05.963879 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:06.007342 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:06.007381 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:06.062798 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:06.062843 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:06.077547 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:06.077599 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:06.148373 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:08.648791 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:08.664082 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:08.664154 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:08.701746 1131323 cri.go:89] found id: ""
	I0328 01:06:08.701776 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.701789 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:08.701797 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:08.701855 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:08.739035 1131323 cri.go:89] found id: ""
	I0328 01:06:08.739066 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.739076 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:08.739083 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:08.739136 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:08.776128 1131323 cri.go:89] found id: ""
	I0328 01:06:08.776166 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.776180 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:08.776189 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:08.776255 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:08.816136 1131323 cri.go:89] found id: ""
	I0328 01:06:08.816172 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.816187 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:08.816196 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:08.816271 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:08.855675 1131323 cri.go:89] found id: ""
	I0328 01:06:08.855709 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.855722 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:08.855730 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:08.855802 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:08.893161 1131323 cri.go:89] found id: ""
	I0328 01:06:08.893198 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.893212 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:08.893221 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:08.893297 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:08.935498 1131323 cri.go:89] found id: ""
	I0328 01:06:08.935527 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.935540 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:08.935548 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:08.935622 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:08.971622 1131323 cri.go:89] found id: ""
	I0328 01:06:08.971657 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.971668 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:08.971679 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:08.971696 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:09.039975 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:09.040036 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:09.057877 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:09.057920 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:09.130093 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:09.130119 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:09.130135 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:09.217177 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:09.217228 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:08.040921 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:10.042895 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:06.872367 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:08.873187 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:08.914313 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:11.412330 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:11.762393 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:11.776356 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:11.776424 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:11.811982 1131323 cri.go:89] found id: ""
	I0328 01:06:11.812017 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.812030 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:11.812038 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:11.812103 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:11.849789 1131323 cri.go:89] found id: ""
	I0328 01:06:11.849817 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.849826 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:11.849833 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:11.849884 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:11.890455 1131323 cri.go:89] found id: ""
	I0328 01:06:11.890488 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.890497 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:11.890503 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:11.890559 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:11.929047 1131323 cri.go:89] found id: ""
	I0328 01:06:11.929093 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.929102 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:11.929108 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:11.929164 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:11.969536 1131323 cri.go:89] found id: ""
	I0328 01:06:11.969566 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.969576 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:11.969583 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:11.969641 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:12.008779 1131323 cri.go:89] found id: ""
	I0328 01:06:12.008811 1131323 logs.go:276] 0 containers: []
	W0328 01:06:12.008821 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:12.008828 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:12.008890 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:12.044061 1131323 cri.go:89] found id: ""
	I0328 01:06:12.044091 1131323 logs.go:276] 0 containers: []
	W0328 01:06:12.044104 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:12.044112 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:12.044176 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:12.082307 1131323 cri.go:89] found id: ""
	I0328 01:06:12.082336 1131323 logs.go:276] 0 containers: []
	W0328 01:06:12.082346 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:12.082357 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:12.082369 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:12.133044 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:12.133091 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:12.148584 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:12.148624 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:12.218799 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:12.218834 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:12.218852 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:12.295580 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:12.295623 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:14.842815 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:14.856385 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:14.856456 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:14.895351 1131323 cri.go:89] found id: ""
	I0328 01:06:14.895409 1131323 logs.go:276] 0 containers: []
	W0328 01:06:14.895418 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:14.895424 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:14.895476 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:14.930333 1131323 cri.go:89] found id: ""
	I0328 01:06:14.930366 1131323 logs.go:276] 0 containers: []
	W0328 01:06:14.930380 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:14.930389 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:14.930461 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:14.968701 1131323 cri.go:89] found id: ""
	I0328 01:06:14.968742 1131323 logs.go:276] 0 containers: []
	W0328 01:06:14.968754 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:14.968767 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:14.968867 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:15.004580 1131323 cri.go:89] found id: ""
	I0328 01:06:15.004613 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.004626 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:15.004634 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:15.004700 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:15.046702 1131323 cri.go:89] found id: ""
	I0328 01:06:15.046726 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.046736 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:15.046742 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:15.046795 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:15.088693 1131323 cri.go:89] found id: ""
	I0328 01:06:15.088725 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.088734 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:15.088741 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:15.088797 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:15.130293 1131323 cri.go:89] found id: ""
	I0328 01:06:15.130324 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.130333 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:15.130339 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:15.130394 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:15.172381 1131323 cri.go:89] found id: ""
	I0328 01:06:15.172408 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.172417 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:15.172427 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:15.172440 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:15.225631 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:15.225674 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:15.241251 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:15.241294 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:15.319701 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:15.319731 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:15.319747 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:12.540755 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:14.541618 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:11.371580 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:13.371640 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:15.373147 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:13.911792 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:15.912479 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:17.913926 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:15.406813 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:15.406853 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:17.993893 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:18.007755 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:18.007843 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:18.047750 1131323 cri.go:89] found id: ""
	I0328 01:06:18.047777 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.047786 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:18.047797 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:18.047855 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:18.088264 1131323 cri.go:89] found id: ""
	I0328 01:06:18.088291 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.088303 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:18.088311 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:18.088369 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:18.127485 1131323 cri.go:89] found id: ""
	I0328 01:06:18.127514 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.127523 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:18.127530 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:18.127581 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:18.167462 1131323 cri.go:89] found id: ""
	I0328 01:06:18.167496 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.167510 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:18.167516 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:18.167571 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:18.209536 1131323 cri.go:89] found id: ""
	I0328 01:06:18.209571 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.209583 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:18.209591 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:18.209662 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:18.247565 1131323 cri.go:89] found id: ""
	I0328 01:06:18.247601 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.247614 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:18.247623 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:18.247701 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:18.288123 1131323 cri.go:89] found id: ""
	I0328 01:06:18.288162 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.288172 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:18.288179 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:18.288242 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:18.328132 1131323 cri.go:89] found id: ""
	I0328 01:06:18.328161 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.328170 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:18.328181 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:18.328193 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:18.403245 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:18.403287 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:18.403305 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:18.483446 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:18.483500 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:18.527357 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:18.527392 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:18.588402 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:18.588463 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:16.542137 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:18.542554 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:20.546396 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:17.872147 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:20.373000 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:20.412369 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:22.412661 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:21.103566 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:21.117538 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:21.117616 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:21.174215 1131323 cri.go:89] found id: ""
	I0328 01:06:21.174270 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.174284 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:21.174293 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:21.174364 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:21.238666 1131323 cri.go:89] found id: ""
	I0328 01:06:21.238707 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.238722 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:21.238730 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:21.238803 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:21.303510 1131323 cri.go:89] found id: ""
	I0328 01:06:21.303543 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.303553 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:21.303559 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:21.303614 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:21.345823 1131323 cri.go:89] found id: ""
	I0328 01:06:21.345853 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.345862 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:21.345870 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:21.345940 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:21.386205 1131323 cri.go:89] found id: ""
	I0328 01:06:21.386248 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.386261 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:21.386269 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:21.386335 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:21.427424 1131323 cri.go:89] found id: ""
	I0328 01:06:21.427457 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.427470 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:21.427478 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:21.427546 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:21.465054 1131323 cri.go:89] found id: ""
	I0328 01:06:21.465087 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.465099 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:21.465107 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:21.465177 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:21.507197 1131323 cri.go:89] found id: ""
	I0328 01:06:21.507229 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.507238 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:21.507248 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:21.507263 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:21.586657 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:21.586709 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:21.633702 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:21.633739 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:21.688960 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:21.688999 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:21.704675 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:21.704714 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:21.781612 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:24.282521 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:24.297096 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:24.297185 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:24.338745 1131323 cri.go:89] found id: ""
	I0328 01:06:24.338780 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.338793 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:24.338802 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:24.338872 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:24.375499 1131323 cri.go:89] found id: ""
	I0328 01:06:24.375528 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.375540 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:24.375548 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:24.375616 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:24.410939 1131323 cri.go:89] found id: ""
	I0328 01:06:24.410966 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.410978 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:24.410986 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:24.411042 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:24.455316 1131323 cri.go:89] found id: ""
	I0328 01:06:24.455345 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.455354 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:24.455360 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:24.455427 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:24.493177 1131323 cri.go:89] found id: ""
	I0328 01:06:24.493206 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.493219 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:24.493228 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:24.493300 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:24.533612 1131323 cri.go:89] found id: ""
	I0328 01:06:24.533648 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.533659 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:24.533668 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:24.533743 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:24.573960 1131323 cri.go:89] found id: ""
	I0328 01:06:24.573998 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.574014 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:24.574020 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:24.574074 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:24.617282 1131323 cri.go:89] found id: ""
	I0328 01:06:24.617319 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.617333 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:24.617346 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:24.617364 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:24.691660 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:24.691688 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:24.691707 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:24.773138 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:24.773180 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:24.820408 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:24.820440 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:24.875901 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:24.875940 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:23.041030 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:25.041064 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:22.874513 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:25.378939 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:24.413732 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:26.912433 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:27.392663 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:27.407958 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:27.408046 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:27.446750 1131323 cri.go:89] found id: ""
	I0328 01:06:27.446782 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.446792 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:27.446799 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:27.446872 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:27.489199 1131323 cri.go:89] found id: ""
	I0328 01:06:27.489236 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.489249 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:27.489258 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:27.489316 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:27.525754 1131323 cri.go:89] found id: ""
	I0328 01:06:27.525787 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.525796 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:27.525803 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:27.525861 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:27.560817 1131323 cri.go:89] found id: ""
	I0328 01:06:27.560849 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.560858 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:27.560866 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:27.560930 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:27.597706 1131323 cri.go:89] found id: ""
	I0328 01:06:27.597736 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.597744 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:27.597750 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:27.597821 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:27.635170 1131323 cri.go:89] found id: ""
	I0328 01:06:27.635211 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.635223 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:27.635232 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:27.635299 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:27.672043 1131323 cri.go:89] found id: ""
	I0328 01:06:27.672079 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.672091 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:27.672099 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:27.672166 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:27.711401 1131323 cri.go:89] found id: ""
	I0328 01:06:27.711435 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.711448 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:27.711468 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:27.711488 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:27.755172 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:27.755211 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:27.807588 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:27.807632 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:27.823557 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:27.823589 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:27.905292 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:27.905316 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:27.905329 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:27.041105 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:29.041205 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:27.873797 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:30.374214 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:29.412378 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:31.413211 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:30.491565 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:30.505601 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:30.505667 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:30.541894 1131323 cri.go:89] found id: ""
	I0328 01:06:30.541929 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.541940 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:30.541949 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:30.542029 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:30.581484 1131323 cri.go:89] found id: ""
	I0328 01:06:30.581514 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.581532 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:30.581538 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:30.581613 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:30.624788 1131323 cri.go:89] found id: ""
	I0328 01:06:30.624830 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.624842 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:30.624850 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:30.624922 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:30.664373 1131323 cri.go:89] found id: ""
	I0328 01:06:30.664403 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.664413 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:30.664420 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:30.664489 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:30.702885 1131323 cri.go:89] found id: ""
	I0328 01:06:30.702917 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.702928 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:30.702934 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:30.703006 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:30.748170 1131323 cri.go:89] found id: ""
	I0328 01:06:30.748205 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.748217 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:30.748226 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:30.748316 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:30.785218 1131323 cri.go:89] found id: ""
	I0328 01:06:30.785255 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.785268 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:30.785276 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:30.785343 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:30.825529 1131323 cri.go:89] found id: ""
	I0328 01:06:30.825555 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.825565 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:30.825575 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:30.825589 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:30.881353 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:30.881391 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:30.896682 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:30.896718 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:30.973356 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:30.973386 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:30.973402 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:31.049014 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:31.049047 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:33.594365 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:33.609372 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:33.609460 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:33.648699 1131323 cri.go:89] found id: ""
	I0328 01:06:33.648728 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.648749 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:33.648757 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:33.648829 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:33.686707 1131323 cri.go:89] found id: ""
	I0328 01:06:33.686744 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.686758 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:33.686767 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:33.686832 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:33.723091 1131323 cri.go:89] found id: ""
	I0328 01:06:33.723121 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.723130 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:33.723136 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:33.723187 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:33.763439 1131323 cri.go:89] found id: ""
	I0328 01:06:33.763471 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.763481 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:33.763488 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:33.763544 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:33.812236 1131323 cri.go:89] found id: ""
	I0328 01:06:33.812271 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.812285 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:33.812294 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:33.812365 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:33.849421 1131323 cri.go:89] found id: ""
	I0328 01:06:33.849454 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.849465 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:33.849473 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:33.849528 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:33.888020 1131323 cri.go:89] found id: ""
	I0328 01:06:33.888051 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.888065 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:33.888078 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:33.888145 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:33.925952 1131323 cri.go:89] found id: ""
	I0328 01:06:33.925990 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.926003 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:33.926016 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:33.926034 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:33.976695 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:33.976734 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:33.991708 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:33.991752 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:34.068244 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:34.068276 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:34.068293 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:34.155843 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:34.155885 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:31.041375 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:33.041526 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:35.541169 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:32.872009 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:34.873043 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:33.913191 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:36.413213 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:36.697480 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:36.712322 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:36.712420 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:36.749541 1131323 cri.go:89] found id: ""
	I0328 01:06:36.749570 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.749579 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:36.749587 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:36.749655 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:36.788226 1131323 cri.go:89] found id: ""
	I0328 01:06:36.788255 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.788264 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:36.788270 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:36.788323 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:36.823824 1131323 cri.go:89] found id: ""
	I0328 01:06:36.823856 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.823866 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:36.823872 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:36.823927 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:36.869331 1131323 cri.go:89] found id: ""
	I0328 01:06:36.869362 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.869371 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:36.869378 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:36.869473 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:36.907918 1131323 cri.go:89] found id: ""
	I0328 01:06:36.907950 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.907960 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:36.907966 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:36.908028 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:36.947708 1131323 cri.go:89] found id: ""
	I0328 01:06:36.947738 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.947749 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:36.947757 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:36.947824 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:36.986200 1131323 cri.go:89] found id: ""
	I0328 01:06:36.986251 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.986266 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:36.986275 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:36.986350 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:37.026670 1131323 cri.go:89] found id: ""
	I0328 01:06:37.026698 1131323 logs.go:276] 0 containers: []
	W0328 01:06:37.026708 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:37.026718 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:37.026732 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:37.079891 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:37.079933 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:37.094347 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:37.094378 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:37.168653 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:37.168681 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:37.168695 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:37.247909 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:37.247949 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:39.791285 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:39.807921 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:39.808000 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:39.851460 1131323 cri.go:89] found id: ""
	I0328 01:06:39.851499 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.851512 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:39.851520 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:39.851593 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:39.889506 1131323 cri.go:89] found id: ""
	I0328 01:06:39.889541 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.889554 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:39.889564 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:39.889632 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:39.930291 1131323 cri.go:89] found id: ""
	I0328 01:06:39.930321 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.930331 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:39.930337 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:39.930400 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:39.965121 1131323 cri.go:89] found id: ""
	I0328 01:06:39.965160 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.965174 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:39.965183 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:39.965252 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:40.003217 1131323 cri.go:89] found id: ""
	I0328 01:06:40.003248 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.003258 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:40.003264 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:40.003319 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:40.042702 1131323 cri.go:89] found id: ""
	I0328 01:06:40.042737 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.042749 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:40.042759 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:40.042826 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:40.079733 1131323 cri.go:89] found id: ""
	I0328 01:06:40.079769 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.079780 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:40.079788 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:40.079852 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:40.117066 1131323 cri.go:89] found id: ""
	I0328 01:06:40.117098 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.117107 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:40.117117 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:40.117130 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:40.158589 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:40.158623 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:40.210997 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:40.211049 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:40.225419 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:40.225453 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:40.305356 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:40.305385 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:40.305401 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:37.541534 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:39.541905 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:36.874220 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:39.373763 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:38.413719 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:40.912939 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:42.913528 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:42.896394 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:42.912285 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:42.912355 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:42.949381 1131323 cri.go:89] found id: ""
	I0328 01:06:42.949411 1131323 logs.go:276] 0 containers: []
	W0328 01:06:42.949420 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:42.949427 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:42.949496 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:42.985325 1131323 cri.go:89] found id: ""
	I0328 01:06:42.985358 1131323 logs.go:276] 0 containers: []
	W0328 01:06:42.985371 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:42.985388 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:42.985456 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:43.023570 1131323 cri.go:89] found id: ""
	I0328 01:06:43.023616 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.023630 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:43.023638 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:43.023714 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:43.062995 1131323 cri.go:89] found id: ""
	I0328 01:06:43.063025 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.063036 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:43.063042 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:43.063111 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:43.101666 1131323 cri.go:89] found id: ""
	I0328 01:06:43.101704 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.101713 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:43.101720 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:43.101789 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:43.150713 1131323 cri.go:89] found id: ""
	I0328 01:06:43.150745 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.150757 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:43.150765 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:43.150830 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:43.193449 1131323 cri.go:89] found id: ""
	I0328 01:06:43.193479 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.193487 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:43.193495 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:43.193559 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:43.237641 1131323 cri.go:89] found id: ""
	I0328 01:06:43.237673 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.237682 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:43.237698 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:43.237714 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:43.287282 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:43.287320 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:43.303307 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:43.303343 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:43.383597 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:43.383619 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:43.383632 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:43.467874 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:43.467914 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:42.041406 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:44.540550 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:41.874286 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:44.372393 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:45.410973 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:47.412852 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:46.011081 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:46.025731 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:46.025824 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:46.064336 1131323 cri.go:89] found id: ""
	I0328 01:06:46.064371 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.064385 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:46.064394 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:46.064451 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:46.104493 1131323 cri.go:89] found id: ""
	I0328 01:06:46.104530 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.104550 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:46.104559 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:46.104636 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:46.147546 1131323 cri.go:89] found id: ""
	I0328 01:06:46.147582 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.147594 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:46.147602 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:46.147656 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:46.186162 1131323 cri.go:89] found id: ""
	I0328 01:06:46.186197 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.186207 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:46.186213 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:46.186296 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:46.230412 1131323 cri.go:89] found id: ""
	I0328 01:06:46.230450 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.230464 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:46.230473 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:46.230552 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:46.266000 1131323 cri.go:89] found id: ""
	I0328 01:06:46.266037 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.266050 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:46.266059 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:46.266126 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:46.301031 1131323 cri.go:89] found id: ""
	I0328 01:06:46.301065 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.301077 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:46.301084 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:46.301155 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:46.339222 1131323 cri.go:89] found id: ""
	I0328 01:06:46.339248 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.339258 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:46.339271 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:46.339290 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:46.352558 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:46.352595 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:46.427283 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:46.427308 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:46.427325 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:46.512134 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:46.512178 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:46.558276 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:46.558307 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:49.113455 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:49.127554 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:49.127645 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:49.169380 1131323 cri.go:89] found id: ""
	I0328 01:06:49.169421 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.169435 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:49.169444 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:49.169511 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:49.204540 1131323 cri.go:89] found id: ""
	I0328 01:06:49.204568 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.204579 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:49.204596 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:49.204664 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:49.243074 1131323 cri.go:89] found id: ""
	I0328 01:06:49.243102 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.243112 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:49.243119 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:49.243170 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:49.281264 1131323 cri.go:89] found id: ""
	I0328 01:06:49.281301 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.281314 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:49.281322 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:49.281391 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:49.320473 1131323 cri.go:89] found id: ""
	I0328 01:06:49.320505 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.320514 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:49.320521 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:49.320592 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:49.357715 1131323 cri.go:89] found id: ""
	I0328 01:06:49.357749 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.357759 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:49.357766 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:49.357823 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:49.398427 1131323 cri.go:89] found id: ""
	I0328 01:06:49.398464 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.398477 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:49.398498 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:49.398576 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:49.439921 1131323 cri.go:89] found id: ""
	I0328 01:06:49.439956 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.439969 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:49.439982 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:49.440003 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:49.557260 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:49.557289 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:49.557312 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:49.640105 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:49.640169 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:49.683153 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:49.683185 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:49.737420 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:49.737463 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:46.541377 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:49.041761 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:46.374869 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:48.875897 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:49.912535 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:51.912893 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:52.253208 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:52.268572 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:52.268649 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:52.305136 1131323 cri.go:89] found id: ""
	I0328 01:06:52.305180 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.305193 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:52.305202 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:52.305273 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:52.344774 1131323 cri.go:89] found id: ""
	I0328 01:06:52.344806 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.344816 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:52.344823 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:52.344885 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:52.382127 1131323 cri.go:89] found id: ""
	I0328 01:06:52.382174 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.382185 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:52.382200 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:52.382280 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:52.421340 1131323 cri.go:89] found id: ""
	I0328 01:06:52.421368 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.421377 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:52.421383 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:52.421433 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:52.460046 1131323 cri.go:89] found id: ""
	I0328 01:06:52.460084 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.460100 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:52.460107 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:52.460164 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:52.500067 1131323 cri.go:89] found id: ""
	I0328 01:06:52.500094 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.500102 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:52.500109 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:52.500171 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:52.537614 1131323 cri.go:89] found id: ""
	I0328 01:06:52.537646 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.537671 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:52.537680 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:52.537745 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:52.577362 1131323 cri.go:89] found id: ""
	I0328 01:06:52.577392 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.577402 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:52.577417 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:52.577434 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:52.633638 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:52.633689 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:52.650762 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:52.650796 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:52.729436 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:52.729470 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:52.729484 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:52.818193 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:52.818248 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:51.540541 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:53.541340 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:55.542165 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:51.376916 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:53.872313 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:55.873335 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:54.411986 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:56.412892 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:55.362950 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:55.378461 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:55.378577 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:55.419968 1131323 cri.go:89] found id: ""
	I0328 01:06:55.419995 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.420005 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:55.420010 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:55.420072 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:55.464308 1131323 cri.go:89] found id: ""
	I0328 01:06:55.464341 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.464350 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:55.464357 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:55.464421 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:55.523059 1131323 cri.go:89] found id: ""
	I0328 01:06:55.523092 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.523106 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:55.523114 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:55.523186 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:55.570957 1131323 cri.go:89] found id: ""
	I0328 01:06:55.570990 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.571004 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:55.571013 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:55.571077 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:55.606712 1131323 cri.go:89] found id: ""
	I0328 01:06:55.606739 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.606749 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:55.606755 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:55.606817 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:55.646445 1131323 cri.go:89] found id: ""
	I0328 01:06:55.646477 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.646486 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:55.646493 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:55.646548 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:55.685176 1131323 cri.go:89] found id: ""
	I0328 01:06:55.685208 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.685217 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:55.685225 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:55.685289 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:55.722948 1131323 cri.go:89] found id: ""
	I0328 01:06:55.722984 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.722995 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:55.723006 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:55.723022 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:55.797332 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:55.797368 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:55.797385 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:55.877648 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:55.877688 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:55.918966 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:55.918997 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:55.971226 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:55.971272 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:58.488464 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:58.504999 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:58.505088 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:58.549290 1131323 cri.go:89] found id: ""
	I0328 01:06:58.549325 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.549338 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:58.549347 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:58.549414 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:58.589222 1131323 cri.go:89] found id: ""
	I0328 01:06:58.589252 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.589261 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:58.589271 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:58.589337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:58.626470 1131323 cri.go:89] found id: ""
	I0328 01:06:58.626499 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.626508 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:58.626514 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:58.626578 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:58.671634 1131323 cri.go:89] found id: ""
	I0328 01:06:58.671663 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.671674 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:58.671683 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:58.671744 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:58.707335 1131323 cri.go:89] found id: ""
	I0328 01:06:58.707370 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.707381 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:58.707390 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:58.707459 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:58.745635 1131323 cri.go:89] found id: ""
	I0328 01:06:58.745666 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.745679 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:58.745687 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:58.745752 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:58.792172 1131323 cri.go:89] found id: ""
	I0328 01:06:58.792205 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.792216 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:58.792225 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:58.792287 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:58.840027 1131323 cri.go:89] found id: ""
	I0328 01:06:58.840063 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.840075 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:58.840089 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:58.840108 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:58.921964 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:58.921988 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:58.922003 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:59.016935 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:59.016980 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:59.065747 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:59.065788 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:59.119189 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:59.119231 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:58.042362 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:00.544351 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:57.875649 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:00.371953 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:58.406154 1130949 pod_ready.go:81] duration metric: took 4m0.000981669s for pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace to be "Ready" ...
	E0328 01:06:58.406192 1130949 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0328 01:06:58.406218 1130949 pod_ready.go:38] duration metric: took 4m11.713667334s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:06:58.406275 1130949 kubeadm.go:591] duration metric: took 4m19.018883002s to restartPrimaryControlPlane
	W0328 01:06:58.406372 1130949 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:06:58.406432 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:07:01.637081 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:07:01.652557 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:07:01.652634 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:07:01.691795 1131323 cri.go:89] found id: ""
	I0328 01:07:01.691832 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.691846 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:07:01.691854 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:07:01.691927 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:07:01.732815 1131323 cri.go:89] found id: ""
	I0328 01:07:01.732850 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.732861 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:07:01.732868 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:07:01.732938 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:07:01.776370 1131323 cri.go:89] found id: ""
	I0328 01:07:01.776408 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.776422 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:07:01.776431 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:07:01.776501 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:07:01.821260 1131323 cri.go:89] found id: ""
	I0328 01:07:01.821290 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.821301 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:07:01.821308 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:07:01.821377 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:07:01.860666 1131323 cri.go:89] found id: ""
	I0328 01:07:01.860696 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.860708 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:07:01.860719 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:07:01.860787 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:07:01.898255 1131323 cri.go:89] found id: ""
	I0328 01:07:01.898291 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.898304 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:07:01.898314 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:07:01.898383 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:07:01.937770 1131323 cri.go:89] found id: ""
	I0328 01:07:01.937809 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.937822 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:07:01.937830 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:07:01.937901 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:07:01.976946 1131323 cri.go:89] found id: ""
	I0328 01:07:01.976981 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.976994 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:07:01.977008 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:07:01.977027 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:07:02.062804 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:07:02.062845 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:07:02.110750 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:07:02.110783 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:07:02.179633 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:07:02.179677 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:07:02.203131 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:07:02.203181 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:07:02.303281 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:07:04.804238 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:07:04.819654 1131323 kubeadm.go:591] duration metric: took 4m2.527630194s to restartPrimaryControlPlane
	W0328 01:07:04.819747 1131323 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:07:04.819787 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:07:03.041692 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:05.540478 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:02.372472 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:04.376413 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:07.322821 1131323 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.50300166s)
	I0328 01:07:07.322918 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:07:07.338692 1131323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:07:07.349812 1131323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:07:07.361566 1131323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:07:07.361597 1131323 kubeadm.go:156] found existing configuration files:
	
	I0328 01:07:07.361667 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:07:07.372926 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:07:07.373008 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:07:07.383770 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:07:07.394260 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:07:07.394332 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:07:07.405874 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:07:07.417177 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:07:07.417254 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:07:07.428589 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:07:07.438788 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:07:07.438845 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
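The run above (pid 1131323) is minikube's stale-kubeconfig cleanup before re-running kubeadm init: each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf is grepped for the expected control-plane endpoint, and any file that is missing or does not mention it is removed. A minimal bash sketch of that pattern, using the endpoint and file list shown in the log (the loop itself is illustrative, not minikube's actual code):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # missing or stale kubeconfig: delete it so the following kubeadm init rewrites it
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done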
	I0328 01:07:07.449649 1131323 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:07:07.533886 1131323 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0328 01:07:07.533989 1131323 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:07:07.693599 1131323 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:07:07.693736 1131323 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:07:07.693852 1131323 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:07:07.910557 1131323 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:07:07.912634 1131323 out.go:204]   - Generating certificates and keys ...
	I0328 01:07:07.912743 1131323 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:07:07.912855 1131323 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:07:07.912984 1131323 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:07:07.913098 1131323 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:07:07.913212 1131323 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:07:07.913298 1131323 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:07:07.913384 1131323 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:07:07.913569 1131323 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:07:07.913947 1131323 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:07:07.914429 1131323 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:07:07.914649 1131323 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:07:07.914728 1131323 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:07:08.225778 1131323 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:07:08.353927 1131323 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:07:08.631240 1131323 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:07:08.824445 1131323 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:07:08.840240 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:07:08.841200 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:07:08.841315 1131323 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:07:08.997129 1131323 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:07:08.999073 1131323 out.go:204]   - Booting up control plane ...
	I0328 01:07:08.999224 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:07:09.014811 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:07:09.015898 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:07:09.016727 1131323 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:07:09.019426 1131323 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:07:07.541363 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:10.041094 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:06.874606 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:09.372537 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:12.540137 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:14.541608 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:11.372643 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:13.873029 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:16.541814 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:19.047225 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:16.372556 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:18.871954 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:20.872047 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:21.542880 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:24.041786 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:22.872845 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:24.873747 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:26.042186 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:28.541303 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:30.540610 1130949 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.134147754s)
	I0328 01:07:30.540688 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:07:30.558971 1130949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:07:30.570331 1130949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:07:30.581192 1130949 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:07:30.581246 1130949 kubeadm.go:156] found existing configuration files:
	
	I0328 01:07:30.581306 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:07:30.592337 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:07:30.592410 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:07:30.603288 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:07:30.613714 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:07:30.613776 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:07:30.624281 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:07:30.634569 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:07:30.634644 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:07:30.647279 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:07:30.658554 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:07:30.658646 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:07:30.670364 1130949 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:07:30.730349 1130949 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0328 01:07:30.730414 1130949 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:07:30.887056 1130949 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:07:30.887234 1130949 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:07:30.887385 1130949 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:07:31.104288 1130949 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:07:27.373135 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:29.373436 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:31.106496 1130949 out.go:204]   - Generating certificates and keys ...
	I0328 01:07:31.106628 1130949 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:07:31.106697 1130949 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:07:31.106765 1130949 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:07:31.106826 1130949 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:07:31.106892 1130949 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:07:31.107528 1130949 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:07:31.108302 1130949 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:07:31.112246 1130949 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:07:31.112762 1130949 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:07:31.113711 1130949 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:07:31.115230 1130949 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:07:31.115284 1130949 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:07:31.297632 1130949 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:07:32.446275 1130949 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:07:32.565869 1130949 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:07:32.641288 1130949 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:07:32.817229 1130949 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:07:32.817814 1130949 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:07:32.820366 1130949 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:07:32.822328 1130949 out.go:204]   - Booting up control plane ...
	I0328 01:07:32.822467 1130949 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:07:32.822550 1130949 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:07:32.822990 1130949 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:07:32.846800 1130949 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:07:32.847829 1130949 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:07:32.847902 1130949 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:07:31.044103 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:33.542106 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:35.542875 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:31.873591 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:33.875737 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:35.881819 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:32.992001 1130949 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:07:38.997010 1130949 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003888 seconds
	I0328 01:07:39.012971 1130949 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 01:07:39.036328 1130949 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 01:07:39.569806 1130949 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 01:07:39.570135 1130949 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-808809 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 01:07:40.085165 1130949 kubeadm.go:309] [bootstrap-token] Using token: 4zk5zi.uttj4zihedk5oj6k
	I0328 01:07:40.086719 1130949 out.go:204]   - Configuring RBAC rules ...
	I0328 01:07:40.086873 1130949 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 01:07:40.096373 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 01:07:40.106484 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 01:07:40.110525 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 01:07:40.120015 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 01:07:40.129060 1130949 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 01:07:40.141167 1130949 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 01:07:40.415429 1130949 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 01:07:40.507275 1130949 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 01:07:40.507333 1130949 kubeadm.go:309] 
	I0328 01:07:40.507551 1130949 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 01:07:40.507617 1130949 kubeadm.go:309] 
	I0328 01:07:40.507860 1130949 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 01:07:40.507891 1130949 kubeadm.go:309] 
	I0328 01:07:40.507947 1130949 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 01:07:40.508057 1130949 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 01:07:40.508140 1130949 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 01:07:40.508157 1130949 kubeadm.go:309] 
	I0328 01:07:40.508250 1130949 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 01:07:40.508264 1130949 kubeadm.go:309] 
	I0328 01:07:40.508329 1130949 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 01:07:40.508344 1130949 kubeadm.go:309] 
	I0328 01:07:40.508421 1130949 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 01:07:40.508539 1130949 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 01:07:40.508626 1130949 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 01:07:40.508632 1130949 kubeadm.go:309] 
	I0328 01:07:40.508804 1130949 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 01:07:40.508970 1130949 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 01:07:40.508990 1130949 kubeadm.go:309] 
	I0328 01:07:40.509155 1130949 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4zk5zi.uttj4zihedk5oj6k \
	I0328 01:07:40.509474 1130949 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 \
	I0328 01:07:40.509514 1130949 kubeadm.go:309] 	--control-plane 
	I0328 01:07:40.509524 1130949 kubeadm.go:309] 
	I0328 01:07:40.509641 1130949 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 01:07:40.509655 1130949 kubeadm.go:309] 
	I0328 01:07:40.509767 1130949 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4zk5zi.uttj4zihedk5oj6k \
	I0328 01:07:40.509932 1130949 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 
	I0328 01:07:40.510139 1130949 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:07:40.510157 1130949 cni.go:84] Creating CNI manager for ""
	I0328 01:07:40.510166 1130949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:07:40.512099 1130949 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:07:38.041290 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:40.041569 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:38.373789 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:40.374369 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:40.513314 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:07:40.563257 1130949 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
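The log only records that a 457-byte /etc/cni/net.d/1-k8s.conflist was copied to the node; its contents are not captured here. For orientation, a typical bridge CNI conflist has roughly the shape below. This is an illustrative sketch only: the subnet, plugin options and exact field values are assumptions, not the file minikube actually wrote.

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF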
	I0328 01:07:40.627024 1130949 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 01:07:40.627097 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:40.627137 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-808809 minikube.k8s.io/updated_at=2024_03_28T01_07_40_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=embed-certs-808809 minikube.k8s.io/primary=true
	I0328 01:07:40.928916 1130949 ops.go:34] apiserver oom_adj: -16
	I0328 01:07:40.929138 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:41.429797 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:41.930103 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:42.429366 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:42.540932 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:44.035055 1131600 pod_ready.go:81] duration metric: took 4m0.000860608s for pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace to be "Ready" ...
	E0328 01:07:44.035094 1131600 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0328 01:07:44.035124 1131600 pod_ready.go:38] duration metric: took 4m14.608998431s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:07:44.035180 1131600 kubeadm.go:591] duration metric: took 4m23.470228903s to restartPrimaryControlPlane
	W0328 01:07:44.035292 1131600 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:07:44.035344 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:07:42.375179 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:44.876120 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:42.929464 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:43.429369 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:43.929241 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:44.429904 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:44.930251 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:45.429816 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:45.930177 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:46.429416 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:46.929152 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:47.429708 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:49.021732 1131323 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0328 01:07:49.021890 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:07:49.022195 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
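At this point the v1.20.0 profile (pid 1131323) is stuck in kubeadm's kubelet-check: the kubelet health endpoint on port 10248 is refusing connections. The usual first diagnostics are to repeat the probe kubeadm uses and inspect the kubelet unit itself; a generic sketch (standard troubleshooting steps, not commands taken from this run):

    curl -sSL http://localhost:10248/healthz            # the probe kubeadm's kubelet-check performs
    sudo systemctl status kubelet --no-pager            # is the unit running at all?
    sudo journalctl -u kubelet --no-pager | tail -n 50  # recent kubelet errors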
	I0328 01:07:47.373358 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:49.872482 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:47.929139 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:48.429732 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:48.930207 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:49.429230 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:49.929298 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:50.429919 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:50.929364 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:51.429403 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:51.929356 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:52.429410 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:52.929894 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:53.043365 1130949 kubeadm.go:1107] duration metric: took 12.416334145s to wait for elevateKubeSystemPrivileges
	W0328 01:07:53.043410 1130949 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 01:07:53.043419 1130949 kubeadm.go:393] duration metric: took 5m13.709259014s to StartCluster
	I0328 01:07:53.043445 1130949 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:07:53.043560 1130949 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:07:53.045798 1130949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:07:53.046158 1130949 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 01:07:53.047867 1130949 out.go:177] * Verifying Kubernetes components...
	I0328 01:07:53.046201 1130949 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 01:07:53.046412 1130949 config.go:182] Loaded profile config "embed-certs-808809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:07:53.049163 1130949 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-808809"
	I0328 01:07:53.049175 1130949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:07:53.049195 1130949 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-808809"
	W0328 01:07:53.049204 1130949 addons.go:243] addon storage-provisioner should already be in state true
	I0328 01:07:53.049230 1130949 host.go:66] Checking if "embed-certs-808809" exists ...
	I0328 01:07:53.049205 1130949 addons.go:69] Setting default-storageclass=true in profile "embed-certs-808809"
	I0328 01:07:53.049250 1130949 addons.go:69] Setting metrics-server=true in profile "embed-certs-808809"
	I0328 01:07:53.049271 1130949 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-808809"
	I0328 01:07:53.049309 1130949 addons.go:234] Setting addon metrics-server=true in "embed-certs-808809"
	W0328 01:07:53.049327 1130949 addons.go:243] addon metrics-server should already be in state true
	I0328 01:07:53.049371 1130949 host.go:66] Checking if "embed-certs-808809" exists ...
	I0328 01:07:53.049530 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.049569 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.049696 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.049729 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.049795 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.049838 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.067042 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36143
	I0328 01:07:53.067078 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33653
	I0328 01:07:53.067536 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.067599 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.068156 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.068184 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.068289 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.068315 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.068583 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.068669 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.069095 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.069121 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.069245 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.069276 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.069991 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43737
	I0328 01:07:53.070509 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.071078 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.071103 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.071480 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.071705 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.075617 1130949 addons.go:234] Setting addon default-storageclass=true in "embed-certs-808809"
	W0328 01:07:53.075659 1130949 addons.go:243] addon default-storageclass should already be in state true
	I0328 01:07:53.075703 1130949 host.go:66] Checking if "embed-certs-808809" exists ...
	I0328 01:07:53.075982 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.076011 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.085991 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I0328 01:07:53.086508 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.086724 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36033
	I0328 01:07:53.087105 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.087122 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.087158 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.087646 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.087667 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.087706 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.087922 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.088031 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.088225 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.089941 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:07:53.090168 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:07:53.091945 1130949 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:07:53.093023 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41613
	I0328 01:07:53.093537 1130949 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:07:53.093553 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 01:07:53.093563 1130949 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 01:07:53.095147 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 01:07:53.095165 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 01:07:53.093574 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:07:53.095185 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:07:53.093939 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.096301 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.096322 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.096662 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.097251 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.097306 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.098907 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.099014 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.099513 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:07:53.099546 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.099996 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:07:53.100126 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:07:53.100177 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:07:53.100187 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.100287 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:07:53.100392 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:07:53.100470 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:07:53.100576 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:07:53.100709 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:07:53.100796 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:07:53.114056 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41121
	I0328 01:07:53.114680 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.115279 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.115313 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.115721 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.116061 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.118022 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:07:53.118348 1130949 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 01:07:53.118370 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 01:07:53.118391 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:07:53.121337 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.121699 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:07:53.121728 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.121906 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:07:53.122084 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:07:53.122266 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:07:53.122414 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:07:53.242121 1130949 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:07:53.267118 1130949 node_ready.go:35] waiting up to 6m0s for node "embed-certs-808809" to be "Ready" ...
	I0328 01:07:53.276640 1130949 node_ready.go:49] node "embed-certs-808809" has status "Ready":"True"
	I0328 01:07:53.276670 1130949 node_ready.go:38] duration metric: took 9.513599ms for node "embed-certs-808809" to be "Ready" ...
	I0328 01:07:53.276683 1130949 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:07:53.283091 1130949 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-2rn6k" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:53.325201 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 01:07:53.325234 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 01:07:53.341335 1130949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 01:07:53.361084 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 01:07:53.361109 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 01:07:53.393089 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:07:53.393116 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 01:07:53.419245 1130949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:07:53.445663 1130949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:07:53.515515 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:53.515555 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:53.515871 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:53.515891 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:53.515901 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:53.515910 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:53.516173 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:53.516253 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:53.516212 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:53.527854 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:53.527882 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:53.528152 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:53.528173 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:53.528220 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.159164 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159192 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159264 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159292 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159523 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.159597 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.159619 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.159637 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.159648 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159658 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159660 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.159667 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.159688 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159696 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159981 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.160037 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.160056 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.160062 1130949 addons.go:470] Verifying addon metrics-server=true in "embed-certs-808809"
	I0328 01:07:54.160088 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.160090 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.160106 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.162879 1130949 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0328 01:07:54.022449 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:07:54.022704 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:07:52.372314 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:54.372913 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:54.164263 1130949 addons.go:505] duration metric: took 1.11806212s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
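With default-storageclass, metrics-server and storage-provisioner enabled, the test goes on to verify the metrics-server addon. A quick manual check of the objects the addon apply created would look roughly like the sketch below; the deployment and APIService names are the standard metrics-server ones and are assumed here rather than read from this log.

    kubectl --context embed-certs-808809 -n kube-system get deploy metrics-server
    kubectl --context embed-certs-808809 get apiservice v1beta1.metrics.k8s.io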
	I0328 01:07:55.294728 1130949 pod_ready.go:102] pod "coredns-76f75df574-2rn6k" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:55.790690 1130949 pod_ready.go:92] pod "coredns-76f75df574-2rn6k" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.790717 1130949 pod_ready.go:81] duration metric: took 2.50759161s for pod "coredns-76f75df574-2rn6k" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.790726 1130949 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-pgcdh" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.796249 1130949 pod_ready.go:92] pod "coredns-76f75df574-pgcdh" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.796279 1130949 pod_ready.go:81] duration metric: took 5.54233ms for pod "coredns-76f75df574-pgcdh" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.796291 1130949 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.801226 1130949 pod_ready.go:92] pod "etcd-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.801254 1130949 pod_ready.go:81] duration metric: took 4.956106ms for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.801263 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.814571 1130949 pod_ready.go:92] pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.814599 1130949 pod_ready.go:81] duration metric: took 13.328662ms for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.814613 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.825995 1130949 pod_ready.go:92] pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.826022 1130949 pod_ready.go:81] duration metric: took 11.401096ms for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.826035 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tjbhs" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.188116 1130949 pod_ready.go:92] pod "kube-proxy-tjbhs" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:56.188147 1130949 pod_ready.go:81] duration metric: took 362.103962ms for pod "kube-proxy-tjbhs" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.188161 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.588294 1130949 pod_ready.go:92] pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:56.588334 1130949 pod_ready.go:81] duration metric: took 400.16517ms for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.588347 1130949 pod_ready.go:38] duration metric: took 3.311651338s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:07:56.588369 1130949 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:07:56.588445 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:07:56.606404 1130949 api_server.go:72] duration metric: took 3.560197315s to wait for apiserver process to appear ...
	I0328 01:07:56.606435 1130949 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:07:56.606460 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:07:56.612218 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 200:
	ok
	I0328 01:07:56.613459 1130949 api_server.go:141] control plane version: v1.29.3
	I0328 01:07:56.613481 1130949 api_server.go:131] duration metric: took 7.039378ms to wait for apiserver health ...
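The healthz wait above polls the apiserver endpoint directly and got a 200 with body "ok". The equivalent manual probe against this cluster is a single request (the -k flag skips TLS verification because the cluster CA is not in the local trust store; /healthz is served to anonymous clients on a default kubeadm setup):

    curl -k https://192.168.72.210:8443/healthz    # expect: ok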
	I0328 01:07:56.613490 1130949 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:07:56.793192 1130949 system_pods.go:59] 9 kube-system pods found
	I0328 01:07:56.793227 1130949 system_pods.go:61] "coredns-76f75df574-2rn6k" [2a77c778-dd83-4e2e-b45a-ca16e3922b45] Running
	I0328 01:07:56.793232 1130949 system_pods.go:61] "coredns-76f75df574-pgcdh" [52452b24-490e-4999-b700-198c6f9b2fa1] Running
	I0328 01:07:56.793236 1130949 system_pods.go:61] "etcd-embed-certs-808809" [cba526ab-c8ca-4b90-abb8-533d1bf6cb4f] Running
	I0328 01:07:56.793239 1130949 system_pods.go:61] "kube-apiserver-embed-certs-808809" [02934ac6-1e07-4cbe-ba2f-4149e59d6044] Running
	I0328 01:07:56.793243 1130949 system_pods.go:61] "kube-controller-manager-embed-certs-808809" [b3350ff9-7abb-407e-939c-fcde61186c17] Running
	I0328 01:07:56.793246 1130949 system_pods.go:61] "kube-proxy-tjbhs" [cdb30ca1-5165-4e24-888a-df79af7987d0] Running
	I0328 01:07:56.793249 1130949 system_pods.go:61] "kube-scheduler-embed-certs-808809" [5b52a22d-6ffb-468b-b7b9-8f89b0dee3b8] Running
	I0328 01:07:56.793255 1130949 system_pods.go:61] "metrics-server-57f55c9bc5-bqbfl" [8434fd7d-838b-4cf2-96a3-e4d613633871] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:07:56.793260 1130949 system_pods.go:61] "storage-provisioner" [20c1951e-7da8-4025-bbcf-2da60f87f3ab] Running
	I0328 01:07:56.793268 1130949 system_pods.go:74] duration metric: took 179.77213ms to wait for pod list to return data ...
	I0328 01:07:56.793275 1130949 default_sa.go:34] waiting for default service account to be created ...
	I0328 01:07:56.988234 1130949 default_sa.go:45] found service account: "default"
	I0328 01:07:56.988274 1130949 default_sa.go:55] duration metric: took 194.984089ms for default service account to be created ...
	I0328 01:07:56.988288 1130949 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 01:07:57.192153 1130949 system_pods.go:86] 9 kube-system pods found
	I0328 01:07:57.192188 1130949 system_pods.go:89] "coredns-76f75df574-2rn6k" [2a77c778-dd83-4e2e-b45a-ca16e3922b45] Running
	I0328 01:07:57.192194 1130949 system_pods.go:89] "coredns-76f75df574-pgcdh" [52452b24-490e-4999-b700-198c6f9b2fa1] Running
	I0328 01:07:57.192200 1130949 system_pods.go:89] "etcd-embed-certs-808809" [cba526ab-c8ca-4b90-abb8-533d1bf6cb4f] Running
	I0328 01:07:57.192205 1130949 system_pods.go:89] "kube-apiserver-embed-certs-808809" [02934ac6-1e07-4cbe-ba2f-4149e59d6044] Running
	I0328 01:07:57.192210 1130949 system_pods.go:89] "kube-controller-manager-embed-certs-808809" [b3350ff9-7abb-407e-939c-fcde61186c17] Running
	I0328 01:07:57.192214 1130949 system_pods.go:89] "kube-proxy-tjbhs" [cdb30ca1-5165-4e24-888a-df79af7987d0] Running
	I0328 01:07:57.192218 1130949 system_pods.go:89] "kube-scheduler-embed-certs-808809" [5b52a22d-6ffb-468b-b7b9-8f89b0dee3b8] Running
	I0328 01:07:57.192225 1130949 system_pods.go:89] "metrics-server-57f55c9bc5-bqbfl" [8434fd7d-838b-4cf2-96a3-e4d613633871] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:07:57.192230 1130949 system_pods.go:89] "storage-provisioner" [20c1951e-7da8-4025-bbcf-2da60f87f3ab] Running
	I0328 01:07:57.192239 1130949 system_pods.go:126] duration metric: took 203.942878ms to wait for k8s-apps to be running ...
	I0328 01:07:57.192249 1130949 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:07:57.192301 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:07:57.209840 1130949 system_svc.go:56] duration metric: took 17.576605ms WaitForService to wait for kubelet
	I0328 01:07:57.209883 1130949 kubeadm.go:576] duration metric: took 4.163683877s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:07:57.209918 1130949 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:07:57.388321 1130949 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:07:57.388347 1130949 node_conditions.go:123] node cpu capacity is 2
	I0328 01:07:57.388357 1130949 node_conditions.go:105] duration metric: took 178.433633ms to run NodePressure ...
	I0328 01:07:57.388370 1130949 start.go:240] waiting for startup goroutines ...
	I0328 01:07:57.388377 1130949 start.go:245] waiting for cluster config update ...
	I0328 01:07:57.388387 1130949 start.go:254] writing updated cluster config ...
	I0328 01:07:57.388784 1130949 ssh_runner.go:195] Run: rm -f paused
	I0328 01:07:57.446699 1130949 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0328 01:07:57.448951 1130949 out.go:177] * Done! kubectl is now configured to use "embed-certs-808809" cluster and "default" namespace by default
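The api_server.go lines above poll the apiserver /healthz endpoint until it answers 200 with "ok". A minimal standalone sketch of that kind of probe, assuming a self-signed apiserver certificate (hence the skipped TLS verification) and using the address from the log, could look like:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz polls an apiserver /healthz endpoint until it returns
// HTTP 200 with body "ok", or the deadline expires.
func probeHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver certificate is not trusted by this host,
			// so verification is skipped for the probe only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	// Address taken from the log above; adjust for another cluster.
	if err := probeHealthz("https://192.168.72.210:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}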
	I0328 01:07:56.373123 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:58.872454 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
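The pod_ready.go lines above repeatedly check whether the metrics-server pod has reached the Ready condition, giving up after 4m0s. A rough client-go equivalent of that loop (standard client-go packages; the kubeconfig path and timeout are placeholders, and the pod name will differ per run) might be:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout expires.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name taken from the log above; it changes with every deployment hash.
	fmt.Println(waitPodReady(cs, "kube-system", "metrics-server-569cc877fc-cvnrj", 4*time.Minute))
}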
	I0328 01:08:04.023273 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:08:04.023535 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:08:01.372711 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:03.877734 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:06.374031 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:07.366164 1130827 pod_ready.go:81] duration metric: took 4m0.000887668s for pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace to be "Ready" ...
	E0328 01:08:07.366245 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0328 01:08:07.366271 1130827 pod_ready.go:38] duration metric: took 4m7.906522585s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:08:07.366301 1130827 kubeadm.go:591] duration metric: took 4m15.27169704s to restartPrimaryControlPlane
	W0328 01:08:07.366368 1130827 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:08:07.366406 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
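The ssh_runner line above shells out to kubeadm reset with the per-version binaries directory prepended to PATH. Run locally rather than over SSH, the same invocation reduces to a sketch like the following (the command string is copied from the log; adjust the binaries path for the Kubernetes version in use):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Equivalent of the logged command, executed through bash so the
	// embedded env assignment and quoting behave the same way.
	cmd := exec.Command("/bin/bash", "-c",
		`sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force`)
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("kubeadm reset failed:", err)
	}
}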
	I0328 01:08:16.281280 1131600 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.245904746s)
	I0328 01:08:16.281365 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:08:16.298463 1131600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:08:16.310406 1131600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:08:16.321387 1131600 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:08:16.321415 1131600 kubeadm.go:156] found existing configuration files:
	
	I0328 01:08:16.321475 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0328 01:08:16.331965 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:08:16.332033 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:08:16.343030 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0328 01:08:16.353193 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:08:16.353254 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:08:16.363865 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0328 01:08:16.374276 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:08:16.374346 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:08:16.385300 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0328 01:08:16.396118 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:08:16.396181 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:08:16.406896 1131600 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:08:16.626615 1131600 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:08:24.024091 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:08:24.024388 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:08:25.420974 1131600 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0328 01:08:25.421059 1131600 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:08:25.421154 1131600 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:08:25.421300 1131600 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:08:25.421547 1131600 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:08:25.421649 1131600 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:08:25.423435 1131600 out.go:204]   - Generating certificates and keys ...
	I0328 01:08:25.423549 1131600 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:08:25.423630 1131600 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:08:25.423749 1131600 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:08:25.423844 1131600 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:08:25.423956 1131600 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:08:25.424058 1131600 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:08:25.424166 1131600 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:08:25.424260 1131600 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:08:25.424375 1131600 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:08:25.424489 1131600 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:08:25.424552 1131600 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:08:25.424642 1131600 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:08:25.424700 1131600 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:08:25.424765 1131600 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:08:25.424832 1131600 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:08:25.424920 1131600 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:08:25.424982 1131600 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:08:25.425106 1131600 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:08:25.425207 1131600 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:08:25.426863 1131600 out.go:204]   - Booting up control plane ...
	I0328 01:08:25.427001 1131600 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:08:25.427108 1131600 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:08:25.427205 1131600 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:08:25.427327 1131600 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:08:25.427431 1131600 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:08:25.427491 1131600 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:08:25.427686 1131600 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:08:25.427784 1131600 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003000 seconds
	I0328 01:08:25.427897 1131600 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 01:08:25.428032 1131600 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 01:08:25.428109 1131600 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 01:08:25.428325 1131600 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-283961 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 01:08:25.428408 1131600 kubeadm.go:309] [bootstrap-token] Using token: g6jusr.8nbqw788gjbu8fwz
	I0328 01:08:25.430595 1131600 out.go:204]   - Configuring RBAC rules ...
	I0328 01:08:25.430734 1131600 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 01:08:25.430837 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 01:08:25.430981 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 01:08:25.431163 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 01:08:25.431357 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 01:08:25.431481 1131600 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 01:08:25.431670 1131600 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 01:08:25.431726 1131600 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 01:08:25.431767 1131600 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 01:08:25.431774 1131600 kubeadm.go:309] 
	I0328 01:08:25.431819 1131600 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 01:08:25.431829 1131600 kubeadm.go:309] 
	I0328 01:08:25.431893 1131600 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 01:08:25.431900 1131600 kubeadm.go:309] 
	I0328 01:08:25.431934 1131600 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 01:08:25.432028 1131600 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 01:08:25.432089 1131600 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 01:08:25.432114 1131600 kubeadm.go:309] 
	I0328 01:08:25.432178 1131600 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 01:08:25.432186 1131600 kubeadm.go:309] 
	I0328 01:08:25.432245 1131600 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 01:08:25.432255 1131600 kubeadm.go:309] 
	I0328 01:08:25.432342 1131600 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 01:08:25.432454 1131600 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 01:08:25.432566 1131600 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 01:08:25.432576 1131600 kubeadm.go:309] 
	I0328 01:08:25.432719 1131600 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 01:08:25.432812 1131600 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 01:08:25.432825 1131600 kubeadm.go:309] 
	I0328 01:08:25.432914 1131600 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token g6jusr.8nbqw788gjbu8fwz \
	I0328 01:08:25.433018 1131600 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 \
	I0328 01:08:25.433052 1131600 kubeadm.go:309] 	--control-plane 
	I0328 01:08:25.433058 1131600 kubeadm.go:309] 
	I0328 01:08:25.433135 1131600 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 01:08:25.433143 1131600 kubeadm.go:309] 
	I0328 01:08:25.433222 1131600 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token g6jusr.8nbqw788gjbu8fwz \
	I0328 01:08:25.433318 1131600 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 
	I0328 01:08:25.433337 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:08:25.433346 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:08:25.434943 1131600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:08:25.436103 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:08:25.483149 1131600 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
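Configuring the bridge CNI here amounts to writing a conflist into /etc/cni/net.d (the 457-byte file scp'd above). The exact contents minikube generates are not shown in the log; a representative bridge + portmap conflist, written with plain os calls, could look like this:

package main

import (
	"fmt"
	"os"
)

// A representative bridge CNI configuration; the real file minikube writes
// may differ in name, subnet, and plugin options.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println(err)
	}
}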
	I0328 01:08:25.508422 1131600 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 01:08:25.508514 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:25.508518 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-283961 minikube.k8s.io/updated_at=2024_03_28T01_08_25_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=default-k8s-diff-port-283961 minikube.k8s.io/primary=true
	I0328 01:08:25.537955 1131600 ops.go:34] apiserver oom_adj: -16
	I0328 01:08:25.738462 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:26.239473 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:26.739478 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:27.238883 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:27.738830 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:28.239281 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:28.738643 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:29.238703 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:29.739025 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:30.239127 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:30.739473 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:31.239461 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:31.739480 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:32.239525 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:32.738543 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:33.239468 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:33.739475 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:34.238558 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:34.739550 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:35.239400 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:35.738766 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:36.239384 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:36.738797 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:37.238736 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:37.739543 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:37.850963 1131600 kubeadm.go:1107] duration metric: took 12.342521507s to wait for elevateKubeSystemPrivileges
	W0328 01:08:37.851011 1131600 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 01:08:37.851024 1131600 kubeadm.go:393] duration metric: took 5m17.339661641s to StartCluster
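The burst of `kubectl get sa default` runs above is a ~500ms retry loop: minikube waits for the default service account to exist before granting kube-system its cluster-admin binding. A minimal version of that wait, shelling out to the same kubectl and kubeconfig paths seen in the log, might be:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitDefaultSA retries `kubectl get sa default` until it succeeds or times out,
// matching the polling cadence visible in the log above.
func waitDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	err := waitDefaultSA(
		"/var/lib/minikube/binaries/v1.29.3/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute) // timeout chosen for illustration
	fmt.Println(err)
}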
	I0328 01:08:37.851048 1131600 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:08:37.851164 1131600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:08:37.853862 1131600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:08:37.854264 1131600 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 01:08:37.856170 1131600 out.go:177] * Verifying Kubernetes components...
	I0328 01:08:37.854341 1131600 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 01:08:37.854447 1131600 config.go:182] Loaded profile config "default-k8s-diff-port-283961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:08:37.857860 1131600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:08:37.857864 1131600 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-283961"
	I0328 01:08:37.857878 1131600 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-283961"
	I0328 01:08:37.857885 1131600 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-283961"
	I0328 01:08:37.857909 1131600 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-283961"
	I0328 01:08:37.857912 1131600 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-283961"
	W0328 01:08:37.857923 1131600 addons.go:243] addon storage-provisioner should already be in state true
	I0328 01:08:37.857928 1131600 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-283961"
	W0328 01:08:37.857941 1131600 addons.go:243] addon metrics-server should already be in state true
	I0328 01:08:37.857970 1131600 host.go:66] Checking if "default-k8s-diff-port-283961" exists ...
	I0328 01:08:37.857983 1131600 host.go:66] Checking if "default-k8s-diff-port-283961" exists ...
	I0328 01:08:37.858330 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.858363 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.858403 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.858436 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.858335 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.858509 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.881197 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32799
	I0328 01:08:37.881230 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35101
	I0328 01:08:37.881244 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0328 01:08:37.881857 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.881882 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.882021 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.882460 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.882482 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.882523 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.882540 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.882585 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.882601 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.882934 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.882992 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.883007 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.883239 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.883592 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.883620 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.883625 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.883644 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.887335 1131600 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-283961"
	W0328 01:08:37.887359 1131600 addons.go:243] addon default-storageclass should already be in state true
	I0328 01:08:37.887390 1131600 host.go:66] Checking if "default-k8s-diff-port-283961" exists ...
	I0328 01:08:37.887745 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.887779 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.901416 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37383
	I0328 01:08:37.901909 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.902530 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.902559 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.902967 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.903211 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.904529 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36633
	I0328 01:08:37.905034 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.905268 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:08:37.907486 1131600 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:08:37.905802 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.909062 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.909180 1131600 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:08:37.909196 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 01:08:37.909218 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:08:37.909555 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.909794 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.911251 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33539
	I0328 01:08:37.911845 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.911995 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:08:37.913838 1131600 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 01:08:37.912457 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.913039 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.913804 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:08:37.915256 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.915268 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:08:37.915288 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 01:08:37.915297 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.915303 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 01:08:37.915321 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:08:37.915492 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:08:37.915674 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:08:37.915894 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:08:37.916689 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.917364 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.917410 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.918302 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.918651 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:08:37.918678 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.918944 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:08:37.919117 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:08:37.919267 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:08:37.919386 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:08:37.935233 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45659
	I0328 01:08:37.935750 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.936283 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.936301 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.936691 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.936872 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.938736 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:08:37.939016 1131600 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 01:08:37.939042 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 01:08:37.939065 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:08:37.941653 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.941967 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:08:37.941991 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.942199 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:08:37.942405 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:08:37.942575 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:08:37.942761 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
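Each addon install above opens its own SSH client to the node (sshutil.go:53) using the per-machine id_rsa key. Stripped of the libmachine plumbing, establishing such a client with golang.org/x/crypto/ssh looks roughly like this; the key path, user, and address are taken from the log, and host-key checking is skipped purely because the target is a throwaway test VM:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a disposable test VM only
	}
	client, err := ssh.Dial("tcp", "192.168.39.224:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Example command, mirroring the kubelet checks that follow in the log.
	out, err := session.CombinedOutput("sudo systemctl is-active kubelet")
	fmt.Println(string(out), err)
}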
	I0328 01:08:38.109817 1131600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:08:38.134996 1131600 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-283961" to be "Ready" ...
	I0328 01:08:38.158252 1131600 node_ready.go:49] node "default-k8s-diff-port-283961" has status "Ready":"True"
	I0328 01:08:38.158286 1131600 node_ready.go:38] duration metric: took 23.249221ms for node "default-k8s-diff-port-283961" to be "Ready" ...
	I0328 01:08:38.158305 1131600 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:08:38.170391 1131600 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-gdv5x" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:38.277223 1131600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:08:38.299923 1131600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 01:08:38.300686 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 01:08:38.300707 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 01:08:38.355800 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 01:08:38.355837 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 01:08:38.464742 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:08:38.464769 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 01:08:38.542696 1131600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:08:39.644116 1131600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.344141889s)
	I0328 01:08:39.644184 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644189 1131600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.366934481s)
	I0328 01:08:39.644197 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644210 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644219 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644620 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.644644 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.644654 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644664 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644846 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.644865 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.644890 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644905 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644987 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.645004 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.645154 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.645171 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.708104 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.708143 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.708543 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.708567 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.739487 1131600 pod_ready.go:92] pod "coredns-76f75df574-gdv5x" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.739515 1131600 pod_ready.go:81] duration metric: took 1.569088177s for pod "coredns-76f75df574-gdv5x" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.739526 1131600 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-qzcfp" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.797314 1131600 pod_ready.go:92] pod "coredns-76f75df574-qzcfp" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.797347 1131600 pod_ready.go:81] duration metric: took 57.813218ms for pod "coredns-76f75df574-qzcfp" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.797366 1131600 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.830784 1131600 pod_ready.go:92] pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.830865 1131600 pod_ready.go:81] duration metric: took 33.488753ms for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.830886 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.852459 1131600 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.852489 1131600 pod_ready.go:81] duration metric: took 21.594748ms for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.852501 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.862630 1131600 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.862658 1131600 pod_ready.go:81] duration metric: took 10.149867ms for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.862674 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-js7j2" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.893124 1131600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.350363727s)
	I0328 01:08:39.893191 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.893216 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.893559 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Closing plugin on server side
	I0328 01:08:39.893568 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.893617 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.893634 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.893646 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.893985 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Closing plugin on server side
	I0328 01:08:39.893985 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.894013 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.894031 1131600 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-283961"
	I0328 01:08:39.896978 1131600 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0328 01:08:39.898636 1131600 addons.go:505] duration metric: took 2.044292782s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0328 01:08:40.138962 1131600 pod_ready.go:92] pod "kube-proxy-js7j2" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:40.138994 1131600 pod_ready.go:81] duration metric: took 276.313147ms for pod "kube-proxy-js7j2" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:40.139006 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:40.538892 1131600 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:40.538917 1131600 pod_ready.go:81] duration metric: took 399.903327ms for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:40.538925 1131600 pod_ready.go:38] duration metric: took 2.380606168s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:08:40.538943 1131600 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:08:40.539009 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:08:40.561639 1131600 api_server.go:72] duration metric: took 2.707321816s to wait for apiserver process to appear ...
	I0328 01:08:40.561681 1131600 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:08:40.561709 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:08:40.568521 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 200:
	ok
	I0328 01:08:40.570016 1131600 api_server.go:141] control plane version: v1.29.3
	I0328 01:08:40.570060 1131600 api_server.go:131] duration metric: took 8.369036ms to wait for apiserver health ...
	I0328 01:08:40.570071 1131600 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:08:39.696094 1130827 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.32965227s)
	I0328 01:08:39.696193 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:08:39.717556 1130827 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:08:39.730434 1130827 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:08:39.746521 1130827 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:08:39.746567 1130827 kubeadm.go:156] found existing configuration files:
	
	I0328 01:08:39.746644 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:08:39.758252 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:08:39.758352 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:08:39.771929 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:08:39.785312 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:08:39.785400 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:08:39.800685 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:08:39.814982 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:08:39.815073 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:08:39.828804 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:08:39.841984 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:08:39.842074 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:08:39.854502 1130827 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:08:40.089742 1130827 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:08:40.742900 1131600 system_pods.go:59] 9 kube-system pods found
	I0328 01:08:40.742938 1131600 system_pods.go:61] "coredns-76f75df574-gdv5x" [5b4b835c-ae9d-4eff-ab37-6ccb7e36a748] Running
	I0328 01:08:40.742945 1131600 system_pods.go:61] "coredns-76f75df574-qzcfp" [8e7bfa94-f249-4f7a-be7b-9a615810c956] Running
	I0328 01:08:40.742951 1131600 system_pods.go:61] "etcd-default-k8s-diff-port-283961" [eb26a2e2-882b-4bc0-9ca1-7fe88b1c6c7e] Running
	I0328 01:08:40.742958 1131600 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-283961" [1c31e7a3-507e-43ea-96cf-1d182a1e0875] Running
	I0328 01:08:40.742964 1131600 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-283961" [5bbc6650-b37c-4b10-bc6e-cceb214f8881] Running
	I0328 01:08:40.742968 1131600 system_pods.go:61] "kube-proxy-js7j2" [1f31a6ee-9417-4f4a-ba0b-fdbab6a9169d] Running
	I0328 01:08:40.742972 1131600 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-283961" [3a5691f7-d6d4-45e7-aea7-94fe1cfa541a] Running
	I0328 01:08:40.742980 1131600 system_pods.go:61] "metrics-server-57f55c9bc5-gkv67" [7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:08:40.742986 1131600 system_pods.go:61] "storage-provisioner" [cb80efe2-521f-45d5-84e7-f6dc216b4c6d] Running
	I0328 01:08:40.742998 1131600 system_pods.go:74] duration metric: took 172.918886ms to wait for pod list to return data ...
	I0328 01:08:40.743010 1131600 default_sa.go:34] waiting for default service account to be created ...
	I0328 01:08:40.939208 1131600 default_sa.go:45] found service account: "default"
	I0328 01:08:40.939255 1131600 default_sa.go:55] duration metric: took 196.220048ms for default service account to be created ...
	I0328 01:08:40.939266 1131600 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 01:08:41.144986 1131600 system_pods.go:86] 9 kube-system pods found
	I0328 01:08:41.145023 1131600 system_pods.go:89] "coredns-76f75df574-gdv5x" [5b4b835c-ae9d-4eff-ab37-6ccb7e36a748] Running
	I0328 01:08:41.145030 1131600 system_pods.go:89] "coredns-76f75df574-qzcfp" [8e7bfa94-f249-4f7a-be7b-9a615810c956] Running
	I0328 01:08:41.145034 1131600 system_pods.go:89] "etcd-default-k8s-diff-port-283961" [eb26a2e2-882b-4bc0-9ca1-7fe88b1c6c7e] Running
	I0328 01:08:41.145039 1131600 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-283961" [1c31e7a3-507e-43ea-96cf-1d182a1e0875] Running
	I0328 01:08:41.145043 1131600 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-283961" [5bbc6650-b37c-4b10-bc6e-cceb214f8881] Running
	I0328 01:08:41.145047 1131600 system_pods.go:89] "kube-proxy-js7j2" [1f31a6ee-9417-4f4a-ba0b-fdbab6a9169d] Running
	I0328 01:08:41.145051 1131600 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-283961" [3a5691f7-d6d4-45e7-aea7-94fe1cfa541a] Running
	I0328 01:08:41.145058 1131600 system_pods.go:89] "metrics-server-57f55c9bc5-gkv67" [7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:08:41.145062 1131600 system_pods.go:89] "storage-provisioner" [cb80efe2-521f-45d5-84e7-f6dc216b4c6d] Running
	I0328 01:08:41.145072 1131600 system_pods.go:126] duration metric: took 205.800485ms to wait for k8s-apps to be running ...
	I0328 01:08:41.145083 1131600 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:08:41.145131 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:08:41.163220 1131600 system_svc.go:56] duration metric: took 18.120266ms WaitForService to wait for kubelet
	I0328 01:08:41.163255 1131600 kubeadm.go:576] duration metric: took 3.308947131s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:08:41.163280 1131600 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:08:41.339219 1131600 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:08:41.339247 1131600 node_conditions.go:123] node cpu capacity is 2
	I0328 01:08:41.339292 1131600 node_conditions.go:105] duration metric: took 176.004328ms to run NodePressure ...
	I0328 01:08:41.339306 1131600 start.go:240] waiting for startup goroutines ...
	I0328 01:08:41.339317 1131600 start.go:245] waiting for cluster config update ...
	I0328 01:08:41.339334 1131600 start.go:254] writing updated cluster config ...
	I0328 01:08:41.339656 1131600 ssh_runner.go:195] Run: rm -f paused
	I0328 01:08:41.399111 1131600 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0328 01:08:41.401360 1131600 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-283961" cluster and "default" namespace by default
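A minimal sketch for spot-checking the cluster that the block above reports as ready, assuming kubectl is on PATH and the context name matches the "Done!" line; these commands are illustrative and were not run by the test:
    kubectl config use-context default-k8s-diff-port-283961
    kubectl get nodes -o wide
    kubectl -n kube-system get pods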
	I0328 01:08:49.653091 1130827 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-beta.0
	I0328 01:08:49.653205 1130827 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:08:49.653327 1130827 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:08:49.653468 1130827 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:08:49.653576 1130827 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:08:49.653666 1130827 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:08:49.656419 1130827 out.go:204]   - Generating certificates and keys ...
	I0328 01:08:49.656503 1130827 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:08:49.656583 1130827 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:08:49.656669 1130827 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:08:49.656775 1130827 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:08:49.656903 1130827 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:08:49.656973 1130827 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:08:49.657057 1130827 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:08:49.657138 1130827 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:08:49.657246 1130827 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:08:49.657362 1130827 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:08:49.657415 1130827 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:08:49.657510 1130827 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:08:49.657601 1130827 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:08:49.657713 1130827 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:08:49.657811 1130827 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:08:49.657900 1130827 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:08:49.657980 1130827 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:08:49.658074 1130827 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:08:49.658160 1130827 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:08:49.659588 1130827 out.go:204]   - Booting up control plane ...
	I0328 01:08:49.659669 1130827 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:08:49.659771 1130827 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:08:49.659855 1130827 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:08:49.659962 1130827 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:08:49.660075 1130827 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:08:49.660139 1130827 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:08:49.660309 1130827 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0328 01:08:49.660426 1130827 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0328 01:08:49.660518 1130827 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.594495ms
	I0328 01:08:49.660610 1130827 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0328 01:08:49.660691 1130827 kubeadm.go:309] [api-check] The API server is healthy after 5.502996727s
	I0328 01:08:49.660830 1130827 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 01:08:49.660975 1130827 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 01:08:49.661028 1130827 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 01:08:49.661198 1130827 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-248059 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 01:08:49.661283 1130827 kubeadm.go:309] [bootstrap-token] Using token: 4jnfa0.q3dre6ogqbxtw8j0
	I0328 01:08:49.662907 1130827 out.go:204]   - Configuring RBAC rules ...
	I0328 01:08:49.663014 1130827 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 01:08:49.663090 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 01:08:49.663239 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 01:08:49.663379 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 01:08:49.663484 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 01:08:49.663576 1130827 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 01:08:49.663688 1130827 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 01:08:49.663750 1130827 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 01:08:49.663811 1130827 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 01:08:49.663820 1130827 kubeadm.go:309] 
	I0328 01:08:49.663871 1130827 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 01:08:49.663877 1130827 kubeadm.go:309] 
	I0328 01:08:49.663976 1130827 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 01:08:49.663984 1130827 kubeadm.go:309] 
	I0328 01:08:49.664004 1130827 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 01:08:49.664080 1130827 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 01:08:49.664144 1130827 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 01:08:49.664151 1130827 kubeadm.go:309] 
	I0328 01:08:49.664202 1130827 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 01:08:49.664209 1130827 kubeadm.go:309] 
	I0328 01:08:49.664246 1130827 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 01:08:49.664252 1130827 kubeadm.go:309] 
	I0328 01:08:49.664301 1130827 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 01:08:49.664370 1130827 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 01:08:49.664436 1130827 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 01:08:49.664444 1130827 kubeadm.go:309] 
	I0328 01:08:49.664515 1130827 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 01:08:49.664600 1130827 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 01:08:49.664607 1130827 kubeadm.go:309] 
	I0328 01:08:49.664678 1130827 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4jnfa0.q3dre6ogqbxtw8j0 \
	I0328 01:08:49.664764 1130827 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 \
	I0328 01:08:49.664783 1130827 kubeadm.go:309] 	--control-plane 
	I0328 01:08:49.664789 1130827 kubeadm.go:309] 
	I0328 01:08:49.664856 1130827 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 01:08:49.664863 1130827 kubeadm.go:309] 
	I0328 01:08:49.664938 1130827 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4jnfa0.q3dre6ogqbxtw8j0 \
	I0328 01:08:49.665073 1130827 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 
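A minimal sketch of verifying the freshly initialized no-preload-248059 control plane directly on the node, assuming the same binaries directory and kubeconfig paths that the commands below in this log use; shown only as an illustration:
    sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes
    sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get pods
    sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubeadm token list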
	I0328 01:08:49.665117 1130827 cni.go:84] Creating CNI manager for ""
	I0328 01:08:49.665130 1130827 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:08:49.667556 1130827 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:08:49.668776 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:08:49.680262 1130827 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
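The 457-byte conflist copied above is not reproduced in the log; the following is a minimal sketch of a typical bridge CNI configuration of this kind, with the subnet and field values chosen purely for illustration (the actual file written by minikube may differ):
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF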
	I0328 01:08:49.701490 1130827 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 01:08:49.701557 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:49.701606 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-248059 minikube.k8s.io/updated_at=2024_03_28T01_08_49_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=no-preload-248059 minikube.k8s.io/primary=true
	I0328 01:08:49.734009 1130827 ops.go:34] apiserver oom_adj: -16
	I0328 01:08:49.901866 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:50.402635 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:50.902480 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:51.402417 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:51.902253 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:52.402411 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:52.901926 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:53.402394 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:53.902738 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:54.401878 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:54.901920 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:55.401878 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:55.902140 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:56.402863 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:56.901970 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:57.402088 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:57.901869 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:58.402056 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:58.902333 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:59.402753 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:59.902930 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:00.402623 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:00.901863 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:01.402264 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:01.902054 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:02.402212 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:02.503310 1130827 kubeadm.go:1107] duration metric: took 12.80181586s to wait for elevateKubeSystemPrivileges
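The repeated "get sa default" calls above poll until the default service account exists (the elevateKubeSystemPrivileges wait). A minimal sketch for checking the end state by hand; the clusterrolebinding name comes from the create command earlier in this log, and kubectl access to the cluster is assumed:
    kubectl get serviceaccount default
    kubectl get clusterrolebinding minikube-rbac -o wide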
	W0328 01:09:02.503352 1130827 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 01:09:02.503362 1130827 kubeadm.go:393] duration metric: took 5m10.46697508s to StartCluster
	I0328 01:09:02.503380 1130827 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:09:02.503482 1130827 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:09:02.505909 1130827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:09:02.506302 1130827 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 01:09:02.508103 1130827 out.go:177] * Verifying Kubernetes components...
	I0328 01:09:02.506385 1130827 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 01:09:02.506502 1130827 config.go:182] Loaded profile config "no-preload-248059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0328 01:09:02.509509 1130827 addons.go:69] Setting default-storageclass=true in profile "no-preload-248059"
	I0328 01:09:02.509519 1130827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:09:02.509517 1130827 addons.go:69] Setting metrics-server=true in profile "no-preload-248059"
	I0328 01:09:02.509542 1130827 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-248059"
	I0328 01:09:02.509559 1130827 addons.go:234] Setting addon metrics-server=true in "no-preload-248059"
	W0328 01:09:02.509580 1130827 addons.go:243] addon metrics-server should already be in state true
	I0328 01:09:02.509509 1130827 addons.go:69] Setting storage-provisioner=true in profile "no-preload-248059"
	I0328 01:09:02.509623 1130827 host.go:66] Checking if "no-preload-248059" exists ...
	I0328 01:09:02.509636 1130827 addons.go:234] Setting addon storage-provisioner=true in "no-preload-248059"
	W0328 01:09:02.509690 1130827 addons.go:243] addon storage-provisioner should already be in state true
	I0328 01:09:02.509729 1130827 host.go:66] Checking if "no-preload-248059" exists ...
	I0328 01:09:02.510005 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.510009 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.510049 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.510050 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.510053 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.510085 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.528082 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42381
	I0328 01:09:02.528124 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43929
	I0328 01:09:02.528714 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.528738 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.529081 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34573
	I0328 01:09:02.529378 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.529397 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.529444 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.529464 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.529465 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.529791 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.529849 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.529948 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.529965 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.529950 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.530389 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.530437 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.530472 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.531004 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.531058 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.534108 1130827 addons.go:234] Setting addon default-storageclass=true in "no-preload-248059"
	W0328 01:09:02.534134 1130827 addons.go:243] addon default-storageclass should already be in state true
	I0328 01:09:02.534173 1130827 host.go:66] Checking if "no-preload-248059" exists ...
	I0328 01:09:02.534563 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.534592 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.546812 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0328 01:09:02.547478 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.547999 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.548031 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.548370 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.548616 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.549185 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37367
	I0328 01:09:02.549663 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.550365 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.550390 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.550772 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.550787 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:09:02.550977 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.553075 1130827 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 01:09:02.554750 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 01:09:02.554769 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 01:09:02.552577 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:09:02.554788 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:09:02.553550 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45153
	I0328 01:09:02.556534 1130827 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:09:02.555339 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.558480 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.563734 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:09:02.563773 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.563823 1130827 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:09:02.563846 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 01:09:02.563876 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:09:02.564584 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.564604 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.564633 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:09:02.564933 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:09:02.565025 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.565458 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:09:02.565593 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.565617 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.565745 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:09:02.569766 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.570083 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:09:02.570104 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.570413 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:09:02.570778 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:09:02.570975 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:09:02.571142 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:09:02.589503 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41085
	I0328 01:09:02.590061 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.590641 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.590661 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.591065 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.591310 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.593270 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:09:02.593665 1130827 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 01:09:02.593696 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 01:09:02.593717 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:09:02.596796 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.597270 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:09:02.597298 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.597460 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:09:02.597637 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:09:02.597807 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:09:02.597937 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:09:02.705837 1130827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:09:02.727955 1130827 node_ready.go:35] waiting up to 6m0s for node "no-preload-248059" to be "Ready" ...
	I0328 01:09:02.737291 1130827 node_ready.go:49] node "no-preload-248059" has status "Ready":"True"
	I0328 01:09:02.737325 1130827 node_ready.go:38] duration metric: took 9.337953ms for node "no-preload-248059" to be "Ready" ...
	I0328 01:09:02.737338 1130827 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:09:02.741939 1130827 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.749157 1130827 pod_ready.go:92] pod "etcd-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.749192 1130827 pod_ready.go:81] duration metric: took 7.224004ms for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.749205 1130827 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.755106 1130827 pod_ready.go:92] pod "kube-apiserver-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.755132 1130827 pod_ready.go:81] duration metric: took 5.919446ms for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.755144 1130827 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.761123 1130827 pod_ready.go:92] pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.761171 1130827 pod_ready.go:81] duration metric: took 6.017877ms for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.761187 1130827 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.773958 1130827 pod_ready.go:92] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.773983 1130827 pod_ready.go:81] duration metric: took 12.787671ms for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.773991 1130827 pod_ready.go:38] duration metric: took 36.637128ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
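A minimal sketch of equivalent readiness checks with plain kubectl, assuming the no-preload-248059 context and the k8s-app/component labels listed in the wait above; illustrative only:
    kubectl wait --for=condition=Ready node/no-preload-248059 --timeout=6m
    kubectl -n kube-system wait --for=condition=Ready pod -l component=etcd --timeout=6m
    kubectl -n kube-system get pods -l k8s-app=kube-dns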
	I0328 01:09:02.774008 1130827 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:09:02.774068 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:09:02.794342 1130827 api_server.go:72] duration metric: took 287.989042ms to wait for apiserver process to appear ...
	I0328 01:09:02.794376 1130827 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:09:02.794408 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:09:02.826957 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
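A minimal sketch of the same health probe by hand against the endpoint shown above; -k skips certificate verification, which is acceptable only for this kind of local check:
    curl -k https://192.168.61.107:8443/healthz
    curl -k "https://192.168.61.107:8443/readyz?verbose"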
	I0328 01:09:02.830377 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 01:09:02.830399 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 01:09:02.837250 1130827 api_server.go:141] control plane version: v1.30.0-beta.0
	I0328 01:09:02.837284 1130827 api_server.go:131] duration metric: took 42.898933ms to wait for apiserver health ...
	I0328 01:09:02.837295 1130827 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:09:02.838515 1130827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:09:02.865482 1130827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 01:09:02.880510 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 01:09:02.880544 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 01:09:02.933895 1130827 system_pods.go:59] 4 kube-system pods found
	I0328 01:09:02.933958 1130827 system_pods.go:61] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:02.933967 1130827 system_pods.go:61] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:02.933973 1130827 system_pods.go:61] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:02.933977 1130827 system_pods.go:61] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:02.933984 1130827 system_pods.go:74] duration metric: took 96.68223ms to wait for pod list to return data ...
	I0328 01:09:02.933994 1130827 default_sa.go:34] waiting for default service account to be created ...
	I0328 01:09:02.939507 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:09:02.939538 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 01:09:02.994042 1130827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:09:03.160934 1130827 default_sa.go:45] found service account: "default"
	I0328 01:09:03.160971 1130827 default_sa.go:55] duration metric: took 226.968222ms for default service account to be created ...
	I0328 01:09:03.160982 1130827 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 01:09:03.396511 1130827 system_pods.go:86] 7 kube-system pods found
	I0328 01:09:03.396549 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending
	I0328 01:09:03.396554 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending
	I0328 01:09:03.396558 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:03.396562 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:03.396567 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:03.396575 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:09:03.396580 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:03.396601 1130827 retry.go:31] will retry after 288.008379ms: missing components: kube-dns, kube-proxy
	I0328 01:09:03.697645 1130827 system_pods.go:86] 7 kube-system pods found
	I0328 01:09:03.697688 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:03.697697 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:03.697704 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:03.697710 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:03.697720 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:03.697726 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:09:03.697730 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:03.697750 1130827 retry.go:31] will retry after 356.016468ms: missing components: kube-dns, kube-proxy
	I0328 01:09:03.962535 1130827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.097008499s)
	I0328 01:09:03.962614 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.962633 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.963093 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.963119 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.963129 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.963139 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.963406 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.963424 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.964335 1130827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.125788348s)
	I0328 01:09:03.964375 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.964384 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.964712 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:03.964740 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.964763 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.964776 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.964785 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.965054 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.965125 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.965142 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:04.002303 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:04.002340 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:04.002744 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:04.002766 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:04.062017 1130827 system_pods.go:86] 8 kube-system pods found
	I0328 01:09:04.062096 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.062111 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.062121 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:04.062132 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:04.062158 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:04.062172 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:09:04.062180 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:04.062192 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:09:04.062220 1130827 retry.go:31] will retry after 477.684804ms: missing components: kube-dns, kube-proxy
	I0328 01:09:04.574661 1130827 system_pods.go:86] 9 kube-system pods found
	I0328 01:09:04.574716 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.574728 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.574740 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:04.574748 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:04.574754 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:04.574761 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Running
	I0328 01:09:04.574768 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:04.574778 1130827 system_pods.go:89] "metrics-server-569cc877fc-frc5k" [d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:09:04.574799 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:09:04.574821 1130827 retry.go:31] will retry after 460.13955ms: missing components: kube-dns
	I0328 01:09:04.692708 1130827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.69861394s)
	I0328 01:09:04.692782 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:04.692798 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:04.693323 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:04.693366 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:04.693376 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:04.693384 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:04.693320 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:04.693818 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:04.693865 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:04.693879 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:04.693895 1130827 addons.go:470] Verifying addon metrics-server=true in "no-preload-248059"
	I0328 01:09:04.696310 1130827 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
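A minimal sketch for confirming that the metrics-server addon reported above actually comes up and serves metrics, assuming the deployment and label names implied by the metrics-server-569cc877fc-frc5k pod; illustrative only:
    kubectl -n kube-system rollout status deployment/metrics-server --timeout=2m
    kubectl -n kube-system get pods -l k8s-app=metrics-server
    kubectl top nodes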
	I0328 01:09:04.025791 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:09:04.026055 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:09:04.026065 1131323 kubeadm.go:309] 
	I0328 01:09:04.026124 1131323 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0328 01:09:04.026172 1131323 kubeadm.go:309] 		timed out waiting for the condition
	I0328 01:09:04.026181 1131323 kubeadm.go:309] 
	I0328 01:09:04.026221 1131323 kubeadm.go:309] 	This error is likely caused by:
	I0328 01:09:04.026279 1131323 kubeadm.go:309] 		- The kubelet is not running
	I0328 01:09:04.026401 1131323 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0328 01:09:04.026411 1131323 kubeadm.go:309] 
	I0328 01:09:04.026529 1131323 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0328 01:09:04.026586 1131323 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0328 01:09:04.026632 1131323 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0328 01:09:04.026640 1131323 kubeadm.go:309] 
	I0328 01:09:04.026758 1131323 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0328 01:09:04.026884 1131323 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0328 01:09:04.026902 1131323 kubeadm.go:309] 
	I0328 01:09:04.027061 1131323 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0328 01:09:04.027222 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0328 01:09:04.027335 1131323 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0328 01:09:04.027429 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0328 01:09:04.027537 1131323 kubeadm.go:309] 
	I0328 01:09:04.029027 1131323 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:09:04.029164 1131323 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0328 01:09:04.029284 1131323 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0328 01:09:04.029477 1131323 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
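The failure above comes from the v1.20.0 (old-k8s-version) cluster: the kubelet never answered on port 10248. A minimal sketch of the triage the message itself suggests, run on the affected node; the crictl socket path matches the one printed in the log:
    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 50
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID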
	
	I0328 01:09:04.029545 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:09:04.543275 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:09:04.562572 1131323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:09:04.577013 1131323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:09:04.577040 1131323 kubeadm.go:156] found existing configuration files:
	
	I0328 01:09:04.577102 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:09:04.590795 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:09:04.590885 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:09:04.604227 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:09:04.616720 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:09:04.616818 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:09:04.630095 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:09:04.643166 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:09:04.643259 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:09:04.658084 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:09:04.671786 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:09:04.671874 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:09:04.685852 1131323 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:09:04.779013 1131323 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0328 01:09:04.779113 1131323 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:09:04.964178 1131323 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:09:04.964317 1131323 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:09:04.964463 1131323 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:09:05.181712 1131323 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:09:05.183644 1131323 out.go:204]   - Generating certificates and keys ...
	I0328 01:09:05.183759 1131323 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:09:05.183851 1131323 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:09:05.183962 1131323 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:09:05.184042 1131323 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:09:05.184156 1131323 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:09:05.184244 1131323 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:09:05.184337 1131323 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:09:05.184424 1131323 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:09:05.184535 1131323 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:09:05.184633 1131323 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:09:05.184683 1131323 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:09:05.184758 1131323 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:09:04.698039 1130827 addons.go:505] duration metric: took 2.191652421s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0328 01:09:05.044303 1130827 system_pods.go:86] 9 kube-system pods found
	I0328 01:09:05.044340 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:05.044348 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:05.044354 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:05.044360 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:05.044366 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:05.044369 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Running
	I0328 01:09:05.044373 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:05.044378 1130827 system_pods.go:89] "metrics-server-569cc877fc-frc5k" [d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:09:05.044387 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:09:05.044406 1130827 retry.go:31] will retry after 486.01075ms: missing components: kube-dns
	I0328 01:09:05.539158 1130827 system_pods.go:86] 9 kube-system pods found
	I0328 01:09:05.539204 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Running
	I0328 01:09:05.539213 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Running
	I0328 01:09:05.539219 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:05.539226 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:05.539232 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:05.539238 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Running
	I0328 01:09:05.539244 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:05.539255 1130827 system_pods.go:89] "metrics-server-569cc877fc-frc5k" [d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:09:05.539260 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Running
	I0328 01:09:05.539274 1130827 system_pods.go:126] duration metric: took 2.37828469s to wait for k8s-apps to be running ...
	I0328 01:09:05.539292 1130827 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:09:05.539362 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:09:05.560593 1130827 system_svc.go:56] duration metric: took 21.288819ms WaitForService to wait for kubelet
	I0328 01:09:05.560628 1130827 kubeadm.go:576] duration metric: took 3.054281955s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:09:05.560657 1130827 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:09:05.564453 1130827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:09:05.564489 1130827 node_conditions.go:123] node cpu capacity is 2
	I0328 01:09:05.564502 1130827 node_conditions.go:105] duration metric: took 3.837449ms to run NodePressure ...
	I0328 01:09:05.564517 1130827 start.go:240] waiting for startup goroutines ...
	I0328 01:09:05.564527 1130827 start.go:245] waiting for cluster config update ...
	I0328 01:09:05.564542 1130827 start.go:254] writing updated cluster config ...
	I0328 01:09:05.564843 1130827 ssh_runner.go:195] Run: rm -f paused
	I0328 01:09:05.623218 1130827 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-beta.0 (minor skew: 1)
	I0328 01:09:05.625408 1130827 out.go:177] * Done! kubectl is now configured to use "no-preload-248059" cluster and "default" namespace by default
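	With the no-preload-248059 cluster reported ready above, an illustrative way to confirm the kube-system pods listed in the log are actually Running is a quick kubectl check; the context name comes from the log, the commands themselves are generic:

		# Illustrative verification of the cluster the log just declared ready
		kubectl --context no-preload-248059 get nodes
		kubectl --context no-preload-248059 -n kube-system get pods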
	I0328 01:09:05.587190 1131323 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:09:05.923219 1131323 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:09:06.087945 1131323 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:09:06.245638 1131323 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:09:06.266195 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:09:06.267461 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:09:06.267551 1131323 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:09:06.434155 1131323 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:09:06.436300 1131323 out.go:204]   - Booting up control plane ...
	I0328 01:09:06.436447 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:09:06.446573 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:09:06.447461 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:09:06.448313 1131323 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:09:06.450917 1131323 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:09:46.453199 1131323 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0328 01:09:46.453386 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:09:46.453643 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:09:51.454402 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:09:51.454665 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:10:01.455189 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:10:01.455417 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:10:21.456491 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:10:21.456726 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:11:01.456972 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:11:01.457256 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:11:01.457269 1131323 kubeadm.go:309] 
	I0328 01:11:01.457310 1131323 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0328 01:11:01.457404 1131323 kubeadm.go:309] 		timed out waiting for the condition
	I0328 01:11:01.457441 1131323 kubeadm.go:309] 
	I0328 01:11:01.457492 1131323 kubeadm.go:309] 	This error is likely caused by:
	I0328 01:11:01.457550 1131323 kubeadm.go:309] 		- The kubelet is not running
	I0328 01:11:01.457696 1131323 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0328 01:11:01.457708 1131323 kubeadm.go:309] 
	I0328 01:11:01.457856 1131323 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0328 01:11:01.457906 1131323 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0328 01:11:01.457935 1131323 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0328 01:11:01.457943 1131323 kubeadm.go:309] 
	I0328 01:11:01.458033 1131323 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0328 01:11:01.458139 1131323 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0328 01:11:01.458155 1131323 kubeadm.go:309] 
	I0328 01:11:01.458331 1131323 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0328 01:11:01.458483 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0328 01:11:01.458594 1131323 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0328 01:11:01.458707 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0328 01:11:01.458718 1131323 kubeadm.go:309] 
	I0328 01:11:01.459597 1131323 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:11:01.459737 1131323 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0328 01:11:01.459822 1131323 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0328 01:11:01.459962 1131323 kubeadm.go:393] duration metric: took 7m59.227261729s to StartCluster
	I0328 01:11:01.460023 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:11:01.460167 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:11:01.522644 1131323 cri.go:89] found id: ""
	I0328 01:11:01.522687 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.522700 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:11:01.522710 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:11:01.522782 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:11:01.567898 1131323 cri.go:89] found id: ""
	I0328 01:11:01.567928 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.567937 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:11:01.567945 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:11:01.568005 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:11:01.604782 1131323 cri.go:89] found id: ""
	I0328 01:11:01.604810 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.604819 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:11:01.604825 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:11:01.604935 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:11:01.642875 1131323 cri.go:89] found id: ""
	I0328 01:11:01.642908 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.642920 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:11:01.642929 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:11:01.642993 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:11:01.682186 1131323 cri.go:89] found id: ""
	I0328 01:11:01.682216 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.682223 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:11:01.682241 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:11:01.682312 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:11:01.720654 1131323 cri.go:89] found id: ""
	I0328 01:11:01.720689 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.720697 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:11:01.720704 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:11:01.720759 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:11:01.757340 1131323 cri.go:89] found id: ""
	I0328 01:11:01.757372 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.757383 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:11:01.757392 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:11:01.757462 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:11:01.797426 1131323 cri.go:89] found id: ""
	I0328 01:11:01.797462 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.797473 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:11:01.797488 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:11:01.797506 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:11:01.859582 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:11:01.859623 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:11:01.876027 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:11:01.876073 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:11:01.966513 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:11:01.966539 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:11:01.966557 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:11:02.084853 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:11:02.084894 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0328 01:11:02.127221 1131323 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0328 01:11:02.127288 1131323 out.go:239] * 
	W0328 01:11:02.127417 1131323 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0328 01:11:02.127456 1131323 out.go:239] * 
	W0328 01:11:02.128313 1131323 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 01:11:02.131916 1131323 out.go:177] 
	W0328 01:11:02.133288 1131323 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0328 01:11:02.133351 1131323 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0328 01:11:02.133381 1131323 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0328 01:11:02.134991 1131323 out.go:177] 
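	A sketch of the retry that the suggestion above describes, re-running minikube start with the systemd cgroup driver passed through to the kubelet. The profile name is a placeholder; the version and runtime flags simply mirror this run's v1.20.0 / CRI-O setup:

		# Retry suggested above (sketch; <profile> is a placeholder profile name)
		minikube start -p <profile> \
		  --kubernetes-version=v1.20.0 \
		  --container-runtime=crio \
		  --extra-config=kubelet.cgroup-driver=systemd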
	
	
	==> CRI-O <==
	Mar 28 01:26:23 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:26:23.514218010Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711589183514199417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1b41dcf4-8ba7-40af-8049-fac11675778d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:26:23 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:26:23.514819831Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=71069eea-92ff-4f9f-b39a-f2f651d9714c name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:26:23 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:26:23.514870134Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=71069eea-92ff-4f9f-b39a-f2f651d9714c name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:26:23 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:26:23.515057119Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e424026873582b3cb422868efb139c9493e87ce38c6f5d50d6c75052ba346e03,PodSandboxId:e7a6e9a7eeb6cc208ff629ec60c10d876640cdb9da4bdc52d76c224f8c98904a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711588120120863506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb80efe2-521f-45d5-84e7-f6dc216b4c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bb55d66,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de1711ee6a5dc43bc28c1177f89f91198505babc976460637ce8225259d5a408,PodSandboxId:803dccd16ce63ecad890423782c9d84b6c72ace6ba07f39a49de4bb6749d1736,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588119047022566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-qzcfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e7bfa94-f249-4f7a-be7b-9a615810c956,},Annotations:map[string]string{io.kubernetes.container.hash: 525c88a1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422a905518d5426ce48b860149b0f9588ee1bb14058d9bd1ac78a3ea72037fd9,PodSandboxId:e09bb8fab1a58dddc95cb04ec5a31fc709e4569bb2cde76074684683933a5afe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711588118915220409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-js7j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 1f31a6ee-9417-4f4a-ba0b-fdbab6a9169d,},Annotations:map[string]string{io.kubernetes.container.hash: f8ea6801,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae8bdde5b55d2c17376a80e0d2822b57a3d6af056ea8deac369393e0f38fd42,PodSandboxId:3cfb417abd42b4954bb5c2a4c8b3fdb3ccf29d420bf7165f81ee5ef1e199695c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588118976846562,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-gdv5x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b4b835c-ae9d-4eff-ab37-
6ccb7e36a748,},Annotations:map[string]string{io.kubernetes.container.hash: 5994b028,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a84313d97f96d700a70f3447583b8682711e293b8d7186846062d8b4f3b29f3,PodSandboxId:0eb54a2d43e98c8847b017d2ee21af27cd75d4f8482dedf5f847491ca95bf120,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:171158809910719692
1,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dfb04a92f09d808d7e99d429b5cee4e,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:360c718fc7dc9fd42d5b06bad743933fe575f0f169492bd0d6227e57e740f172,PodSandboxId:a1a715d4cc301d4ea0d63d94d5aa77988679a62407b33116761afb7146a874a1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711588099109294768,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67001339a7063aa3fa376614daa7f54,},Annotations:map[string]string{io.kubernetes.container.hash: a066b342,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a6698011c7da31695bc91fd0cc71cbd5f23ddcb2bb6527a8bc650716e83867,PodSandboxId:40793c655cfd9768e76f2f83a71424a300d0aac057cc2f611bdda09ed1b2a3fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711588098983
618428,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 752d20882748f6f16766053339f66ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a32cb718b54d521eab0e1da343ce520cc70f2542de9d33964fbf54e2bc80a70,PodSandboxId:5433b9300e41b6ec6a7e4015ada8c4c66f270fa2b5ef7513159ae78190f66693,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711588
098881885696,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1de4e3c3c3539f681e560c69accf057,},Annotations:map[string]string{io.kubernetes.container.hash: 56b0e412,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=71069eea-92ff-4f9f-b39a-f2f651d9714c name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:26:23 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:26:23.556399868Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eade27ca-9904-4440-8a4e-96069dae75af name=/runtime.v1.RuntimeService/Version
	Mar 28 01:26:23 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:26:23.556500577Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eade27ca-9904-4440-8a4e-96069dae75af name=/runtime.v1.RuntimeService/Version
	Mar 28 01:26:23 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:26:23.557838670Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=00f0259a-9902-4038-884c-d7ce274b6553 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:26:23 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:26:23.558220507Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711589183558199599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=00f0259a-9902-4038-884c-d7ce274b6553 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:26:23 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:26:23.559004624Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6147034-bcfd-4b41-9843-9a498db79efc name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:26:23 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:26:23.559082850Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6147034-bcfd-4b41-9843-9a498db79efc name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:26:23 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:26:23.559272597Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e424026873582b3cb422868efb139c9493e87ce38c6f5d50d6c75052ba346e03,PodSandboxId:e7a6e9a7eeb6cc208ff629ec60c10d876640cdb9da4bdc52d76c224f8c98904a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711588120120863506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb80efe2-521f-45d5-84e7-f6dc216b4c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bb55d66,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de1711ee6a5dc43bc28c1177f89f91198505babc976460637ce8225259d5a408,PodSandboxId:803dccd16ce63ecad890423782c9d84b6c72ace6ba07f39a49de4bb6749d1736,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588119047022566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-qzcfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e7bfa94-f249-4f7a-be7b-9a615810c956,},Annotations:map[string]string{io.kubernetes.container.hash: 525c88a1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422a905518d5426ce48b860149b0f9588ee1bb14058d9bd1ac78a3ea72037fd9,PodSandboxId:e09bb8fab1a58dddc95cb04ec5a31fc709e4569bb2cde76074684683933a5afe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711588118915220409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-js7j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 1f31a6ee-9417-4f4a-ba0b-fdbab6a9169d,},Annotations:map[string]string{io.kubernetes.container.hash: f8ea6801,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae8bdde5b55d2c17376a80e0d2822b57a3d6af056ea8deac369393e0f38fd42,PodSandboxId:3cfb417abd42b4954bb5c2a4c8b3fdb3ccf29d420bf7165f81ee5ef1e199695c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588118976846562,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-gdv5x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b4b835c-ae9d-4eff-ab37-
6ccb7e36a748,},Annotations:map[string]string{io.kubernetes.container.hash: 5994b028,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a84313d97f96d700a70f3447583b8682711e293b8d7186846062d8b4f3b29f3,PodSandboxId:0eb54a2d43e98c8847b017d2ee21af27cd75d4f8482dedf5f847491ca95bf120,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:171158809910719692
1,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dfb04a92f09d808d7e99d429b5cee4e,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:360c718fc7dc9fd42d5b06bad743933fe575f0f169492bd0d6227e57e740f172,PodSandboxId:a1a715d4cc301d4ea0d63d94d5aa77988679a62407b33116761afb7146a874a1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711588099109294768,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67001339a7063aa3fa376614daa7f54,},Annotations:map[string]string{io.kubernetes.container.hash: a066b342,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a6698011c7da31695bc91fd0cc71cbd5f23ddcb2bb6527a8bc650716e83867,PodSandboxId:40793c655cfd9768e76f2f83a71424a300d0aac057cc2f611bdda09ed1b2a3fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711588098983
618428,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 752d20882748f6f16766053339f66ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a32cb718b54d521eab0e1da343ce520cc70f2542de9d33964fbf54e2bc80a70,PodSandboxId:5433b9300e41b6ec6a7e4015ada8c4c66f270fa2b5ef7513159ae78190f66693,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711588
098881885696,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1de4e3c3c3539f681e560c69accf057,},Annotations:map[string]string{io.kubernetes.container.hash: 56b0e412,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6147034-bcfd-4b41-9843-9a498db79efc name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:26:23 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:26:23.603751208Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=826d236d-8767-42d2-a241-e776b41bb54e name=/runtime.v1.RuntimeService/Version
	Mar 28 01:26:23 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:26:23.603828001Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=826d236d-8767-42d2-a241-e776b41bb54e name=/runtime.v1.RuntimeService/Version
	Mar 28 01:26:23 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:26:23.604974825Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6a7769f0-bf80-4582-9dbc-68fcf77df64b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:26:23 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:26:23.605376379Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711589183605341379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a7769f0-bf80-4582-9dbc-68fcf77df64b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:26:23 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:26:23.606141458Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8faf035-0a6b-4a47-90e8-3b23d006f720 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:26:23 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:26:23.606192662Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8faf035-0a6b-4a47-90e8-3b23d006f720 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:26:23 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:26:23.606391797Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e424026873582b3cb422868efb139c9493e87ce38c6f5d50d6c75052ba346e03,PodSandboxId:e7a6e9a7eeb6cc208ff629ec60c10d876640cdb9da4bdc52d76c224f8c98904a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711588120120863506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb80efe2-521f-45d5-84e7-f6dc216b4c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bb55d66,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de1711ee6a5dc43bc28c1177f89f91198505babc976460637ce8225259d5a408,PodSandboxId:803dccd16ce63ecad890423782c9d84b6c72ace6ba07f39a49de4bb6749d1736,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588119047022566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-qzcfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e7bfa94-f249-4f7a-be7b-9a615810c956,},Annotations:map[string]string{io.kubernetes.container.hash: 525c88a1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422a905518d5426ce48b860149b0f9588ee1bb14058d9bd1ac78a3ea72037fd9,PodSandboxId:e09bb8fab1a58dddc95cb04ec5a31fc709e4569bb2cde76074684683933a5afe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711588118915220409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-js7j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 1f31a6ee-9417-4f4a-ba0b-fdbab6a9169d,},Annotations:map[string]string{io.kubernetes.container.hash: f8ea6801,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae8bdde5b55d2c17376a80e0d2822b57a3d6af056ea8deac369393e0f38fd42,PodSandboxId:3cfb417abd42b4954bb5c2a4c8b3fdb3ccf29d420bf7165f81ee5ef1e199695c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588118976846562,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-gdv5x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b4b835c-ae9d-4eff-ab37-
6ccb7e36a748,},Annotations:map[string]string{io.kubernetes.container.hash: 5994b028,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a84313d97f96d700a70f3447583b8682711e293b8d7186846062d8b4f3b29f3,PodSandboxId:0eb54a2d43e98c8847b017d2ee21af27cd75d4f8482dedf5f847491ca95bf120,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:171158809910719692
1,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dfb04a92f09d808d7e99d429b5cee4e,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:360c718fc7dc9fd42d5b06bad743933fe575f0f169492bd0d6227e57e740f172,PodSandboxId:a1a715d4cc301d4ea0d63d94d5aa77988679a62407b33116761afb7146a874a1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711588099109294768,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67001339a7063aa3fa376614daa7f54,},Annotations:map[string]string{io.kubernetes.container.hash: a066b342,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a6698011c7da31695bc91fd0cc71cbd5f23ddcb2bb6527a8bc650716e83867,PodSandboxId:40793c655cfd9768e76f2f83a71424a300d0aac057cc2f611bdda09ed1b2a3fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711588098983
618428,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 752d20882748f6f16766053339f66ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a32cb718b54d521eab0e1da343ce520cc70f2542de9d33964fbf54e2bc80a70,PodSandboxId:5433b9300e41b6ec6a7e4015ada8c4c66f270fa2b5ef7513159ae78190f66693,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711588
098881885696,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1de4e3c3c3539f681e560c69accf057,},Annotations:map[string]string{io.kubernetes.container.hash: 56b0e412,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c8faf035-0a6b-4a47-90e8-3b23d006f720 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:26:23 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:26:23.644847201Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ed412ffd-57b8-4f7c-9f7b-3002c3b2192f name=/runtime.v1.RuntimeService/Version
	Mar 28 01:26:23 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:26:23.644923042Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ed412ffd-57b8-4f7c-9f7b-3002c3b2192f name=/runtime.v1.RuntimeService/Version
	Mar 28 01:26:23 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:26:23.646925702Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=83a35da0-3131-4fbe-8bdc-162e776914f2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:26:23 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:26:23.647312119Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711589183647288369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=83a35da0-3131-4fbe-8bdc-162e776914f2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:26:23 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:26:23.648019045Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b61e63d5-c743-4ae4-a39e-7d6e4140b0d9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:26:23 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:26:23.648069827Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b61e63d5-c743-4ae4-a39e-7d6e4140b0d9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:26:23 default-k8s-diff-port-283961 crio[690]: time="2024-03-28 01:26:23.648245748Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e424026873582b3cb422868efb139c9493e87ce38c6f5d50d6c75052ba346e03,PodSandboxId:e7a6e9a7eeb6cc208ff629ec60c10d876640cdb9da4bdc52d76c224f8c98904a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711588120120863506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb80efe2-521f-45d5-84e7-f6dc216b4c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bb55d66,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de1711ee6a5dc43bc28c1177f89f91198505babc976460637ce8225259d5a408,PodSandboxId:803dccd16ce63ecad890423782c9d84b6c72ace6ba07f39a49de4bb6749d1736,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588119047022566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-qzcfp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e7bfa94-f249-4f7a-be7b-9a615810c956,},Annotations:map[string]string{io.kubernetes.container.hash: 525c88a1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422a905518d5426ce48b860149b0f9588ee1bb14058d9bd1ac78a3ea72037fd9,PodSandboxId:e09bb8fab1a58dddc95cb04ec5a31fc709e4569bb2cde76074684683933a5afe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711588118915220409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-js7j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 1f31a6ee-9417-4f4a-ba0b-fdbab6a9169d,},Annotations:map[string]string{io.kubernetes.container.hash: f8ea6801,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae8bdde5b55d2c17376a80e0d2822b57a3d6af056ea8deac369393e0f38fd42,PodSandboxId:3cfb417abd42b4954bb5c2a4c8b3fdb3ccf29d420bf7165f81ee5ef1e199695c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588118976846562,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-gdv5x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b4b835c-ae9d-4eff-ab37-
6ccb7e36a748,},Annotations:map[string]string{io.kubernetes.container.hash: 5994b028,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a84313d97f96d700a70f3447583b8682711e293b8d7186846062d8b4f3b29f3,PodSandboxId:0eb54a2d43e98c8847b017d2ee21af27cd75d4f8482dedf5f847491ca95bf120,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:171158809910719692
1,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dfb04a92f09d808d7e99d429b5cee4e,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:360c718fc7dc9fd42d5b06bad743933fe575f0f169492bd0d6227e57e740f172,PodSandboxId:a1a715d4cc301d4ea0d63d94d5aa77988679a62407b33116761afb7146a874a1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711588099109294768,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67001339a7063aa3fa376614daa7f54,},Annotations:map[string]string{io.kubernetes.container.hash: a066b342,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a6698011c7da31695bc91fd0cc71cbd5f23ddcb2bb6527a8bc650716e83867,PodSandboxId:40793c655cfd9768e76f2f83a71424a300d0aac057cc2f611bdda09ed1b2a3fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711588098983
618428,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 752d20882748f6f16766053339f66ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a32cb718b54d521eab0e1da343ce520cc70f2542de9d33964fbf54e2bc80a70,PodSandboxId:5433b9300e41b6ec6a7e4015ada8c4c66f270fa2b5ef7513159ae78190f66693,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711588
098881885696,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-283961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1de4e3c3c3539f681e560c69accf057,},Annotations:map[string]string{io.kubernetes.container.hash: 56b0e412,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b61e63d5-c743-4ae4-a39e-7d6e4140b0d9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e424026873582       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 minutes ago      Running             storage-provisioner       0                   e7a6e9a7eeb6c       storage-provisioner
	de1711ee6a5dc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   17 minutes ago      Running             coredns                   0                   803dccd16ce63       coredns-76f75df574-qzcfp
	3ae8bdde5b55d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   17 minutes ago      Running             coredns                   0                   3cfb417abd42b       coredns-76f75df574-gdv5x
	422a905518d54       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   17 minutes ago      Running             kube-proxy                0                   e09bb8fab1a58       kube-proxy-js7j2
	360c718fc7dc9       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   18 minutes ago      Running             kube-apiserver            2                   a1a715d4cc301       kube-apiserver-default-k8s-diff-port-283961
	0a84313d97f96       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   18 minutes ago      Running             kube-scheduler            2                   0eb54a2d43e98       kube-scheduler-default-k8s-diff-port-283961
	59a6698011c7d       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   18 minutes ago      Running             kube-controller-manager   2                   40793c655cfd9       kube-controller-manager-default-k8s-diff-port-283961
	5a32cb718b54d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   18 minutes ago      Running             etcd                      2                   5433b9300e41b       etcd-default-k8s-diff-port-283961
	
	
	==> coredns [3ae8bdde5b55d2c17376a80e0d2822b57a3d6af056ea8deac369393e0f38fd42] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [de1711ee6a5dc43bc28c1177f89f91198505babc976460637ce8225259d5a408] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-283961
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-283961
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=default-k8s-diff-port-283961
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_28T01_08_25_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 01:08:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-283961
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 01:26:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 01:24:04 +0000   Thu, 28 Mar 2024 01:08:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 01:24:04 +0000   Thu, 28 Mar 2024 01:08:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 01:24:04 +0000   Thu, 28 Mar 2024 01:08:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 01:24:04 +0000   Thu, 28 Mar 2024 01:08:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.224
	  Hostname:    default-k8s-diff-port-283961
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 87d8e612642044708c030a5a4ca94107
	  System UUID:                87d8e612-6420-4470-8c03-0a5a4ca94107
	  Boot ID:                    d1c4e68a-97a6-4101-8e7c-c0a713f0e9a0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-gdv5x                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-76f75df574-qzcfp                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-default-k8s-diff-port-283961                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kube-apiserver-default-k8s-diff-port-283961             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-283961    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-js7j2                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-default-k8s-diff-port-283961             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-57f55c9bc5-gkv67                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         17m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 17m   kube-proxy       
	  Normal  Starting                 17m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m   kubelet          Node default-k8s-diff-port-283961 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m   kubelet          Node default-k8s-diff-port-283961 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m   kubelet          Node default-k8s-diff-port-283961 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17m   node-controller  Node default-k8s-diff-port-283961 event: Registered Node default-k8s-diff-port-283961 in Controller
	
	
	==> dmesg <==
	[  +0.055565] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045209] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Mar28 01:03] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.847710] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.666146] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.009088] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.062189] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067957] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.206518] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.169610] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.349923] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +4.885715] systemd-fstab-generator[774]: Ignoring "noauto" option for root device
	[  +0.076595] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.254116] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[  +5.608091] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.662961] kauditd_printk_skb: 74 callbacks suppressed
	[Mar28 01:08] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.517241] systemd-fstab-generator[3401]: Ignoring "noauto" option for root device
	[  +4.665698] kauditd_printk_skb: 55 callbacks suppressed
	[  +2.662670] systemd-fstab-generator[3727]: Ignoring "noauto" option for root device
	[ +12.977680] systemd-fstab-generator[3916]: Ignoring "noauto" option for root device
	[  +0.118919] kauditd_printk_skb: 14 callbacks suppressed
	[Mar28 01:09] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [5a32cb718b54d521eab0e1da343ce520cc70f2542de9d33964fbf54e2bc80a70] <==
	{"level":"info","ts":"2024-03-28T01:08:19.157956Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-28T01:08:19.717119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84bfccc973752067 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-28T01:08:19.717275Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84bfccc973752067 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-28T01:08:19.717497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84bfccc973752067 received MsgPreVoteResp from 84bfccc973752067 at term 1"}
	{"level":"info","ts":"2024-03-28T01:08:19.717802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84bfccc973752067 became candidate at term 2"}
	{"level":"info","ts":"2024-03-28T01:08:19.717844Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84bfccc973752067 received MsgVoteResp from 84bfccc973752067 at term 2"}
	{"level":"info","ts":"2024-03-28T01:08:19.71788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84bfccc973752067 became leader at term 2"}
	{"level":"info","ts":"2024-03-28T01:08:19.718046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 84bfccc973752067 elected leader 84bfccc973752067 at term 2"}
	{"level":"info","ts":"2024-03-28T01:08:19.722033Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T01:08:19.724888Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"84bfccc973752067","local-member-attributes":"{Name:default-k8s-diff-port-283961 ClientURLs:[https://192.168.39.224:2379]}","request-path":"/0/members/84bfccc973752067/attributes","cluster-id":"6ff541a05f82feac","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-28T01:08:19.72497Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T01:08:19.724827Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6ff541a05f82feac","local-member-id":"84bfccc973752067","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T01:08:19.727799Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T01:08:19.727898Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T01:08:19.727957Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T01:08:19.744263Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.224:2379"}
	{"level":"info","ts":"2024-03-28T01:08:19.755431Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-28T01:08:19.760731Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-28T01:08:19.76787Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-28T01:18:20.310077Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":679}
	{"level":"info","ts":"2024-03-28T01:18:20.320972Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":679,"took":"10.313623ms","hash":2695161192,"current-db-size-bytes":2281472,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2281472,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-03-28T01:18:20.321079Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2695161192,"revision":679,"compact-revision":-1}
	{"level":"info","ts":"2024-03-28T01:23:20.319454Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":922}
	{"level":"info","ts":"2024-03-28T01:23:20.324043Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":922,"took":"3.927031ms","hash":3774700058,"current-db-size-bytes":2281472,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1556480,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-03-28T01:23:20.324134Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3774700058,"revision":922,"compact-revision":679}
	
	
	==> kernel <==
	 01:26:24 up 23 min,  0 users,  load average: 0.24, 0.21, 0.14
	Linux default-k8s-diff-port-283961 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [360c718fc7dc9fd42d5b06bad743933fe575f0f169492bd0d6227e57e740f172] <==
	I0328 01:21:22.899061       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:23:21.902037       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:23:21.902163       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0328 01:23:22.902711       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:23:22.902765       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0328 01:23:22.902774       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:23:22.902929       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:23:22.903045       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0328 01:23:22.904471       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:24:22.903185       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:24:22.903403       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0328 01:24:22.903431       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:24:22.904802       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:24:22.904878       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0328 01:24:22.904887       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:26:22.904732       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:26:22.905106       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0328 01:26:22.905181       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:26:22.905020       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:26:22.905352       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0328 01:26:22.906902       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [59a6698011c7da31695bc91fd0cc71cbd5f23ddcb2bb6527a8bc650716e83867] <==
	I0328 01:20:37.676146       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:21:07.141972       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:21:07.686500       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:21:37.148389       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:21:37.696418       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:22:07.154168       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:22:07.705914       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:22:37.161398       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:22:37.714431       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:23:07.166782       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:23:07.726592       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:23:37.173468       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:23:37.735459       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:24:07.179794       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:24:07.743287       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:24:37.184955       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:24:37.752580       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0328 01:24:51.341510       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="258.116µs"
	I0328 01:25:02.339172       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="114.576µs"
	E0328 01:25:07.190535       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:25:07.760597       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:25:37.195941       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:25:37.769474       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:26:07.200617       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:26:07.779735       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [422a905518d5426ce48b860149b0f9588ee1bb14058d9bd1ac78a3ea72037fd9] <==
	I0328 01:08:39.415846       1 server_others.go:72] "Using iptables proxy"
	I0328 01:08:39.459000       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.224"]
	I0328 01:08:39.694064       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 01:08:39.694084       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 01:08:39.694101       1 server_others.go:168] "Using iptables Proxier"
	I0328 01:08:39.743200       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 01:08:39.756270       1 server.go:865] "Version info" version="v1.29.3"
	I0328 01:08:39.763781       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:08:39.766583       1 config.go:188] "Starting service config controller"
	I0328 01:08:39.766677       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 01:08:39.766838       1 config.go:97] "Starting endpoint slice config controller"
	I0328 01:08:39.766904       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 01:08:39.771058       1 config.go:315] "Starting node config controller"
	I0328 01:08:39.771148       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 01:08:39.868141       1 shared_informer.go:318] Caches are synced for service config
	I0328 01:08:39.868391       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0328 01:08:39.872050       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [0a84313d97f96d700a70f3447583b8682711e293b8d7186846062d8b4f3b29f3] <==
	W0328 01:08:21.911272       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0328 01:08:21.911976       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0328 01:08:21.911353       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0328 01:08:21.912226       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0328 01:08:22.764886       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0328 01:08:22.765181       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0328 01:08:22.774715       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0328 01:08:22.775188       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0328 01:08:22.796723       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0328 01:08:22.796776       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0328 01:08:22.807576       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0328 01:08:22.807615       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0328 01:08:22.834169       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0328 01:08:22.834237       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0328 01:08:22.916223       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0328 01:08:22.916273       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0328 01:08:22.945328       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0328 01:08:22.945382       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0328 01:08:22.946475       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0328 01:08:22.946519       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0328 01:08:23.199043       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0328 01:08:23.199092       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0328 01:08:23.238874       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0328 01:08:23.238972       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 01:08:25.088378       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 28 01:24:05 default-k8s-diff-port-283961 kubelet[3734]: E0328 01:24:05.323095    3734 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gkv67" podUID="7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356"
	Mar 28 01:24:16 default-k8s-diff-port-283961 kubelet[3734]: E0328 01:24:16.322125    3734 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gkv67" podUID="7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356"
	Mar 28 01:24:25 default-k8s-diff-port-283961 kubelet[3734]: E0328 01:24:25.361717    3734 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 01:24:25 default-k8s-diff-port-283961 kubelet[3734]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 01:24:25 default-k8s-diff-port-283961 kubelet[3734]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 01:24:25 default-k8s-diff-port-283961 kubelet[3734]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 01:24:25 default-k8s-diff-port-283961 kubelet[3734]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 01:24:27 default-k8s-diff-port-283961 kubelet[3734]: E0328 01:24:27.322239    3734 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gkv67" podUID="7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356"
	Mar 28 01:24:40 default-k8s-diff-port-283961 kubelet[3734]: E0328 01:24:40.340577    3734 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 28 01:24:40 default-k8s-diff-port-283961 kubelet[3734]: E0328 01:24:40.340694    3734 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 28 01:24:40 default-k8s-diff-port-283961 kubelet[3734]: E0328 01:24:40.340915    3734 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-9l2w8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe
:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessa
gePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-gkv67_kube-system(7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Mar 28 01:24:40 default-k8s-diff-port-283961 kubelet[3734]: E0328 01:24:40.340959    3734 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-gkv67" podUID="7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356"
	Mar 28 01:24:51 default-k8s-diff-port-283961 kubelet[3734]: E0328 01:24:51.321627    3734 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gkv67" podUID="7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356"
	Mar 28 01:25:02 default-k8s-diff-port-283961 kubelet[3734]: E0328 01:25:02.321516    3734 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gkv67" podUID="7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356"
	Mar 28 01:25:14 default-k8s-diff-port-283961 kubelet[3734]: E0328 01:25:14.321470    3734 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gkv67" podUID="7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356"
	Mar 28 01:25:25 default-k8s-diff-port-283961 kubelet[3734]: E0328 01:25:25.359177    3734 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 01:25:25 default-k8s-diff-port-283961 kubelet[3734]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 01:25:25 default-k8s-diff-port-283961 kubelet[3734]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 01:25:25 default-k8s-diff-port-283961 kubelet[3734]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 01:25:25 default-k8s-diff-port-283961 kubelet[3734]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 01:25:28 default-k8s-diff-port-283961 kubelet[3734]: E0328 01:25:28.322931    3734 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gkv67" podUID="7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356"
	Mar 28 01:25:39 default-k8s-diff-port-283961 kubelet[3734]: E0328 01:25:39.324824    3734 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gkv67" podUID="7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356"
	Mar 28 01:25:50 default-k8s-diff-port-283961 kubelet[3734]: E0328 01:25:50.321328    3734 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gkv67" podUID="7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356"
	Mar 28 01:26:03 default-k8s-diff-port-283961 kubelet[3734]: E0328 01:26:03.324794    3734 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gkv67" podUID="7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356"
	Mar 28 01:26:15 default-k8s-diff-port-283961 kubelet[3734]: E0328 01:26:15.323027    3734 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-gkv67" podUID="7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356"
	
	
	==> storage-provisioner [e424026873582b3cb422868efb139c9493e87ce38c6f5d50d6c75052ba346e03] <==
	I0328 01:08:40.285577       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0328 01:08:40.311694       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0328 01:08:40.311770       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0328 01:08:40.324029       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0328 01:08:40.324588       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-283961_4416619f-58e4-47e9-bc15-4a33ec62ad43!
	I0328 01:08:40.327785       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9a7ed12b-89ef-41b7-afcc-a955c8331b11", APIVersion:"v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-283961_4416619f-58e4-47e9-bc15-4a33ec62ad43 became leader
	I0328 01:08:40.425544       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-283961_4416619f-58e4-47e9-bc15-4a33ec62ad43!
	

                                                
                                                
-- /stdout --
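
Note: the repeated ImagePullBackOff entries in the kubelet log above all trace back to the metrics-server image being pulled from the nonexistent registry fake.domain, which matches the addons-enable invocation recorded in the Audit table of the next failure section. A minimal sketch of that invocation, with the profile name and flags taken from the logs and shown only for context, follows:

    # Overrides the MetricsServer image and points its registry at fake.domain,
    # so the pull fails DNS resolution and the pod stays in ImagePullBackOff.
    out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-283961 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
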
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-283961 -n default-k8s-diff-port-283961
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-283961 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-gkv67
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-283961 describe pod metrics-server-57f55c9bc5-gkv67
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-283961 describe pod metrics-server-57f55c9bc5-gkv67: exit status 1 (64.481844ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-gkv67" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-283961 describe pod metrics-server-57f55c9bc5-gkv67: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (520.00s)
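
For reference, a minimal sketch of the post-mortem sequence the harness ran above, reusing the context and pod name from the log: list non-Running pods across all namespaces, then describe the reported pod. The NotFound error above is most likely because the harness issues the describe without a namespace while the metrics-server pod lives in kube-system; the -n kube-system flag below is an addition over the original invocation.

    # List pods that are not in the Running phase, across all namespaces.
    kubectl --context default-k8s-diff-port-283961 get po -A \
      -o=jsonpath='{.items[*].metadata.name}' --field-selector=status.phase!=Running
    # Describe the reported pod in its actual namespace (provided it still exists).
    kubectl --context default-k8s-diff-port-283961 -n kube-system \
      describe pod metrics-server-57f55c9bc5-gkv67
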

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (278.49s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-248059 -n no-preload-248059
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-28 01:22:44.36738808 +0000 UTC m=+6595.618867010
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-248059 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-248059 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.639µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-248059 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
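
A minimal sketch of the check this assertion performs, using the same context and deployment as the command above; the grep is illustrative and not part of the harness:

    # Describe the scraper deployment and look for the expected image override.
    kubectl --context no-preload-248059 -n kubernetes-dashboard \
      describe deploy/dashboard-metrics-scraper | grep -F 'registry.k8s.io/echoserver:1.4'
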
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-248059 -n no-preload-248059
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-248059 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-248059 logs -n 25: (2.402216553s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| stop    | -p no-preload-248059                                   | no-preload-248059            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-808809            | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-808809                                  | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p newest-cni-013642             | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p newest-cni-013642                  | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p newest-cni-013642 --memory=2200 --alsologtostderr   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:56 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |                |                     |                     |
	| image   | newest-cni-013642 image list                           | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	|         | --format=json                                          |                              |         |                |                     |                     |
	| pause   | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| unpause | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| delete  | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	| delete  | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	| start   | -p                                                     | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:57 UTC |
	|         | default-k8s-diff-port-283961                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-986088        | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-248059                  | no-preload-248059            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-283961  | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC | 28 Mar 24 00:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | default-k8s-diff-port-283961                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| start   | -p no-preload-248059 --memory=2200                     | no-preload-248059            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC | 28 Mar 24 01:09 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-808809                 | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-808809                                  | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC | 28 Mar 24 01:07 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-986088                              | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC | 28 Mar 24 00:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-986088             | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC | 28 Mar 24 00:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-986088                              | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-283961       | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 01:00 UTC | 28 Mar 24 01:08 UTC |
	|         | default-k8s-diff-port-283961                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 01:00:05
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 01:00:05.675380 1131600 out.go:291] Setting OutFile to fd 1 ...
	I0328 01:00:05.675675 1131600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 01:00:05.675710 1131600 out.go:304] Setting ErrFile to fd 2...
	I0328 01:00:05.675718 1131600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 01:00:05.676017 1131600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 01:00:05.676919 1131600 out.go:298] Setting JSON to false
	I0328 01:00:05.678046 1131600 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":31303,"bootTime":1711556303,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0328 01:00:05.678129 1131600 start.go:139] virtualization: kvm guest
	I0328 01:00:05.681128 1131600 out.go:177] * [default-k8s-diff-port-283961] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0328 01:00:05.683139 1131600 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 01:00:05.683129 1131600 notify.go:220] Checking for updates...
	I0328 01:00:05.685082 1131600 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 01:00:05.686765 1131600 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:00:05.688389 1131600 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	I0328 01:00:05.690187 1131600 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0328 01:00:05.691887 1131600 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 01:00:05.693775 1131600 config.go:182] Loaded profile config "default-k8s-diff-port-283961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:00:05.694270 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:00:05.694323 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:00:05.709757 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35333
	I0328 01:00:05.710275 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:00:05.710875 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:00:05.710900 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:00:05.711323 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:00:05.711531 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:00:05.711893 1131600 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 01:00:05.712342 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:00:05.712392 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:00:05.727583 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44249
	I0328 01:00:05.728107 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:00:05.728595 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:00:05.728625 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:00:05.728945 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:00:05.729170 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:00:05.763895 1131600 out.go:177] * Using the kvm2 driver based on existing profile
	I0328 01:00:05.765397 1131600 start.go:297] selected driver: kvm2
	I0328 01:00:05.765431 1131600 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:00:05.765564 1131600 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 01:00:05.766282 1131600 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 01:00:05.766391 1131600 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18485-1069254/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0328 01:00:05.783130 1131600 install.go:137] /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0328 01:00:05.783602 1131600 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:00:05.783724 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:00:05.783745 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:00:05.783795 1131600 start.go:340] cluster config:
	{Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:00:05.783949 1131600 iso.go:125] acquiring lock: {Name:mk3da1fa7d63a581c817f327d86a827697458cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 01:00:05.785871 1131600 out.go:177] * Starting "default-k8s-diff-port-283961" primary control-plane node in "default-k8s-diff-port-283961" cluster
	I0328 01:00:02.570474 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:05.787210 1131600 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 01:00:05.787259 1131600 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0328 01:00:05.787272 1131600 cache.go:56] Caching tarball of preloaded images
	I0328 01:00:05.787364 1131600 preload.go:173] Found /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0328 01:00:05.787376 1131600 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0328 01:00:05.787509 1131600 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/config.json ...
	I0328 01:00:05.787742 1131600 start.go:360] acquireMachinesLock for default-k8s-diff-port-283961: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 01:00:08.650481 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:11.722571 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:17.802536 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:20.874568 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:26.954473 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:30.026674 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:36.106489 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:39.178555 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:45.258539 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:48.330581 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:54.410577 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:57.482545 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:03.562558 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:06.634602 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:12.714559 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:15.786597 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:21.866544 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:24.938619 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:31.018631 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:34.090562 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:40.170864 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:43.242565 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:49.322492 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:52.394572 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:58.474562 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:01.546621 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:07.626510 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:10.698534 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:13.703348 1130949 start.go:364] duration metric: took 4m25.677777198s to acquireMachinesLock for "embed-certs-808809"
	I0328 01:02:13.703416 1130949 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:02:13.703429 1130949 fix.go:54] fixHost starting: 
	I0328 01:02:13.703888 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:02:13.703923 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:02:13.719480 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44835
	I0328 01:02:13.719968 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:02:13.720450 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:02:13.720475 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:02:13.720774 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:02:13.721011 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:13.721182 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:02:13.722796 1130949 fix.go:112] recreateIfNeeded on embed-certs-808809: state=Stopped err=<nil>
	I0328 01:02:13.722828 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	W0328 01:02:13.722972 1130949 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:02:13.724895 1130949 out.go:177] * Restarting existing kvm2 VM for "embed-certs-808809" ...
	I0328 01:02:13.700647 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:02:13.700689 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:02:13.701054 1130827 buildroot.go:166] provisioning hostname "no-preload-248059"
	I0328 01:02:13.701085 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:02:13.701344 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:02:13.703200 1130827 machine.go:97] duration metric: took 4m37.399616994s to provisionDockerMachine
	I0328 01:02:13.703243 1130827 fix.go:56] duration metric: took 4m37.42352766s for fixHost
	I0328 01:02:13.703249 1130827 start.go:83] releasing machines lock for "no-preload-248059", held for 4m37.423563163s
	W0328 01:02:13.703274 1130827 start.go:713] error starting host: provision: host is not running
	W0328 01:02:13.703400 1130827 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0328 01:02:13.703411 1130827 start.go:728] Will try again in 5 seconds ...
	I0328 01:02:13.726437 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Start
	I0328 01:02:13.726574 1130949 main.go:141] libmachine: (embed-certs-808809) Ensuring networks are active...
	I0328 01:02:13.727407 1130949 main.go:141] libmachine: (embed-certs-808809) Ensuring network default is active
	I0328 01:02:13.727667 1130949 main.go:141] libmachine: (embed-certs-808809) Ensuring network mk-embed-certs-808809 is active
	I0328 01:02:13.728050 1130949 main.go:141] libmachine: (embed-certs-808809) Getting domain xml...
	I0328 01:02:13.728836 1130949 main.go:141] libmachine: (embed-certs-808809) Creating domain...
	I0328 01:02:14.931757 1130949 main.go:141] libmachine: (embed-certs-808809) Waiting to get IP...
	I0328 01:02:14.932921 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:14.933298 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:14.933396 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:14.933294 1131950 retry.go:31] will retry after 279.257708ms: waiting for machine to come up
	I0328 01:02:15.213830 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:15.214439 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:15.214472 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:15.214415 1131950 retry.go:31] will retry after 387.406107ms: waiting for machine to come up
	I0328 01:02:15.603078 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:15.603464 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:15.603497 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:15.603431 1131950 retry.go:31] will retry after 466.553599ms: waiting for machine to come up
	I0328 01:02:16.072165 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:16.072702 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:16.072732 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:16.072643 1131950 retry.go:31] will retry after 375.428381ms: waiting for machine to come up
	I0328 01:02:16.449155 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:16.449614 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:16.449652 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:16.449553 1131950 retry.go:31] will retry after 466.238903ms: waiting for machine to come up
	I0328 01:02:16.917246 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:16.917697 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:16.917723 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:16.917633 1131950 retry.go:31] will retry after 772.819544ms: waiting for machine to come up
	I0328 01:02:17.691645 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:17.692121 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:17.692151 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:17.692071 1131950 retry.go:31] will retry after 1.19065976s: waiting for machine to come up
	I0328 01:02:18.704949 1130827 start.go:360] acquireMachinesLock for no-preload-248059: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 01:02:18.884525 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:18.885019 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:18.885044 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:18.884980 1131950 retry.go:31] will retry after 1.434726863s: waiting for machine to come up
	I0328 01:02:20.321473 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:20.322009 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:20.322035 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:20.321951 1131950 retry.go:31] will retry after 1.275277555s: waiting for machine to come up
	I0328 01:02:21.599454 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:21.600049 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:21.600074 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:21.599982 1131950 retry.go:31] will retry after 1.852516502s: waiting for machine to come up
	I0328 01:02:23.455282 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:23.455760 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:23.455830 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:23.455746 1131950 retry.go:31] will retry after 2.056736141s: waiting for machine to come up
	I0328 01:02:25.514112 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:25.514538 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:25.514569 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:25.514492 1131950 retry.go:31] will retry after 2.711520437s: waiting for machine to come up
	I0328 01:02:32.751719 1131323 start.go:364] duration metric: took 3m27.302408957s to acquireMachinesLock for "old-k8s-version-986088"
	I0328 01:02:32.751823 1131323 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:02:32.751833 1131323 fix.go:54] fixHost starting: 
	I0328 01:02:32.752289 1131323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:02:32.752326 1131323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:02:32.770119 1131323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45807
	I0328 01:02:32.770723 1131323 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:02:32.771352 1131323 main.go:141] libmachine: Using API Version  1
	I0328 01:02:32.771380 1131323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:02:32.771790 1131323 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:02:32.772020 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:32.772206 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetState
	I0328 01:02:32.773947 1131323 fix.go:112] recreateIfNeeded on old-k8s-version-986088: state=Stopped err=<nil>
	I0328 01:02:32.773980 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	W0328 01:02:32.774166 1131323 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:02:32.776416 1131323 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-986088" ...
	I0328 01:02:28.229576 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:28.229970 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:28.230000 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:28.229920 1131950 retry.go:31] will retry after 3.231405371s: waiting for machine to come up
	I0328 01:02:31.463477 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.463884 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has current primary IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.463902 1130949 main.go:141] libmachine: (embed-certs-808809) Found IP for machine: 192.168.72.210
	I0328 01:02:31.463915 1130949 main.go:141] libmachine: (embed-certs-808809) Reserving static IP address...
	I0328 01:02:31.464394 1130949 main.go:141] libmachine: (embed-certs-808809) Reserved static IP address: 192.168.72.210
	I0328 01:02:31.464413 1130949 main.go:141] libmachine: (embed-certs-808809) Waiting for SSH to be available...
	I0328 01:02:31.464439 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "embed-certs-808809", mac: "52:54:00:60:d4:d2", ip: "192.168.72.210"} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.464464 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | skip adding static IP to network mk-embed-certs-808809 - found existing host DHCP lease matching {name: "embed-certs-808809", mac: "52:54:00:60:d4:d2", ip: "192.168.72.210"}
	I0328 01:02:31.464480 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Getting to WaitForSSH function...
	I0328 01:02:31.466488 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.466876 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.466916 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.467054 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Using SSH client type: external
	I0328 01:02:31.467085 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa (-rw-------)
	I0328 01:02:31.467124 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:02:31.467138 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | About to run SSH command:
	I0328 01:02:31.467155 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | exit 0
	I0328 01:02:31.590708 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | SSH cmd err, output: <nil>: 
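Editor's note: the "About to run SSH command: exit 0" exchange above is the driver proving SSH is reachable by running a no-op command through a non-interactive ssh invocation. A rough sketch of the same probe using the system ssh client; the user, address, and key path are placeholders, not values verified against this run:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady returns nil once `ssh ... exit 0` succeeds, i.e. the guest accepts
	// key-based logins, mirroring the WaitForSSH probe in the log above.
	func sshReady(user, addr, keyPath string) error {
		cmd := exec.Command("ssh",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			fmt.Sprintf("%s@%s", user, addr),
			"exit 0")
		return cmd.Run()
	}

	func main() {
		for i := 0; i < 30; i++ {
			if err := sshReady("docker", "192.168.72.210", "/path/to/id_rsa"); err == nil {
				fmt.Println("SSH is available")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("gave up waiting for SSH")
	}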
	I0328 01:02:31.591111 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetConfigRaw
	I0328 01:02:31.591959 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:31.594592 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.595075 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.595114 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.595364 1130949 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/config.json ...
	I0328 01:02:31.595634 1130949 machine.go:94] provisionDockerMachine start ...
	I0328 01:02:31.595656 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:31.595901 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.598184 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.598529 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.598556 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.598681 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:31.598851 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.599012 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.599163 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:31.599333 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:31.599604 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:31.599619 1130949 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:02:31.703241 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:02:31.703272 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetMachineName
	I0328 01:02:31.703575 1130949 buildroot.go:166] provisioning hostname "embed-certs-808809"
	I0328 01:02:31.703602 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetMachineName
	I0328 01:02:31.703779 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.706495 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.706777 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.706799 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.706978 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:31.707146 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.707334 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.707580 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:31.707765 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:31.707985 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:31.708004 1130949 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-808809 && echo "embed-certs-808809" | sudo tee /etc/hostname
	I0328 01:02:31.821578 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-808809
	
	I0328 01:02:31.821608 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.824412 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.824791 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.824825 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.825030 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:31.825253 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.825432 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.825589 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:31.825758 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:31.825950 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:31.825976 1130949 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-808809' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-808809/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-808809' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:02:31.937655 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
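Editor's note: provisionDockerMachine first sets the hostname (`sudo hostname ... | sudo tee /etc/hostname`) and then patches /etc/hosts only when the name is missing, which keeps the step idempotent across restarts. A sketch that assembles the same shell, assuming a generic runner would execute each string over SSH:

	package main

	import "fmt"

	// provisionHostnameScript returns the two commands this style of provisioning
	// runs on the guest: set the hostname, then make sure 127.0.1.1 maps to it.
	func provisionHostnameScript(name string) []string {
		setHostname := fmt.Sprintf(
			"sudo hostname %[1]s && echo %[1]q | sudo tee /etc/hostname", name)
		patchHosts := fmt.Sprintf(`
	if ! grep -xq '.*\s%[1]s' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
		else
			echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
		fi
	fi`, name)
		return []string{setHostname, patchHosts}
	}

	func main() {
		for _, cmd := range provisionHostnameScript("embed-certs-808809") {
			fmt.Println(cmd) // would be passed to an SSH runner in practice
		}
	}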
	I0328 01:02:31.937701 1130949 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:02:31.937728 1130949 buildroot.go:174] setting up certificates
	I0328 01:02:31.937742 1130949 provision.go:84] configureAuth start
	I0328 01:02:31.937754 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetMachineName
	I0328 01:02:31.938093 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:31.940874 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.941328 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.941360 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.941580 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.944250 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.944580 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.944610 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.944828 1130949 provision.go:143] copyHostCerts
	I0328 01:02:31.944910 1130949 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:02:31.944926 1130949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:02:31.945006 1130949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:02:31.945151 1130949 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:02:31.945162 1130949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:02:31.945205 1130949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:02:31.945285 1130949 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:02:31.945294 1130949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:02:31.945330 1130949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:02:31.945400 1130949 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.embed-certs-808809 san=[127.0.0.1 192.168.72.210 embed-certs-808809 localhost minikube]
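Editor's note: configureAuth regenerates the machine's server certificate with the SANs listed above (loopback, the VM IP, the machine name, localhost, minikube), signed by the local CA. The following is a minimal illustrative sketch of issuing a SAN-bearing certificate from an in-memory CA using Go's standard library; it is not the provision.go implementation, and the subject/organization strings are only echoing values visible in the log:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	// issueServerCert signs a server certificate for the given DNS names and IPs
	// with the supplied CA, roughly what "generating server cert ... san=[...]"
	// in the log refers to.
	func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey,
		dnsNames []string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-808809"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     dnsNames, // e.g. embed-certs-808809, localhost, minikube
			IPAddresses:  ips,      // e.g. 127.0.0.1, 192.168.72.210
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return der, key, nil
	}

	func main() {
		// Self-signed throwaway CA, standing in for the minikubeCA key pair.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now().Add(-time.Hour),
			NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		ca, _ := x509.ParseCertificate(caDER)

		der, _, err := issueServerCert(ca, caKey,
			[]string{"embed-certs-808809", "localhost", "minikube"},
			[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.210")})
		fmt.Println(len(der), err)
	}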
	I0328 01:02:32.070925 1130949 provision.go:177] copyRemoteCerts
	I0328 01:02:32.071007 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:02:32.071067 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.073876 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.074295 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.074339 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.074541 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.074754 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.074931 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.075091 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.158945 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:02:32.184903 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0328 01:02:32.210411 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:02:32.235788 1130949 provision.go:87] duration metric: took 298.03126ms to configureAuth
	I0328 01:02:32.235827 1130949 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:02:32.236116 1130949 config.go:182] Loaded profile config "embed-certs-808809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:02:32.236336 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.239186 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.239520 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.239555 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.239782 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.240036 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.240257 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.240431 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.240633 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:32.240836 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:32.240862 1130949 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:02:32.513263 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:02:32.513298 1130949 machine.go:97] duration metric: took 917.647337ms to provisionDockerMachine
	I0328 01:02:32.513314 1130949 start.go:293] postStartSetup for "embed-certs-808809" (driver="kvm2")
	I0328 01:02:32.513326 1130949 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:02:32.513365 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.513727 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:02:32.513770 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.516906 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.517382 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.517425 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.517603 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.517831 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.517989 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.518115 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.600013 1130949 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:02:32.604953 1130949 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:02:32.604983 1130949 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:02:32.605057 1130949 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:02:32.605148 1130949 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:02:32.605265 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:02:32.617685 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:02:32.646415 1130949 start.go:296] duration metric: took 133.084551ms for postStartSetup
	I0328 01:02:32.646462 1130949 fix.go:56] duration metric: took 18.943034019s for fixHost
	I0328 01:02:32.646490 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.649346 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.649686 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.649717 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.649864 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.650191 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.650444 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.650637 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.650844 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:32.651036 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:32.651069 1130949 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 01:02:32.751522 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587752.718800758
	
	I0328 01:02:32.751547 1130949 fix.go:216] guest clock: 1711587752.718800758
	I0328 01:02:32.751556 1130949 fix.go:229] Guest: 2024-03-28 01:02:32.718800758 +0000 UTC Remote: 2024-03-28 01:02:32.646466137 +0000 UTC m=+284.780134501 (delta=72.334621ms)
	I0328 01:02:32.751598 1130949 fix.go:200] guest clock delta is within tolerance: 72.334621ms
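Editor's note: fix.go reads the guest clock (`date +%s.%N` over SSH), compares it with the host's notion of "now", and only resyncs when the delta exceeds a tolerance; here the 72ms skew passes. A tiny sketch of that comparison, with the tolerance value chosen purely for illustration:

	package main

	import (
		"fmt"
		"math"
		"time"
	)

	// clockWithinTolerance reports whether the guest clock is close enough to the
	// host clock that no resync is needed.
	func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		return math.Abs(float64(delta)) <= float64(tolerance)
	}

	func main() {
		host := time.Now()
		guest := host.Add(72 * time.Millisecond) // delta observed in the log above
		fmt.Println("within tolerance:", clockWithinTolerance(guest, host, time.Second))
	}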
	I0328 01:02:32.751610 1130949 start.go:83] releasing machines lock for "embed-certs-808809", held for 19.048217918s
	I0328 01:02:32.751638 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.751953 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:32.754795 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.755205 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.755240 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.755454 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.756111 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.756320 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.756412 1130949 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:02:32.756475 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.756612 1130949 ssh_runner.go:195] Run: cat /version.json
	I0328 01:02:32.756646 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.759337 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.759468 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.759788 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.759808 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.759845 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.759866 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.760009 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.760018 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.760214 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.760222 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.760364 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.760532 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.760639 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.760698 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.840137 1130949 ssh_runner.go:195] Run: systemctl --version
	I0328 01:02:32.874039 1130949 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:02:33.020534 1130949 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:02:33.027141 1130949 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:02:33.027213 1130949 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:02:33.043738 1130949 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:02:33.043767 1130949 start.go:494] detecting cgroup driver to use...
	I0328 01:02:33.043840 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:02:33.064332 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:02:33.081926 1130949 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:02:33.082016 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:02:33.097179 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:02:33.113157 1130949 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:02:33.233183 1130949 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:02:33.374061 1130949 docker.go:233] disabling docker service ...
	I0328 01:02:33.374145 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:02:33.389813 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:02:33.403439 1130949 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:02:33.546146 1130949 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:02:33.706968 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:02:33.722279 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:02:33.742578 1130949 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 01:02:33.742652 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.754966 1130949 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:02:33.755027 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.767170 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.779960 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.792448 1130949 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:02:33.804912 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.818038 1130949 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.838794 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.852157 1130949 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:02:33.862921 1130949 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:02:33.862981 1130949 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:02:33.880973 1130949 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
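Editor's note: the block above is a check-then-fallback: `sysctl net.bridge.bridge-nf-call-iptables` fails because br_netfilter is not loaded yet, the failure is logged as "might be okay", the module is loaded, and IPv4 forwarding is enabled. A sketch of that sequence with a local command runner standing in for the SSH runner (the helper names are made up):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a shell command and returns its error; stand-in for the
	// ssh_runner calls against the guest in the log above.
	func run(cmdline string) error {
		return exec.Command("sh", "-c", cmdline).Run()
	}

	func ensureBridgeNetfilter() error {
		// The sysctl key only exists once br_netfilter is loaded, so a failure
		// here is expected on a fresh boot and treated as non-fatal.
		if err := run("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
			fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
			if err := run("sudo modprobe br_netfilter"); err != nil {
				return err
			}
		}
		// Kubernetes networking needs IPv4 forwarding regardless.
		return run(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`)
	}

	func main() {
		if err := ensureBridgeNetfilter(); err != nil {
			fmt.Println("failed:", err)
		}
	}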
	I0328 01:02:33.892698 1130949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:02:34.029903 1130949 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 01:02:34.170977 1130949 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:02:34.171074 1130949 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:02:34.176652 1130949 start.go:562] Will wait 60s for crictl version
	I0328 01:02:34.176736 1130949 ssh_runner.go:195] Run: which crictl
	I0328 01:02:34.180993 1130949 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:02:34.224564 1130949 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:02:34.224675 1130949 ssh_runner.go:195] Run: crio --version
	I0328 01:02:34.254457 1130949 ssh_runner.go:195] Run: crio --version
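Editor's note: after restarting CRI-O the code explicitly waits (up to 60s each) for the socket path to appear and for crictl to answer, rather than assuming the restart finished instantly. A generic poll-until-timeout helper in the spirit of those waits (a hypothetical helper, not minikube's API):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// waitFor polls cond every interval until it succeeds or timeout elapses.
	func waitFor(timeout, interval time.Duration, cond func() error) error {
		deadline := time.Now().Add(timeout)
		for {
			err := cond()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("condition never met: %w", err)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		err := waitFor(60*time.Second, time.Second, func() error {
			if _, statErr := os.Stat("/var/run/crio/crio.sock"); statErr != nil {
				return errors.New("crio socket not ready")
			}
			return nil
		})
		fmt.Println(err)
	}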
	I0328 01:02:34.287281 1130949 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0328 01:02:32.778280 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .Start
	I0328 01:02:32.778470 1131323 main.go:141] libmachine: (old-k8s-version-986088) Ensuring networks are active...
	I0328 01:02:32.779179 1131323 main.go:141] libmachine: (old-k8s-version-986088) Ensuring network default is active
	I0328 01:02:32.779577 1131323 main.go:141] libmachine: (old-k8s-version-986088) Ensuring network mk-old-k8s-version-986088 is active
	I0328 01:02:32.779982 1131323 main.go:141] libmachine: (old-k8s-version-986088) Getting domain xml...
	I0328 01:02:32.780732 1131323 main.go:141] libmachine: (old-k8s-version-986088) Creating domain...
	I0328 01:02:34.066287 1131323 main.go:141] libmachine: (old-k8s-version-986088) Waiting to get IP...
	I0328 01:02:34.067193 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.067618 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.067684 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.067586 1132067 retry.go:31] will retry after 291.270379ms: waiting for machine to come up
	I0328 01:02:34.360203 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.360690 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.360721 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.360638 1132067 retry.go:31] will retry after 234.968456ms: waiting for machine to come up
	I0328 01:02:34.597291 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.597818 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.597849 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.597750 1132067 retry.go:31] will retry after 382.522593ms: waiting for machine to come up
	I0328 01:02:34.982502 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.983176 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.983205 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.983133 1132067 retry.go:31] will retry after 436.332635ms: waiting for machine to come up
	I0328 01:02:34.288748 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:34.292122 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:34.292516 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:34.292556 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:34.292869 1130949 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0328 01:02:34.298738 1130949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:02:34.313529 1130949 kubeadm.go:877] updating cluster {Name:embed-certs-808809 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-808809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:02:34.313698 1130949 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 01:02:34.313762 1130949 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:02:34.356518 1130949 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0328 01:02:34.356614 1130949 ssh_runner.go:195] Run: which lz4
	I0328 01:02:34.361492 1130949 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0328 01:02:34.366053 1130949 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 01:02:34.366090 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0328 01:02:36.024197 1130949 crio.go:462] duration metric: took 1.662731937s to copy over tarball
	I0328 01:02:36.024287 1130949 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 01:02:35.421623 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:35.422164 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:35.422198 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:35.422135 1132067 retry.go:31] will retry after 700.861268ms: waiting for machine to come up
	I0328 01:02:36.124589 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:36.125001 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:36.125031 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:36.124948 1132067 retry.go:31] will retry after 932.342478ms: waiting for machine to come up
	I0328 01:02:37.058954 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:37.059390 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:37.059424 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:37.059332 1132067 retry.go:31] will retry after 1.163248691s: waiting for machine to come up
	I0328 01:02:38.224574 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:38.225019 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:38.225053 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:38.224959 1132067 retry.go:31] will retry after 1.13372539s: waiting for machine to come up
	I0328 01:02:39.360393 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:39.360953 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:39.360984 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:39.360906 1132067 retry.go:31] will retry after 1.793272671s: waiting for machine to come up
	I0328 01:02:38.420741 1130949 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.396415089s)
	I0328 01:02:38.420788 1130949 crio.go:469] duration metric: took 2.39655808s to extract the tarball
	I0328 01:02:38.420797 1130949 ssh_runner.go:146] rm: /preloaded.tar.lz4
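Editor's note: the preload flow above is: list images via crictl, notice the expected kube-apiserver image is missing, scp the ~400MB preloaded tarball to the guest, extract it into /var with lz4, then delete it. A condensed sketch of the guest-side part, assuming the tarball has already been copied to /preloaded.tar.lz4:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// extractPreload unpacks the preloaded image tarball into /var and removes it,
	// mirroring the "copy over tarball" / "extract the tarball" / rm steps above.
	func extractPreload(path string) error {
		if _, err := os.Stat(path); err != nil {
			return fmt.Errorf("preload tarball missing: %w", err)
		}
		untar := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", path)
		untar.Stdout, untar.Stderr = os.Stdout, os.Stderr
		if err := untar.Run(); err != nil {
			return fmt.Errorf("extract failed: %w", err)
		}
		return os.Remove(path) // would need root for a file owned by root
	}

	func main() {
		fmt.Println(extractPreload("/preloaded.tar.lz4"))
	}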
	I0328 01:02:38.459869 1130949 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:02:38.505999 1130949 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 01:02:38.506030 1130949 cache_images.go:84] Images are preloaded, skipping loading
	I0328 01:02:38.506039 1130949 kubeadm.go:928] updating node { 192.168.72.210 8443 v1.29.3 crio true true} ...
	I0328 01:02:38.506185 1130949 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-808809 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-808809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:02:38.506301 1130949 ssh_runner.go:195] Run: crio config
	I0328 01:02:38.551608 1130949 cni.go:84] Creating CNI manager for ""
	I0328 01:02:38.551633 1130949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:02:38.551646 1130949 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:02:38.551673 1130949 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.210 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-808809 NodeName:embed-certs-808809 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 01:02:38.551813 1130949 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-808809"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 01:02:38.551881 1130949 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 01:02:38.562640 1130949 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:02:38.562732 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:02:38.572870 1130949 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0328 01:02:38.590866 1130949 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:02:38.608302 1130949 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0328 01:02:38.626925 1130949 ssh_runner.go:195] Run: grep 192.168.72.210	control-plane.minikube.internal$ /etc/hosts
	I0328 01:02:38.631111 1130949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.210	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:02:38.644528 1130949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:02:38.785485 1130949 ssh_runner.go:195] Run: sudo systemctl start kubelet
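Editor's note: the control-plane.minikube.internal entry is added with a "filter out any old line, append the new one, copy the temp file back" shell pipeline, so repeated runs never duplicate the entry. A small Go version of the same idempotent update, operating on a scratch file purely for illustration:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry rewrites path so that exactly one line maps name to ip,
	// the same effect as the grep -v / echo / cp pipeline in the log above.
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if line != "" && !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// Against a scratch copy, not the real /etc/hosts.
		fmt.Println(ensureHostsEntry("hosts.test", "192.168.72.210", "control-plane.minikube.internal"))
	}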
	I0328 01:02:38.804087 1130949 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809 for IP: 192.168.72.210
	I0328 01:02:38.804113 1130949 certs.go:194] generating shared ca certs ...
	I0328 01:02:38.804132 1130949 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:02:38.804285 1130949 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:02:38.804326 1130949 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:02:38.804363 1130949 certs.go:256] generating profile certs ...
	I0328 01:02:38.804505 1130949 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/client.key
	I0328 01:02:38.804588 1130949 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/apiserver.key.bdc16448
	I0328 01:02:38.804638 1130949 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/proxy-client.key
	I0328 01:02:38.804798 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:02:38.804829 1130949 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:02:38.804836 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:02:38.804860 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:02:38.804882 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:02:38.804902 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:02:38.804943 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:02:38.805829 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:02:38.864847 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:02:38.899197 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:02:38.926734 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:02:38.958277 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0328 01:02:38.997201 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:02:39.023136 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:02:39.048459 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:02:39.074052 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:02:39.099326 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:02:39.124775 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:02:39.149638 1130949 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:02:39.169169 1130949 ssh_runner.go:195] Run: openssl version
	I0328 01:02:39.175948 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:02:39.188255 1130949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:02:39.194296 1130949 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:02:39.194374 1130949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:02:39.201138 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:02:39.213554 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:02:39.226474 1130949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:02:39.232074 1130949 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:02:39.232149 1130949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:02:39.238733 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:02:39.250983 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:02:39.263746 1130949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:02:39.268967 1130949 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:02:39.269038 1130949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:02:39.275589 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
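The openssl x509 -hash calls above print each certificate's subject hash, and the ln -fs commands create the matching <hash>.0 symlinks under /etc/ssl/certs that OpenSSL-based clients use to look up trusted CAs. Condensed for one certificate (paths from the log):

    # Subject-hash symlink so OpenSSL-based clients trust the minikube CA.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"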
	I0328 01:02:39.287731 1130949 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:02:39.292985 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:02:39.300366 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:02:39.307241 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:02:39.314522 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:02:39.321070 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:02:39.327777 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
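The -checkend 86400 probes above ask OpenSSL whether each control-plane certificate will still be valid 24 hours (86400 seconds) from now; a failing check is treated as a certificate that needs regenerating. For example:

    # Exit 0: still valid in 24h. Exit 1: expires within 24h (or already expired).
    openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "ok for at least 24h" || echo "expiring soon"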
	I0328 01:02:39.334174 1130949 kubeadm.go:391] StartCluster: {Name:embed-certs-808809 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-808809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:02:39.334310 1130949 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:02:39.334367 1130949 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:02:39.376035 1130949 cri.go:89] found id: ""
	I0328 01:02:39.376145 1130949 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:02:39.387349 1130949 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:02:39.387377 1130949 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:02:39.387385 1130949 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:02:39.387469 1130949 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:02:39.397918 1130949 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:02:39.399122 1130949 kubeconfig.go:125] found "embed-certs-808809" server: "https://192.168.72.210:8443"
	I0328 01:02:39.401219 1130949 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:02:39.411475 1130949 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.210
	I0328 01:02:39.411562 1130949 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:02:39.411583 1130949 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:02:39.411650 1130949 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:02:39.449529 1130949 cri.go:89] found id: ""
	I0328 01:02:39.449638 1130949 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:02:39.468553 1130949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:02:39.479489 1130949 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:02:39.479522 1130949 kubeadm.go:156] found existing configuration files:
	
	I0328 01:02:39.479589 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:02:39.489619 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:02:39.489689 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:02:39.499726 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:02:39.509362 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:02:39.509447 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:02:39.519262 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:02:39.528858 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:02:39.528920 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:02:39.538784 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:02:39.548517 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:02:39.548593 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
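Each of the four kubeconfigs under /etc/kubernetes is checked for the expected https://control-plane.minikube.internal:8443 server URL and removed when the check fails; here the files simply do not exist yet, so every grep exits with status 2 and the rm is a no-op. A condensed sketch of the same loop (paths from the log):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done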
	I0328 01:02:39.559931 1130949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:02:39.574178 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:39.706243 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.342144 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.559108 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.636713 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
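Rather than running a full kubeadm init, the restart path re-runs the individual init phases against the freshly copied kubeadm.yaml, in the order shown above. Roughly:

    # Phased restart of an existing control plane, using the versioned kubeadm from the minikube binaries dir.
    cfg=/var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase $phase --config "$cfg"
    done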
	I0328 01:02:40.743171 1130949 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:02:40.743269 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:02:41.243401 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:02:41.743363 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:02:41.776504 1130949 api_server.go:72] duration metric: took 1.033329844s to wait for apiserver process to appear ...
	I0328 01:02:41.776547 1130949 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:02:41.776574 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:41.777140 1130949 api_server.go:269] stopped: https://192.168.72.210:8443/healthz: Get "https://192.168.72.210:8443/healthz": dial tcp 192.168.72.210:8443: connect: connection refused
	I0328 01:02:42.276690 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:41.156898 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:41.157309 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:41.157336 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:41.157263 1132067 retry.go:31] will retry after 1.863775673s: waiting for machine to come up
	I0328 01:02:43.023074 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:43.023470 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:43.023507 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:43.023419 1132067 retry.go:31] will retry after 2.73600503s: waiting for machine to come up
	I0328 01:02:44.743286 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:02:44.743383 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:02:44.743412 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:44.822370 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:02:44.822416 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:02:44.822436 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:44.847406 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:02:44.847462 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:02:45.276899 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:45.281884 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:02:45.281919 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:02:45.777495 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:45.783673 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:02:45.783704 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:02:46.277372 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:46.282281 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 200:
	ok
	I0328 01:02:46.291242 1130949 api_server.go:141] control plane version: v1.29.3
	I0328 01:02:46.291287 1130949 api_server.go:131] duration metric: took 4.514730698s to wait for apiserver health ...
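The healthz loop above polls roughly every 500ms and treats the early 403 (the request is still seen as system:anonymous) and 500 responses (poststarthooks not yet finished) as "not ready", stopping only on a 200 with body "ok". A manual probe of the same endpoint, assuming the client key sits next to the .crt under /var/lib/minikube/certs:

    curl -ks --cert /var/lib/minikube/certs/apiserver-kubelet-client.crt \
         --key  /var/lib/minikube/certs/apiserver-kubelet-client.key \
         https://192.168.72.210:8443/healthz?verbose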
	I0328 01:02:46.291301 1130949 cni.go:84] Creating CNI manager for ""
	I0328 01:02:46.291310 1130949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:02:46.293461 1130949 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:02:46.294971 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:02:46.312955 1130949 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
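The 457 bytes written to /etc/cni/net.d/1-k8s.conflist are minikube's bridge CNI configuration; the log does not show the file contents, but a representative bridge conflist looks like this (illustrative, not necessarily the exact bytes written):

    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF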
	I0328 01:02:46.345653 1130949 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:02:46.355470 1130949 system_pods.go:59] 8 kube-system pods found
	I0328 01:02:46.355506 1130949 system_pods.go:61] "coredns-76f75df574-pr5d8" [90a6f3d5-6f33-4c41-804b-4b20c518aa23] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:02:46.355512 1130949 system_pods.go:61] "etcd-embed-certs-808809" [93b6b8ee-f83f-4848-b2c5-912ec07acd52] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 01:02:46.355519 1130949 system_pods.go:61] "kube-apiserver-embed-certs-808809" [22eb788f-4647-4a07-b5bf-ecdd54c28fcd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 01:02:46.355530 1130949 system_pods.go:61] "kube-controller-manager-embed-certs-808809" [83fecd9f-c0de-4afe-b5b5-7c04bd3adc20] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 01:02:46.355545 1130949 system_pods.go:61] "kube-proxy-qwzpg" [57a814c6-54c8-4fa7-b7d7-bcdd4bbc91d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:02:46.355553 1130949 system_pods.go:61] "kube-scheduler-embed-certs-808809" [0b229d84-43fb-45ee-8d49-39204812d490] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 01:02:46.355568 1130949 system_pods.go:61] "metrics-server-57f55c9bc5-swsxp" [4b20e133-3054-4806-9b7f-44d8c8c35a4d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:02:46.355580 1130949 system_pods.go:61] "storage-provisioner" [59303061-19e3-4aed-8753-804988a2a44e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:02:46.355590 1130949 system_pods.go:74] duration metric: took 9.908316ms to wait for pod list to return data ...
	I0328 01:02:46.355603 1130949 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:02:46.358936 1130949 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:02:46.358987 1130949 node_conditions.go:123] node cpu capacity is 2
	I0328 01:02:46.359006 1130949 node_conditions.go:105] duration metric: took 3.394695ms to run NodePressure ...
	I0328 01:02:46.359054 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:46.686479 1130949 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 01:02:46.692502 1130949 kubeadm.go:733] kubelet initialised
	I0328 01:02:46.692526 1130949 kubeadm.go:734] duration metric: took 6.022393ms waiting for restarted kubelet to initialise ...
	I0328 01:02:46.692534 1130949 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:02:46.699146 1130949 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-pr5d8" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:45.762440 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:45.762891 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:45.762915 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:45.762845 1132067 retry.go:31] will retry after 2.201941476s: waiting for machine to come up
	I0328 01:02:47.966601 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:47.967196 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:47.967237 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:47.967144 1132067 retry.go:31] will retry after 4.122216816s: waiting for machine to come up
	I0328 01:02:48.709890 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:51.207697 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:53.391471 1131600 start.go:364] duration metric: took 2m47.603687739s to acquireMachinesLock for "default-k8s-diff-port-283961"
	I0328 01:02:53.391553 1131600 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:02:53.391565 1131600 fix.go:54] fixHost starting: 
	I0328 01:02:53.391980 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:02:53.392031 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:02:53.409035 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34073
	I0328 01:02:53.409556 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:02:53.410105 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:02:53.410136 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:02:53.410492 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:02:53.410734 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:02:53.410903 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:02:53.412710 1131600 fix.go:112] recreateIfNeeded on default-k8s-diff-port-283961: state=Stopped err=<nil>
	I0328 01:02:53.412739 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	W0328 01:02:53.412927 1131600 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:02:53.414773 1131600 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-283961" ...
	I0328 01:02:52.091210 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.091759 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has current primary IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.091794 1131323 main.go:141] libmachine: (old-k8s-version-986088) Found IP for machine: 192.168.50.174
	I0328 01:02:52.091841 1131323 main.go:141] libmachine: (old-k8s-version-986088) Reserving static IP address...
	I0328 01:02:52.092295 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "old-k8s-version-986088", mac: "52:54:00:f6:94:40", ip: "192.168.50.174"} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.092321 1131323 main.go:141] libmachine: (old-k8s-version-986088) Reserved static IP address: 192.168.50.174
	I0328 01:02:52.092343 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | skip adding static IP to network mk-old-k8s-version-986088 - found existing host DHCP lease matching {name: "old-k8s-version-986088", mac: "52:54:00:f6:94:40", ip: "192.168.50.174"}
	I0328 01:02:52.092356 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | Getting to WaitForSSH function...
	I0328 01:02:52.092373 1131323 main.go:141] libmachine: (old-k8s-version-986088) Waiting for SSH to be available...
	I0328 01:02:52.094682 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.095012 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.095033 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.095158 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | Using SSH client type: external
	I0328 01:02:52.095180 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa (-rw-------)
	I0328 01:02:52.095208 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:02:52.095218 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | About to run SSH command:
	I0328 01:02:52.095232 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | exit 0
	I0328 01:02:52.218494 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | SSH cmd err, output: <nil>: 
	I0328 01:02:52.218983 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetConfigRaw
	I0328 01:02:52.219663 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:52.222349 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.222791 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.222823 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.223191 1131323 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/config.json ...
	I0328 01:02:52.223388 1131323 machine.go:94] provisionDockerMachine start ...
	I0328 01:02:52.223409 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:52.223605 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.225686 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.225999 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.226038 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.226131 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.226341 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.226507 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.226633 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.226802 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.227078 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.227095 1131323 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:02:52.327218 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:02:52.327249 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 01:02:52.327515 1131323 buildroot.go:166] provisioning hostname "old-k8s-version-986088"
	I0328 01:02:52.327542 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 01:02:52.327754 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.330253 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.330661 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.330691 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.330827 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.331048 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.331258 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.331406 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.331593 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.331772 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.331783 1131323 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-986088 && echo "old-k8s-version-986088" | sudo tee /etc/hostname
	I0328 01:02:52.445910 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-986088
	
	I0328 01:02:52.445943 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.449023 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.449320 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.449358 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.449595 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.449810 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.449970 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.450116 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.450310 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.450572 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.450640 1131323 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-986088' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-986088/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-986088' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:02:52.567493 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:02:52.567529 1131323 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:02:52.567559 1131323 buildroot.go:174] setting up certificates
	I0328 01:02:52.567573 1131323 provision.go:84] configureAuth start
	I0328 01:02:52.567587 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 01:02:52.567944 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:52.570860 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.571363 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.571400 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.571547 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.574052 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.574483 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.574517 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.574619 1131323 provision.go:143] copyHostCerts
	I0328 01:02:52.574698 1131323 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:02:52.574710 1131323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:02:52.574778 1131323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:02:52.574894 1131323 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:02:52.574908 1131323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:02:52.574985 1131323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:02:52.575086 1131323 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:02:52.575095 1131323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:02:52.575117 1131323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:02:52.575194 1131323 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-986088 san=[127.0.0.1 192.168.50.174 localhost minikube old-k8s-version-986088]
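configureAuth issues a fresh server certificate for the docker-machine TLS endpoint, signed by the shared minikube CA and carrying the SANs listed above (loopback, the node IP, and the machine names). minikube does this in-process in Go; an equivalent OpenSSL sketch, using the org and SANs from the log, would be:

    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.old-k8s-version-986088" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.50.174,DNS:localhost,DNS:minikube,DNS:old-k8s-version-986088')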
	I0328 01:02:52.688709 1131323 provision.go:177] copyRemoteCerts
	I0328 01:02:52.688776 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:02:52.688809 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.691529 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.691977 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.692024 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.692188 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.692425 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.692620 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.692774 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:52.777200 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0328 01:02:52.808740 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:02:52.836646 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:02:52.862627 1131323 provision.go:87] duration metric: took 295.032419ms to configureAuth
	I0328 01:02:52.862668 1131323 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:02:52.862908 1131323 config.go:182] Loaded profile config "old-k8s-version-986088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0328 01:02:52.863019 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.865838 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.866585 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.866630 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.867271 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.867521 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.867687 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.867826 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.867961 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.868176 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.868194 1131323 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:02:53.154903 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
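The %!s(MISSING) in the command above appears to be a logging artifact (a Go format verb with no matching argument); judging by the output that follows, the intent is to write a CRI-O environment file that marks the service CIDR as an insecure registry and then restart CRI-O, roughly:

    sudo mkdir -p /etc/sysconfig
    printf "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio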
	I0328 01:02:53.154936 1131323 machine.go:97] duration metric: took 931.534047ms to provisionDockerMachine
	I0328 01:02:53.154949 1131323 start.go:293] postStartSetup for "old-k8s-version-986088" (driver="kvm2")
	I0328 01:02:53.154961 1131323 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:02:53.154997 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.155353 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:02:53.155386 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.158072 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.158448 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.158482 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.158612 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.158825 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.158974 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.159102 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:53.243411 1131323 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:02:53.247745 1131323 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:02:53.247769 1131323 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:02:53.247830 1131323 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:02:53.247903 1131323 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:02:53.247990 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:02:53.258574 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:02:53.284249 1131323 start.go:296] duration metric: took 129.2844ms for postStartSetup
	I0328 01:02:53.284300 1131323 fix.go:56] duration metric: took 20.532468979s for fixHost
	I0328 01:02:53.284324 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.287097 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.287505 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.287534 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.287642 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.287874 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.288039 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.288225 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.288439 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:53.288601 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:53.288612 1131323 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0328 01:02:53.391262 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587773.373998758
	
	I0328 01:02:53.391292 1131323 fix.go:216] guest clock: 1711587773.373998758
	I0328 01:02:53.391299 1131323 fix.go:229] Guest: 2024-03-28 01:02:53.373998758 +0000 UTC Remote: 2024-03-28 01:02:53.284304642 +0000 UTC m=+227.998260980 (delta=89.694116ms)
	I0328 01:02:53.391341 1131323 fix.go:200] guest clock delta is within tolerance: 89.694116ms
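For readers reconstructing the "guest clock" check reported above: a minimal Go sketch (an illustration, not minikube's fix.go; names and the 2s tolerance are assumptions) of comparing the VM clock against the host and accepting a small drift.

// guest-clock drift check, sketched
package main

import (
	"fmt"
	"time"
)

// clockDeltaOK returns the absolute guest/host offset and whether it is within tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(90 * time.Millisecond) // pretend the guest runs ~90ms ahead, as in the log
	delta, ok := clockDeltaOK(guest, host, 2*time.Second)
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
}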
	I0328 01:02:53.391346 1131323 start.go:83] releasing machines lock for "old-k8s-version-986088", held for 20.639550927s
	I0328 01:02:53.391377 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.391728 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:53.394421 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.394780 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.394811 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.394932 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.395449 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.395729 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.395828 1131323 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:02:53.395883 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.395985 1131323 ssh_runner.go:195] Run: cat /version.json
	I0328 01:02:53.396014 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.398819 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399010 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399281 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.399320 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399451 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.399550 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.399620 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399640 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.399880 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.399902 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.400065 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.400081 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:53.400245 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.400445 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:53.514453 1131323 ssh_runner.go:195] Run: systemctl --version
	I0328 01:02:53.521123 1131323 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:02:53.678366 1131323 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:02:53.685402 1131323 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:02:53.685473 1131323 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:02:53.702781 1131323 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:02:53.702816 1131323 start.go:494] detecting cgroup driver to use...
	I0328 01:02:53.702900 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:02:53.720343 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:02:53.736749 1131323 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:02:53.736824 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:02:53.761087 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:02:53.779008 1131323 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:02:53.895064 1131323 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:02:54.060741 1131323 docker.go:233] disabling docker service ...
	I0328 01:02:54.060825 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:02:54.079139 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:02:54.093523 1131323 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:02:54.247544 1131323 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:02:54.396392 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:02:54.422612 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:02:54.443759 1131323 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0328 01:02:54.443817 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.459794 1131323 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:02:54.459875 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.472784 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.484963 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.496654 1131323 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:02:54.508382 1131323 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:02:54.518607 1131323 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:02:54.518687 1131323 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:02:54.532356 1131323 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:02:54.544424 1131323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:02:54.685782 1131323 ssh_runner.go:195] Run: sudo systemctl restart crio
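The two sed edits above amount to rewriting the pause image and cgroup manager keys in /etc/crio/crio.conf.d/02-crio.conf before restarting cri-o. A small Go sketch of the same rewrite on an in-memory copy of the file (illustrative only, not minikube's crio.go):

// rewrite pause_image and cgroup_manager, sed-style
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"

	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	fmt.Print(conf)
}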
	I0328 01:02:54.847233 1131323 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:02:54.847314 1131323 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:02:54.853148 1131323 start.go:562] Will wait 60s for crictl version
	I0328 01:02:54.853248 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:02:54.857536 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:02:54.901937 1131323 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:02:54.902082 1131323 ssh_runner.go:195] Run: crio --version
	I0328 01:02:54.935571 1131323 ssh_runner.go:195] Run: crio --version
	I0328 01:02:54.971452 1131323 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0328 01:02:54.972964 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:54.976523 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:54.976985 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:54.977017 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:54.977369 1131323 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0328 01:02:54.982326 1131323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
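The /etc/hosts one-liner above drops any stale host.minikube.internal entry and appends a fresh one for the gateway IP. A hedged Go sketch of the same upsert (function name and layout are assumptions for illustration):

// upsert a hosts entry: remove old "<name>" lines, append "<ip>\t<name>"
package main

import (
	"fmt"
	"strings"
)

func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // equivalent of: grep -v $'\thost.minikube.internal$'
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.50.1\thost.minikube.internal\n"
	fmt.Print(upsertHost(hosts, "192.168.50.1", "host.minikube.internal"))
}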
	I0328 01:02:54.996239 1131323 kubeadm.go:877] updating cluster {Name:old-k8s-version-986088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:02:54.996371 1131323 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0328 01:02:54.996433 1131323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:02:55.045404 1131323 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0328 01:02:55.045483 1131323 ssh_runner.go:195] Run: which lz4
	I0328 01:02:55.050226 1131323 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0328 01:02:55.055182 1131323 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 01:02:55.055221 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0328 01:02:53.416101 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Start
	I0328 01:02:53.416332 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Ensuring networks are active...
	I0328 01:02:53.417021 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Ensuring network default is active
	I0328 01:02:53.417446 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Ensuring network mk-default-k8s-diff-port-283961 is active
	I0328 01:02:53.417857 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Getting domain xml...
	I0328 01:02:53.418555 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Creating domain...
	I0328 01:02:54.777201 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting to get IP...
	I0328 01:02:54.778055 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:54.778563 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:54.778705 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:54.778537 1132240 retry.go:31] will retry after 259.031702ms: waiting for machine to come up
	I0328 01:02:55.039365 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.039926 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.039963 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:55.039860 1132240 retry.go:31] will retry after 254.124553ms: waiting for machine to come up
	I0328 01:02:55.295658 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.296218 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.296265 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:55.296174 1132240 retry.go:31] will retry after 349.637234ms: waiting for machine to come up
	I0328 01:02:55.647590 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.648356 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.648392 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:55.648298 1132240 retry.go:31] will retry after 446.471208ms: waiting for machine to come up
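The DBG lines above show a poll loop with growing delays while the new domain waits for a DHCP lease. A short Go sketch of that retry shape (an assumption for illustration, not libmachine's retry.go; the IP and delays are placeholders):

// retry with growing, lightly jittered delay until the machine reports an IP
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 250 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay += delay / 2 // grow the base delay each round
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.72.10", nil // hypothetical address for the example
	}, 10)
	fmt.Println(ip, err)
}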
	I0328 01:02:53.707811 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:55.708380 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:57.213059 1130949 pod_ready.go:92] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:57.213097 1130949 pod_ready.go:81] duration metric: took 10.513921238s for pod "coredns-76f75df574-pr5d8" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.213113 1130949 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.222308 1130949 pod_ready.go:92] pod "etcd-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:57.222344 1130949 pod_ready.go:81] duration metric: took 9.214056ms for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.222357 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.231530 1130949 pod_ready.go:92] pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:57.231558 1130949 pod_ready.go:81] duration metric: took 9.192864ms for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.231568 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:56.994163 1131323 crio.go:462] duration metric: took 1.943992561s to copy over tarball
	I0328 01:02:56.994252 1131323 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 01:03:00.215115 1131323 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.220825311s)
	I0328 01:03:00.215159 1131323 crio.go:469] duration metric: took 3.22095583s to extract the tarball
	I0328 01:03:00.215171 1131323 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0328 01:03:00.259151 1131323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:00.298446 1131323 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0328 01:03:00.298492 1131323 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0328 01:03:00.298585 1131323 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:00.298601 1131323 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.298613 1131323 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.298644 1131323 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.298662 1131323 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.298698 1131323 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0328 01:03:00.298593 1131323 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.298585 1131323 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.300347 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.300424 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.300470 1131323 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.300474 1131323 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.300637 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.300652 1131323 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0328 01:03:00.300723 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.300793 1131323 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:02:56.095939 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.096463 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.096501 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:56.096412 1132240 retry.go:31] will retry after 490.029649ms: waiting for machine to come up
	I0328 01:02:56.588298 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.588835 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.588868 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:56.588796 1132240 retry.go:31] will retry after 831.356628ms: waiting for machine to come up
	I0328 01:02:57.421917 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:57.422410 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:57.422443 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:57.422353 1132240 retry.go:31] will retry after 1.164764985s: waiting for machine to come up
	I0328 01:02:58.588827 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:58.589183 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:58.589225 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:58.589119 1132240 retry.go:31] will retry after 1.307248783s: waiting for machine to come up
	I0328 01:02:59.897607 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:59.897976 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:59.898008 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:59.897926 1132240 retry.go:31] will retry after 1.560958271s: waiting for machine to come up
	I0328 01:02:58.241179 1130949 pod_ready.go:92] pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:58.241216 1130949 pod_ready.go:81] duration metric: took 1.00963904s for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.241245 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qwzpg" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.249787 1130949 pod_ready.go:92] pod "kube-proxy-qwzpg" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:58.249826 1130949 pod_ready.go:81] duration metric: took 8.571225ms for pod "kube-proxy-qwzpg" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.249840 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.405101 1130949 pod_ready.go:92] pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:58.405130 1130949 pod_ready.go:81] duration metric: took 155.281142ms for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.405141 1130949 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:00.412202 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:02.412688 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:00.499788 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0328 01:03:00.539135 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.541462 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.544184 1131323 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0328 01:03:00.544227 1131323 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0328 01:03:00.544261 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.555720 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.560189 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.562639 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.574105 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.681717 1131323 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0328 01:03:00.681742 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0328 01:03:00.681765 1131323 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.681803 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.682033 1131323 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0328 01:03:00.682076 1131323 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.682115 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.732868 1131323 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0328 01:03:00.732922 1131323 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.732988 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.742680 1131323 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0328 01:03:00.742730 1131323 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0328 01:03:00.742762 1131323 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.742777 1131323 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0328 01:03:00.742805 1131323 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.742770 1131323 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.742817 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.742851 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.742865 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.770435 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.770472 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0328 01:03:00.770567 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.770588 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.770727 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.770760 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.770728 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.882338 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0328 01:03:00.896602 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0328 01:03:00.918814 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0328 01:03:00.918869 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0328 01:03:00.918919 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0328 01:03:00.918968 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0328 01:03:01.186124 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:01.334547 1131323 cache_images.go:92] duration metric: took 1.036031169s to LoadCachedImages
	W0328 01:03:01.334676 1131323 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
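The "needs transfer" decisions above reduce to checking which required images are already present in the runtime and queuing the rest for a load from the local cache. A minimal Go sketch of that selection step (illustrative only, not minikube's cache_images.go):

// report which required image refs are missing from the runtime
package main

import "fmt"

func missingImages(required []string, present map[string]bool) []string {
	var missing []string
	for _, ref := range required {
		if !present[ref] {
			missing = append(missing, ref)
		}
	}
	return missing
}

func main() {
	required := []string{
		"registry.k8s.io/pause:3.2",
		"registry.k8s.io/kube-apiserver:v1.20.0",
	}
	present := map[string]bool{} // nothing preloaded, as in the log above
	for _, ref := range missingImages(required, present) {
		fmt.Printf("%s needs transfer: loading from local cache\n", ref)
	}
}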
	I0328 01:03:01.334694 1131323 kubeadm.go:928] updating node { 192.168.50.174 8443 v1.20.0 crio true true} ...
	I0328 01:03:01.334827 1131323 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-986088 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:03:01.334926 1131323 ssh_runner.go:195] Run: crio config
	I0328 01:03:01.391004 1131323 cni.go:84] Creating CNI manager for ""
	I0328 01:03:01.391034 1131323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:01.391054 1131323 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:03:01.391081 1131323 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.174 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-986088 NodeName:old-k8s-version-986088 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0328 01:03:01.391265 1131323 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-986088"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 01:03:01.391347 1131323 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0328 01:03:01.403684 1131323 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:03:01.403779 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:03:01.415168 1131323 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0328 01:03:01.434329 1131323 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:03:01.456280 1131323 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0328 01:03:01.476625 1131323 ssh_runner.go:195] Run: grep 192.168.50.174	control-plane.minikube.internal$ /etc/hosts
	I0328 01:03:01.480867 1131323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:01.493833 1131323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:01.642273 1131323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:03:01.661857 1131323 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088 for IP: 192.168.50.174
	I0328 01:03:01.661887 1131323 certs.go:194] generating shared ca certs ...
	I0328 01:03:01.661909 1131323 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:01.662115 1131323 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:03:01.662174 1131323 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:03:01.662188 1131323 certs.go:256] generating profile certs ...
	I0328 01:03:01.662324 1131323 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/client.key
	I0328 01:03:01.662399 1131323 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.key.b88fbc7e
	I0328 01:03:01.662447 1131323 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.key
	I0328 01:03:01.662600 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:03:01.662656 1131323 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:03:01.662672 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:03:01.662703 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:03:01.662738 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:03:01.662774 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:03:01.662826 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:01.663831 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:03:01.697171 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:03:01.742118 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:03:01.783263 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:03:01.831682 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0328 01:03:01.878051 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:03:01.915626 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:03:01.942247 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:03:01.969054 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:03:01.998651 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:03:02.024881 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:03:02.051284 1131323 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:03:02.070414 1131323 ssh_runner.go:195] Run: openssl version
	I0328 01:03:02.076635 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:03:02.089288 1131323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:03:02.094260 1131323 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:03:02.094322 1131323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:03:02.100846 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:03:02.114474 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:03:02.126467 1131323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:02.131240 1131323 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:02.131293 1131323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:02.137496 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 01:03:02.150863 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:03:02.163536 1131323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:03:02.168767 1131323 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:03:02.168850 1131323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:03:02.175218 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:03:02.188272 1131323 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:03:02.193348 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:03:02.199969 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:03:02.206424 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:03:02.213530 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:03:02.220136 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:03:02.226502 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0328 01:03:02.232708 1131323 kubeadm.go:391] StartCluster: {Name:old-k8s-version-986088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:03:02.232831 1131323 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:03:02.232890 1131323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:02.280062 1131323 cri.go:89] found id: ""
	I0328 01:03:02.280160 1131323 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:03:02.291968 1131323 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:03:02.292003 1131323 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:03:02.292011 1131323 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:03:02.292072 1131323 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:03:02.304006 1131323 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:03:02.305105 1131323 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-986088" does not appear in /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:03:02.305785 1131323 kubeconfig.go:62] /home/jenkins/minikube-integration/18485-1069254/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-986088" cluster setting kubeconfig missing "old-k8s-version-986088" context setting]
	I0328 01:03:02.306728 1131323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:02.308610 1131323 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:03:02.320212 1131323 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.174
	I0328 01:03:02.320265 1131323 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:03:02.320283 1131323 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:03:02.320356 1131323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:02.366411 1131323 cri.go:89] found id: ""
	I0328 01:03:02.366500 1131323 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:03:02.388351 1131323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:03:02.402621 1131323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:03:02.402652 1131323 kubeadm.go:156] found existing configuration files:
	
	I0328 01:03:02.402718 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:03:02.415559 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:03:02.415633 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:03:02.426666 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:03:02.439497 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:03:02.439558 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:03:02.451040 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:03:02.461780 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:03:02.461876 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:03:02.473295 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:03:02.484762 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:03:02.484841 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:03:02.496304 1131323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:03:02.507634 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:02.641980 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:03.598106 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:03.840026 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:03.970336 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:04.067774 1131323 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:03:04.067911 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:04.568260 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:05.068794 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:01.460535 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:01.461008 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:01.461039 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:01.460962 1132240 retry.go:31] will retry after 1.839531745s: waiting for machine to come up
	I0328 01:03:03.302965 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:03.303445 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:03.303479 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:03.303387 1132240 retry.go:31] will retry after 2.461748315s: waiting for machine to come up
	I0328 01:03:04.413898 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:06.913608 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:05.568716 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:06.068362 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:06.568235 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:07.068696 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:07.567976 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:08.068032 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:08.568586 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:09.068046 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:09.568699 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:10.067967 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:05.767795 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:05.768329 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:05.768360 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:05.768279 1132240 retry.go:31] will retry after 2.321291255s: waiting for machine to come up
	I0328 01:03:08.092644 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:08.093094 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:08.093131 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:08.093046 1132240 retry.go:31] will retry after 4.151205276s: waiting for machine to come up
	I0328 01:03:09.413199 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:11.912234 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:13.671756 1130827 start.go:364] duration metric: took 54.966750689s to acquireMachinesLock for "no-preload-248059"
	I0328 01:03:13.671815 1130827 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:03:13.671823 1130827 fix.go:54] fixHost starting: 
	I0328 01:03:13.672255 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:03:13.672292 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:03:13.689811 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42017
	I0328 01:03:13.690364 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:03:13.690817 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:03:13.690843 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:03:13.691213 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:03:13.691395 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:13.691523 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:03:13.693093 1130827 fix.go:112] recreateIfNeeded on no-preload-248059: state=Stopped err=<nil>
	I0328 01:03:13.693123 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	W0328 01:03:13.693280 1130827 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:03:13.695158 1130827 out.go:177] * Restarting existing kvm2 VM for "no-preload-248059" ...
	I0328 01:03:10.568240 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:11.068028 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:11.568146 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:12.068467 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:12.568820 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:13.068031 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:13.568977 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:14.068050 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:14.567938 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:15.068711 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:12.248769 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.249440 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Found IP for machine: 192.168.39.224
	I0328 01:03:12.249467 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Reserving static IP address...
	I0328 01:03:12.249498 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has current primary IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.249832 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-283961", mac: "52:54:00:c4:df:6f", ip: "192.168.39.224"} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.249872 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | skip adding static IP to network mk-default-k8s-diff-port-283961 - found existing host DHCP lease matching {name: "default-k8s-diff-port-283961", mac: "52:54:00:c4:df:6f", ip: "192.168.39.224"}
	I0328 01:03:12.249888 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Reserved static IP address: 192.168.39.224
	I0328 01:03:12.249908 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for SSH to be available...
	I0328 01:03:12.249921 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Getting to WaitForSSH function...
	I0328 01:03:12.252053 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.252487 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.252521 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.252646 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Using SSH client type: external
	I0328 01:03:12.252677 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa (-rw-------)
	I0328 01:03:12.252709 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.224 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:03:12.252731 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | About to run SSH command:
	I0328 01:03:12.252750 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | exit 0
	I0328 01:03:12.378419 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | SSH cmd err, output: <nil>: 
	I0328 01:03:12.378866 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetConfigRaw
	I0328 01:03:12.379659 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:12.382631 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.382997 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.383023 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.383276 1131600 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/config.json ...
	I0328 01:03:12.383534 1131600 machine.go:94] provisionDockerMachine start ...
	I0328 01:03:12.383567 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:12.383805 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.386472 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.386839 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.386870 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.387035 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.387240 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.387399 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.387577 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.387729 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:12.387931 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:12.387943 1131600 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:03:12.499608 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:03:12.499644 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetMachineName
	I0328 01:03:12.499930 1131600 buildroot.go:166] provisioning hostname "default-k8s-diff-port-283961"
	I0328 01:03:12.499962 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetMachineName
	I0328 01:03:12.500154 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.502737 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.503089 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.503120 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.503295 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.503516 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.503725 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.503892 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.504093 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:12.504271 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:12.504285 1131600 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-283961 && echo "default-k8s-diff-port-283961" | sudo tee /etc/hostname
	I0328 01:03:12.625590 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-283961
	
	I0328 01:03:12.625624 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.628570 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.628883 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.628968 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.629143 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.629397 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.629627 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.629825 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.630008 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:12.630191 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:12.630210 1131600 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-283961' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-283961/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-283961' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:03:12.744240 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:03:12.744280 1131600 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:03:12.744327 1131600 buildroot.go:174] setting up certificates
	I0328 01:03:12.744342 1131600 provision.go:84] configureAuth start
	I0328 01:03:12.744361 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetMachineName
	I0328 01:03:12.744722 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:12.747139 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.747448 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.747478 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.747582 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.749705 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.749964 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.749995 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.750125 1131600 provision.go:143] copyHostCerts
	I0328 01:03:12.750203 1131600 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:03:12.750217 1131600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:03:12.750323 1131600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:03:12.750435 1131600 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:03:12.750446 1131600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:03:12.750479 1131600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:03:12.750557 1131600 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:03:12.750567 1131600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:03:12.750599 1131600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:03:12.750670 1131600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-283961 san=[127.0.0.1 192.168.39.224 default-k8s-diff-port-283961 localhost minikube]
	I0328 01:03:12.963182 1131600 provision.go:177] copyRemoteCerts
	I0328 01:03:12.963265 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:03:12.963313 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.965946 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.966177 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.966207 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.966347 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.966573 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.966773 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.966934 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.057477 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:03:13.083706 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0328 01:03:13.109167 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:03:13.136835 1131600 provision.go:87] duration metric: took 392.475069ms to configureAuth
	I0328 01:03:13.136867 1131600 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:03:13.137048 1131600 config.go:182] Loaded profile config "default-k8s-diff-port-283961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:03:13.137131 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.139508 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.139761 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.139792 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.139959 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.140170 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.140343 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.140502 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.140685 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:13.140873 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:13.140897 1131600 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:03:13.422372 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:03:13.422405 1131600 machine.go:97] duration metric: took 1.038857021s to provisionDockerMachine
	I0328 01:03:13.422418 1131600 start.go:293] postStartSetup for "default-k8s-diff-port-283961" (driver="kvm2")
	I0328 01:03:13.422428 1131600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:03:13.422456 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.422788 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:03:13.422819 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.425539 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.425865 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.425894 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.426023 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.426225 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.426407 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.426577 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.511874 1131600 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:03:13.516643 1131600 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:03:13.516673 1131600 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:03:13.516749 1131600 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:03:13.516846 1131600 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:03:13.516969 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:03:13.529004 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:13.557244 1131600 start.go:296] duration metric: took 134.810243ms for postStartSetup
	I0328 01:03:13.557289 1131600 fix.go:56] duration metric: took 20.165726422s for fixHost
	I0328 01:03:13.557313 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.560216 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.560585 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.560623 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.560803 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.561050 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.561188 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.561303 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.561552 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:13.561742 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:13.561757 1131600 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 01:03:13.671545 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587793.617322674
	
	I0328 01:03:13.671570 1131600 fix.go:216] guest clock: 1711587793.617322674
	I0328 01:03:13.671578 1131600 fix.go:229] Guest: 2024-03-28 01:03:13.617322674 +0000 UTC Remote: 2024-03-28 01:03:13.55729386 +0000 UTC m=+187.934897846 (delta=60.028814ms)
	I0328 01:03:13.671632 1131600 fix.go:200] guest clock delta is within tolerance: 60.028814ms
	I0328 01:03:13.671642 1131600 start.go:83] releasing machines lock for "default-k8s-diff-port-283961", held for 20.280118311s
	I0328 01:03:13.671673 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.671976 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:13.674978 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.675384 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.675436 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.675562 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.676167 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.676337 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.676436 1131600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:03:13.676501 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.676557 1131600 ssh_runner.go:195] Run: cat /version.json
	I0328 01:03:13.676578 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.679418 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679452 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679758 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.679785 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679813 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.679832 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679986 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.680089 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.680190 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.680255 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.680345 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.680410 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.680517 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.680608 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.759826 1131600 ssh_runner.go:195] Run: systemctl --version
	I0328 01:03:13.796647 1131600 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:03:13.947036 1131600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:03:13.954165 1131600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:03:13.954265 1131600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:03:13.973503 1131600 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:03:13.973538 1131600 start.go:494] detecting cgroup driver to use...
	I0328 01:03:13.973629 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:03:13.997675 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:03:14.015349 1131600 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:03:14.015421 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:03:14.031099 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:03:14.046446 1131600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:03:14.186993 1131600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:03:14.351164 1131600 docker.go:233] disabling docker service ...
	I0328 01:03:14.351232 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:03:14.370629 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:03:14.387837 1131600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:03:14.544060 1131600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:03:14.707699 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:03:14.725658 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:03:14.746063 1131600 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 01:03:14.746141 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.759244 1131600 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:03:14.759317 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.773015 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.786810 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.807101 1131600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:03:14.821013 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.834181 1131600 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.861163 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.874274 1131600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:03:14.885890 1131600 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:03:14.885968 1131600 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:03:14.903142 1131600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:03:14.916364 1131600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:15.073343 1131600 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 01:03:15.218406 1131600 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:03:15.218500 1131600 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:03:15.226299 1131600 start.go:562] Will wait 60s for crictl version
	I0328 01:03:15.226373 1131600 ssh_runner.go:195] Run: which crictl
	I0328 01:03:15.232051 1131600 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:03:15.278793 1131600 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:03:15.278903 1131600 ssh_runner.go:195] Run: crio --version
	I0328 01:03:15.313408 1131600 ssh_runner.go:195] Run: crio --version
	I0328 01:03:15.351613 1131600 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0328 01:03:15.353013 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:15.355924 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:15.356306 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:15.356341 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:15.356555 1131600 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0328 01:03:15.361194 1131600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:15.380926 1131600 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:03:15.381043 1131600 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 01:03:15.381099 1131600 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:15.423322 1131600 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0328 01:03:15.423409 1131600 ssh_runner.go:195] Run: which lz4
	I0328 01:03:15.428123 1131600 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0328 01:03:15.433023 1131600 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 01:03:15.433065 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0328 01:03:13.696314 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Start
	I0328 01:03:13.696506 1130827 main.go:141] libmachine: (no-preload-248059) Ensuring networks are active...
	I0328 01:03:13.697344 1130827 main.go:141] libmachine: (no-preload-248059) Ensuring network default is active
	I0328 01:03:13.697668 1130827 main.go:141] libmachine: (no-preload-248059) Ensuring network mk-no-preload-248059 is active
	I0328 01:03:13.698009 1130827 main.go:141] libmachine: (no-preload-248059) Getting domain xml...
	I0328 01:03:13.698805 1130827 main.go:141] libmachine: (no-preload-248059) Creating domain...
	I0328 01:03:14.955922 1130827 main.go:141] libmachine: (no-preload-248059) Waiting to get IP...
	I0328 01:03:14.957088 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:14.957534 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:14.957660 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:14.957533 1132389 retry.go:31] will retry after 222.894093ms: waiting for machine to come up
	I0328 01:03:15.182078 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:15.182541 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:15.182580 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:15.182528 1132389 retry.go:31] will retry after 263.74163ms: waiting for machine to come up
	I0328 01:03:15.448081 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:15.448653 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:15.448684 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:15.448586 1132389 retry.go:31] will retry after 444.066222ms: waiting for machine to come up
	I0328 01:03:15.894141 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:15.894695 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:15.894732 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:15.894650 1132389 retry.go:31] will retry after 469.421771ms: waiting for machine to come up
	I0328 01:03:14.413443 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:16.418789 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:15.568507 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:16.068210 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:16.568761 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:17.067929 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:17.568403 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:18.068454 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:18.568086 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:19.068049 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:19.569020 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:20.068068 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
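The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" runs above are a half-second poll for the apiserver process. A rough local sketch of that loop using os/exec is shown below; it assumes pgrep is on PATH and runs locally, whereas the real calls go through minikube's ssh_runner.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver process")
}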
	I0328 01:03:17.139682 1131600 crio.go:462] duration metric: took 1.71160157s to copy over tarball
	I0328 01:03:17.139764 1131600 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 01:03:19.581198 1131600 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.441406061s)
	I0328 01:03:19.581229 1131600 crio.go:469] duration metric: took 2.441510253s to extract the tarball
	I0328 01:03:19.581241 1131600 ssh_runner.go:146] rm: /preloaded.tar.lz4
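The two steps above copy the preload tarball to the node and unpack it with tar's lz4 filter. A small exec wrapper reproducing the same command locally is sketched below; it assumes tar and lz4 are installed and reuses the paths from the log purely for illustration.

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Mirrors the ssh_runner tar command above, run locally for illustration.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	log.Println("preload tarball extracted")
}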
	I0328 01:03:19.620964 1131600 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:19.666765 1131600 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 01:03:19.666791 1131600 cache_images.go:84] Images are preloaded, skipping loading
	I0328 01:03:19.666802 1131600 kubeadm.go:928] updating node { 192.168.39.224 8444 v1.29.3 crio true true} ...
	I0328 01:03:19.666921 1131600 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-283961 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.224
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:03:19.666987 1131600 ssh_runner.go:195] Run: crio config
	I0328 01:03:19.716082 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:03:19.716106 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:19.716115 1131600 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:03:19.716139 1131600 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.224 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-283961 NodeName:default-k8s-diff-port-283961 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.224"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.224 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 01:03:19.716323 1131600 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.224
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-283961"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.224
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.224"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
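Before the generated kubeadm.yaml above is handed to kubeadm, it can be sanity-checked offline. The sketch below is a minimal example of doing that in Go, assuming gopkg.in/yaml.v3 is available; the struct only mirrors the networking stanza from the ClusterConfiguration document and is not part of minikube itself.

package main

import (
	"fmt"
	"log"
	"net"

	"gopkg.in/yaml.v3"
)

// clusterConfig captures just the fields we want to validate from the
// ClusterConfiguration document printed above.
type clusterConfig struct {
	KubernetesVersion string `yaml:"kubernetesVersion"`
	Networking        struct {
		PodSubnet     string `yaml:"podSubnet"`
		ServiceSubnet string `yaml:"serviceSubnet"`
	} `yaml:"networking"`
}

func main() {
	doc := []byte(`
kubernetesVersion: v1.29.3
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
`)
	var cfg clusterConfig
	if err := yaml.Unmarshal(doc, &cfg); err != nil {
		log.Fatal(err)
	}
	for _, cidr := range []string{cfg.Networking.PodSubnet, cfg.Networking.ServiceSubnet} {
		if _, _, err := net.ParseCIDR(cidr); err != nil {
			log.Fatalf("invalid CIDR %q: %v", cidr, err)
		}
	}
	fmt.Println("config parses; version", cfg.KubernetesVersion)
}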
	I0328 01:03:19.716399 1131600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 01:03:19.727826 1131600 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:03:19.727913 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:03:19.738525 1131600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0328 01:03:19.756732 1131600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:03:19.776665 1131600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0328 01:03:19.795756 1131600 ssh_runner.go:195] Run: grep 192.168.39.224	control-plane.minikube.internal$ /etc/hosts
	I0328 01:03:19.800097 1131600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.224	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:19.813019 1131600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:19.946740 1131600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:03:19.964216 1131600 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961 for IP: 192.168.39.224
	I0328 01:03:19.964244 1131600 certs.go:194] generating shared ca certs ...
	I0328 01:03:19.964262 1131600 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:19.964448 1131600 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:03:19.964524 1131600 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:03:19.964538 1131600 certs.go:256] generating profile certs ...
	I0328 01:03:19.964648 1131600 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/client.key
	I0328 01:03:19.964735 1131600 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/apiserver.key.22bfb146
	I0328 01:03:19.964810 1131600 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/proxy-client.key
	I0328 01:03:19.964956 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:03:19.965008 1131600 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:03:19.965021 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:03:19.965058 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:03:19.965091 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:03:19.965113 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:03:19.965154 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:19.966026 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:03:19.998578 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:03:20.042666 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:03:20.075405 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:03:20.117888 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0328 01:03:20.145160 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:03:20.178207 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:03:20.208610 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:03:20.235356 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:03:20.262434 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:03:20.291315 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:03:20.318034 1131600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:03:20.337627 1131600 ssh_runner.go:195] Run: openssl version
	I0328 01:03:20.344242 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:03:20.360732 1131600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:20.365858 1131600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:20.365926 1131600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:20.372120 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 01:03:20.384554 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:03:20.401731 1131600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:03:20.406945 1131600 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:03:20.407024 1131600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:03:20.414661 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:03:20.427573 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:03:20.439807 1131600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:03:20.445064 1131600 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:03:20.445138 1131600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:03:20.451754 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:03:20.464988 1131600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:03:20.470461 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:03:20.477200 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:03:20.484238 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:03:20.491125 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:03:20.497888 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:03:20.504680 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
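Each of the "openssl x509 ... -checkend 86400" runs above verifies that a control-plane certificate remains valid for at least another 24 hours. The equivalent check in pure Go with crypto/x509 is sketched below; the certificate path is just one of the files checked above and is used only as an example.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Any of the certificates checked above would do; this path is illustrative.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Mirrors `openssl x509 -checkend 86400`: fail if expiry is within 24h.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		log.Fatalf("certificate expires too soon: %v", cert.NotAfter)
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}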
	I0328 01:03:20.511372 1131600 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:03:20.511477 1131600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:03:20.511542 1131600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:20.552247 1131600 cri.go:89] found id: ""
	I0328 01:03:20.552345 1131600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:03:20.564906 1131600 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:03:20.564937 1131600 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:03:20.564944 1131600 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:03:20.565002 1131600 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:03:20.576394 1131600 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:03:20.593699 1131600 kubeconfig.go:125] found "default-k8s-diff-port-283961" server: "https://192.168.39.224:8444"
	I0328 01:03:20.595978 1131600 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:03:20.609519 1131600 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.224
	I0328 01:03:20.609565 1131600 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:03:20.609583 1131600 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:03:20.609651 1131600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:20.651892 1131600 cri.go:89] found id: ""
	I0328 01:03:20.651967 1131600 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:03:20.671895 1131600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:03:16.365505 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:16.366404 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:16.366435 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:16.366360 1132389 retry.go:31] will retry after 488.383898ms: waiting for machine to come up
	I0328 01:03:16.856125 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:16.856727 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:16.856761 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:16.856626 1132389 retry.go:31] will retry after 617.77144ms: waiting for machine to come up
	I0328 01:03:17.476749 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:17.477351 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:17.477386 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:17.477282 1132389 retry.go:31] will retry after 835.951988ms: waiting for machine to come up
	I0328 01:03:18.315387 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:18.315894 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:18.315925 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:18.315848 1132389 retry.go:31] will retry after 1.405695765s: waiting for machine to come up
	I0328 01:03:19.723053 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:19.723559 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:19.723591 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:19.723473 1132389 retry.go:31] will retry after 1.555358462s: waiting for machine to come up
	I0328 01:03:18.913403 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:21.599662 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:20.568464 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:21.068983 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:21.568470 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:22.068772 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:22.568940 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:23.068907 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:23.568272 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.068055 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.568056 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:25.068006 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:20.685320 1131600 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:03:21.187521 1131600 kubeadm.go:156] found existing configuration files:
	
	I0328 01:03:21.187587 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0328 01:03:21.200463 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:03:21.200533 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:03:21.212763 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0328 01:03:21.224344 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:03:21.224419 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:03:21.235869 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0328 01:03:21.245970 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:03:21.246045 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:03:21.258589 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0328 01:03:21.270651 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:03:21.270724 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
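The grep-then-rm sequence above drops any kubeconfig under /etc/kubernetes that does not point at the expected control-plane endpoint, so kubeadm can regenerate it. The same idea is sketched below in Go; the file list and endpoint are taken from the log, the code would need to run as root in practice, and it is not the actual minikube implementation.

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or stale: remove it so kubeadm regenerates it.
			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
				log.Printf("could not remove %s: %v", f, rmErr)
			}
			continue
		}
		log.Printf("%s already targets %s", f, endpoint)
	}
}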
	I0328 01:03:21.283074 1131600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:03:21.295811 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:21.668224 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.046357 1131600 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.378083996s)
	I0328 01:03:23.046401 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.271959 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.353976 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.501611 1131600 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:03:23.501734 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.002619 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.502614 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.547383 1131600 api_server.go:72] duration metric: took 1.045771287s to wait for apiserver process to appear ...
	I0328 01:03:24.547419 1131600 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:03:24.547447 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:24.548081 1131600 api_server.go:269] stopped: https://192.168.39.224:8444/healthz: Get "https://192.168.39.224:8444/healthz": dial tcp 192.168.39.224:8444: connect: connection refused
	I0328 01:03:25.047885 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:21.279945 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:21.590947 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:21.590967 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:21.280358 1132389 retry.go:31] will retry after 1.905587467s: waiting for machine to come up
	I0328 01:03:23.187571 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:23.188214 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:23.188248 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:23.188159 1132389 retry.go:31] will retry after 2.68043246s: waiting for machine to come up
	I0328 01:03:25.871414 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:25.871997 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:25.872030 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:25.871956 1132389 retry.go:31] will retry after 2.689404788s: waiting for machine to come up
	I0328 01:03:23.913816 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:26.413616 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:27.352533 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:03:27.352570 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:03:27.352589 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:27.453408 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:27.453448 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:27.547781 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:27.552703 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:27.552738 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:28.048135 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:28.053291 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:28.053322 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:28.548374 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:28.553141 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:28.553178 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:29.047609 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:29.053027 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 200:
	ok
	I0328 01:03:29.060710 1131600 api_server.go:141] control plane version: v1.29.3
	I0328 01:03:29.060747 1131600 api_server.go:131] duration metric: took 4.513320481s to wait for apiserver health ...
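The healthz probing above keeps hitting https://192.168.39.224:8444/healthz until it returns 200 "ok". A minimal sketch of such a probe is shown below; InsecureSkipVerify is used only because the sketch does not load the cluster CA, whereas the real client authenticates against it.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The sketch skips server verification; the real code trusts the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.39.224:8444/healthz"
	for i := 0; i < 20; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned 200: %s\n", body)
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}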
	I0328 01:03:29.060757 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:03:29.060764 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:29.062763 1131600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:03:25.568927 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:26.068371 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:26.568107 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:27.068037 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:27.567985 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:28.068036 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:28.568843 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:29.068483 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:29.568942 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:30.068849 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:29.064492 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:03:29.089164 1131600 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:03:29.115071 1131600 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:03:29.126819 1131600 system_pods.go:59] 8 kube-system pods found
	I0328 01:03:29.126871 1131600 system_pods.go:61] "coredns-76f75df574-79cdj" [48ffe344-a386-4904-a73e-56e3ce0a8bef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:03:29.126885 1131600 system_pods.go:61] "etcd-default-k8s-diff-port-283961" [1d8fc768-e39c-4c96-bd65-2ae76fc9c6ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 01:03:29.126898 1131600 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-283961" [7c5c9f85-f16f-4248-8d2d-73c1ed2b0128] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 01:03:29.126912 1131600 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-283961" [2e943e7b-5506-4797-9e77-4a33e06056fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 01:03:29.126931 1131600 system_pods.go:61] "kube-proxy-d776v" [c1c86f61-b074-4a51-89e6-17c7b1076748] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:03:29.126944 1131600 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-283961" [8a840579-4145-4b68-ab3f-b1ebd3d63e81] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 01:03:29.126956 1131600 system_pods.go:61] "metrics-server-57f55c9bc5-w4ww4" [6d60f9e6-8ac7-4fad-91dc-61520586666c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:03:29.126968 1131600 system_pods.go:61] "storage-provisioner" [2b5e2e68-7e7c-46ec-bcec-ff9b01cbb8d9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:03:29.126979 1131600 system_pods.go:74] duration metric: took 11.875076ms to wait for pod list to return data ...
	I0328 01:03:29.126992 1131600 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:03:29.130927 1131600 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:03:29.130971 1131600 node_conditions.go:123] node cpu capacity is 2
	I0328 01:03:29.130986 1131600 node_conditions.go:105] duration metric: took 3.984383ms to run NodePressure ...
	I0328 01:03:29.131011 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:29.421513 1131600 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 01:03:29.426043 1131600 kubeadm.go:733] kubelet initialised
	I0328 01:03:29.426104 1131600 kubeadm.go:734] duration metric: took 4.524275ms waiting for restarted kubelet to initialise ...
	I0328 01:03:29.426114 1131600 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:03:29.432378 1131600 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-79cdj" in "kube-system" namespace to be "Ready" ...
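The pod_ready.go lines wait for each system-critical pod to report the Ready condition. A condensed client-go sketch of that check follows; the kubeconfig path, namespace, and pod name are taken from context as examples, and the polling interval and timeout are assumptions rather than minikube's actual values.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod carries the Ready=True condition.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	// Example kubeconfig path; substitute the profile's kubeconfig in practice.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-76f75df574-79cdj", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}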
	I0328 01:03:28.563249 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:28.563778 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:28.563808 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:28.563718 1132389 retry.go:31] will retry after 2.919225956s: waiting for machine to come up
	I0328 01:03:28.913653 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:30.914379 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:31.484584 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.485027 1130827 main.go:141] libmachine: (no-preload-248059) Found IP for machine: 192.168.61.107
	I0328 01:03:31.485048 1130827 main.go:141] libmachine: (no-preload-248059) Reserving static IP address...
	I0328 01:03:31.485065 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has current primary IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.485584 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "no-preload-248059", mac: "52:54:00:58:33:e2", ip: "192.168.61.107"} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.485617 1130827 main.go:141] libmachine: (no-preload-248059) Reserved static IP address: 192.168.61.107
	I0328 01:03:31.485638 1130827 main.go:141] libmachine: (no-preload-248059) DBG | skip adding static IP to network mk-no-preload-248059 - found existing host DHCP lease matching {name: "no-preload-248059", mac: "52:54:00:58:33:e2", ip: "192.168.61.107"}
	I0328 01:03:31.485651 1130827 main.go:141] libmachine: (no-preload-248059) Waiting for SSH to be available...
	I0328 01:03:31.485671 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Getting to WaitForSSH function...
	I0328 01:03:31.487909 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.488293 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.488322 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.488469 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Using SSH client type: external
	I0328 01:03:31.488506 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa (-rw-------)
	I0328 01:03:31.488531 1130827 main.go:141] libmachine: (no-preload-248059) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:03:31.488541 1130827 main.go:141] libmachine: (no-preload-248059) DBG | About to run SSH command:
	I0328 01:03:31.488555 1130827 main.go:141] libmachine: (no-preload-248059) DBG | exit 0
	I0328 01:03:31.618358 1130827 main.go:141] libmachine: (no-preload-248059) DBG | SSH cmd err, output: <nil>: 
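The WaitForSSH step above shells out to the external ssh client with the machine's key to run "exit 0" until the guest accepts connections. Roughly the same probe using golang.org/x/crypto/ssh is sketched below; the host, user, and key path are copied from the log, the external-client flags are omitted, and InsecureIgnoreHostKey stands in for the StrictHostKeyChecking=no option.

package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User: "docker",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Library equivalent of the log's StrictHostKeyChecking=no / UserKnownHostsFile=/dev/null.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", "192.168.61.107:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	if err := session.Run("exit 0"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("SSH is available")
}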
	I0328 01:03:31.618786 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetConfigRaw
	I0328 01:03:31.619494 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:31.622183 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.622555 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.622584 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.622889 1130827 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/config.json ...
	I0328 01:03:31.623120 1130827 machine.go:94] provisionDockerMachine start ...
	I0328 01:03:31.623147 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:31.623400 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:31.626078 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.626432 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.626458 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.626663 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:31.626864 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.627031 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.627179 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:31.627380 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:31.627595 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:31.627611 1130827 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:03:31.739662 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:03:31.739699 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:03:31.740049 1130827 buildroot.go:166] provisioning hostname "no-preload-248059"
	I0328 01:03:31.740086 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:03:31.740421 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:31.743410 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.743776 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.743811 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.744001 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:31.744212 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.744394 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.744515 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:31.744669 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:31.744846 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:31.744860 1130827 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-248059 && echo "no-preload-248059" | sudo tee /etc/hostname
	I0328 01:03:31.869330 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-248059
	
	I0328 01:03:31.869368 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:31.872451 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.872817 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.872868 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.873159 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:31.873405 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.873632 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.873803 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:31.873982 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:31.874220 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:31.874268 1130827 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-248059' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-248059/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-248059' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:03:31.997509 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:03:31.997543 1130827 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:03:31.997565 1130827 buildroot.go:174] setting up certificates
	I0328 01:03:31.997573 1130827 provision.go:84] configureAuth start
	I0328 01:03:31.997583 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:03:31.997870 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:32.000739 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.001127 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.001162 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.001306 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.003571 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.003958 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.003988 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.004162 1130827 provision.go:143] copyHostCerts
	I0328 01:03:32.004246 1130827 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:03:32.004261 1130827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:03:32.004329 1130827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:03:32.004442 1130827 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:03:32.004454 1130827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:03:32.004486 1130827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:03:32.004562 1130827 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:03:32.004572 1130827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:03:32.004602 1130827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:03:32.004667 1130827 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.no-preload-248059 san=[127.0.0.1 192.168.61.107 localhost minikube no-preload-248059]
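The provisioning step above generates a per-machine server certificate signed by minikube's local CA, covering the listed SANs. As a rough illustration only, a self-signed variant of that step can be written with Go's crypto/x509; the key size, validity window, and self-signing (instead of CA signing) are assumptions for brevity and are not how minikube itself does it.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Key and template for a server cert covering the SANs listed in the log.
		key, err := rsa.GenerateKey(rand.Reader, 2048) // assumed key size
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-248059"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // assumed validity window
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.107")},
			DNSNames:     []string{"localhost", "minikube", "no-preload-248059"},
		}
		// Self-signed for brevity; minikube instead signs with its CA key pair.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}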
	I0328 01:03:32.206585 1130827 provision.go:177] copyRemoteCerts
	I0328 01:03:32.206657 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:03:32.206691 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.210170 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.210636 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.210676 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.210979 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.211187 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.211364 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.211564 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:32.305858 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:03:32.337654 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0328 01:03:32.368942 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0328 01:03:32.401639 1130827 provision.go:87] duration metric: took 404.051415ms to configureAuth
	I0328 01:03:32.401669 1130827 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:03:32.401936 1130827 config.go:182] Loaded profile config "no-preload-248059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0328 01:03:32.402025 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.404890 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.405352 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.405387 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.405588 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.405858 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.406091 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.406303 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.406510 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:32.406731 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:32.406759 1130827 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:03:32.697738 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:03:32.697768 1130827 machine.go:97] duration metric: took 1.074632092s to provisionDockerMachine
	I0328 01:03:32.697781 1130827 start.go:293] postStartSetup for "no-preload-248059" (driver="kvm2")
	I0328 01:03:32.697795 1130827 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:03:32.697812 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.698263 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:03:32.698298 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.701020 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.701421 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.701450 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.701609 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.701837 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.702010 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.702188 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:32.790670 1130827 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:03:32.795098 1130827 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:03:32.795131 1130827 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:03:32.795222 1130827 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:03:32.795297 1130827 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:03:32.795402 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:03:32.806276 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:32.832753 1130827 start.go:296] duration metric: took 134.954685ms for postStartSetup
	I0328 01:03:32.832801 1130827 fix.go:56] duration metric: took 19.16097847s for fixHost
	I0328 01:03:32.832825 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.835830 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.836199 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.836237 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.836472 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.836707 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.836949 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.837104 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.837339 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:32.837551 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:32.837563 1130827 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 01:03:32.947440 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587812.922631180
	
	I0328 01:03:32.947477 1130827 fix.go:216] guest clock: 1711587812.922631180
	I0328 01:03:32.947486 1130827 fix.go:229] Guest: 2024-03-28 01:03:32.92263118 +0000 UTC Remote: 2024-03-28 01:03:32.832804811 +0000 UTC m=+356.715929719 (delta=89.826369ms)
	I0328 01:03:32.947507 1130827 fix.go:200] guest clock delta is within tolerance: 89.826369ms
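The guest-clock check above compares the VM's clock against the host-side reading and accepts the drift because it is under tolerance. A minimal Go sketch of that comparison, using the two timestamps from the log (the one-second tolerance is an assumption for illustration):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		guest := time.Unix(0, 1711587812922631180)                                  // guest clock reading from the log
		remote, _ := time.Parse(time.RFC3339Nano, "2024-03-28T01:03:32.832804811Z") // host-side reading
		delta := guest.Sub(remote)
		const tolerance = time.Second // assumed tolerance, for illustration only
		fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < tolerance && delta > -tolerance)
	}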
	I0328 01:03:32.947512 1130827 start.go:83] releasing machines lock for "no-preload-248059", held for 19.275724068s
	I0328 01:03:32.947531 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.947805 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:32.950439 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.950814 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.950844 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.950992 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.951517 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.951707 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.951809 1130827 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:03:32.951852 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.951938 1130827 ssh_runner.go:195] Run: cat /version.json
	I0328 01:03:32.951964 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.954721 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955058 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955135 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.955165 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955314 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.955473 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.955512 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.955538 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955622 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.955698 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.955809 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:32.955859 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.956001 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.956134 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:33.079381 1130827 ssh_runner.go:195] Run: systemctl --version
	I0328 01:03:33.086184 1130827 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:03:33.241799 1130827 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:03:33.248779 1130827 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:03:33.248893 1130827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:03:33.267944 1130827 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:03:33.267977 1130827 start.go:494] detecting cgroup driver to use...
	I0328 01:03:33.268082 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:03:33.286132 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:03:33.301676 1130827 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:03:33.301762 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:03:33.317202 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:03:33.333162 1130827 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:03:33.458738 1130827 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:03:33.608509 1130827 docker.go:233] disabling docker service ...
	I0328 01:03:33.608623 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:03:33.626616 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:03:33.641798 1130827 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:03:33.808865 1130827 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:03:33.962636 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:03:33.978138 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:03:34.002323 1130827 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 01:03:34.002404 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.014483 1130827 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:03:34.014589 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.028647 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.041601 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.054993 1130827 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:03:34.066671 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.079389 1130827 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.099660 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
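Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly this fragment (reconstructed from the commands shown, not captured verbatim in the log):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]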
	I0328 01:03:34.112379 1130827 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:03:34.123050 1130827 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:03:34.123109 1130827 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:03:34.137132 1130827 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:03:34.147092 1130827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:34.282367 1130827 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 01:03:34.436510 1130827 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:03:34.436599 1130827 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:03:34.443019 1130827 start.go:562] Will wait 60s for crictl version
	I0328 01:03:34.443092 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:34.447740 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:03:34.488366 1130827 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:03:34.488469 1130827 ssh_runner.go:195] Run: crio --version
	I0328 01:03:34.520940 1130827 ssh_runner.go:195] Run: crio --version
	I0328 01:03:34.557953 1130827 out.go:177] * Preparing Kubernetes v1.30.0-beta.0 on CRI-O 1.29.1 ...
	I0328 01:03:30.568918 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:31.068097 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:31.568306 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:32.068345 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:32.568773 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:33.068072 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:33.568377 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:34.068141 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:34.568574 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:35.067986 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
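The interleaved 1131323 lines come from a parallel test run that polls for a running kube-apiserver process roughly every 500ms. A minimal local sketch of that kind of poll, running pgrep directly instead of over SSH and with an assumed 4-minute budget:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(4 * time.Minute) // assumed timeout
		for time.Now().Before(deadline) {
			// pgrep exits 0 only when a matching process exists.
			if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				fmt.Println("kube-apiserver is running")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}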
	I0328 01:03:31.439199 1131600 pod_ready.go:102] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:33.439575 1131600 pod_ready.go:102] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:34.559624 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:34.563089 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:34.563549 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:34.563583 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:34.563943 1130827 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0328 01:03:34.570153 1130827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:34.584566 1130827 kubeadm.go:877] updating cluster {Name:no-preload-248059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-248059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:03:34.584723 1130827 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0328 01:03:34.584786 1130827 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:34.620182 1130827 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-beta.0". assuming images are not preloaded.
	I0328 01:03:34.620215 1130827 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-beta.0 registry.k8s.io/kube-controller-manager:v1.30.0-beta.0 registry.k8s.io/kube-scheduler:v1.30.0-beta.0 registry.k8s.io/kube-proxy:v1.30.0-beta.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0328 01:03:34.620297 1130827 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:34.620312 1130827 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:34.620333 1130827 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:34.620301 1130827 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.620374 1130827 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:34.620401 1130827 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0328 01:03:34.620481 1130827 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:34.620319 1130827 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:34.621997 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.622009 1130827 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:34.622009 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:34.622052 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:34.621997 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:34.622115 1130827 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:34.621996 1130827 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:34.622438 1130827 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0328 01:03:34.832761 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.849045 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0328 01:03:34.868049 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:34.883941 1130827 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-beta.0" does not exist at hash "3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8" in container runtime
	I0328 01:03:34.883988 1130827 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.884047 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:34.884972 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:34.887551 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:34.899677 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:34.904772 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:35.045850 1130827 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0328 01:03:35.045906 1130827 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:35.045944 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.045959 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:35.064862 1130827 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" does not exist at hash "c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa" in container runtime
	I0328 01:03:35.064908 1130827 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:35.064959 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.066700 1130827 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" does not exist at hash "f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841" in container runtime
	I0328 01:03:35.066753 1130827 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:35.066820 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.097425 1130827 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" does not exist at hash "746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac" in container runtime
	I0328 01:03:35.097479 1130827 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:35.097546 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.097619 1130827 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0328 01:03:35.097667 1130827 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:35.097715 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.126977 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:35.126980 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.127020 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:35.127084 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:35.127090 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.127082 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:35.127161 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:35.264395 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:35.264499 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0 (exists)
	I0328 01:03:35.264534 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:35.264543 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.264506 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0328 01:03:35.264590 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.264631 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:35.264652 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0328 01:03:35.264516 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:35.264584 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0328 01:03:35.264717 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:35.264728 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:35.264768 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0328 01:03:35.269734 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0328 01:03:35.277344 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0 (exists)
	I0328 01:03:35.277580 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0328 01:03:35.279792 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0 (exists)
	I0328 01:03:35.280423 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0 (exists)
	I0328 01:03:35.535980 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:33.413451 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:35.414017 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:37.913609 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:35.568345 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:36.068227 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:36.568528 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:37.068834 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:37.568407 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:38.068142 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:38.568732 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:39.068094 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:39.568799 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:40.068973 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:35.940767 1131600 pod_ready.go:102] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:37.440919 1131600 pod_ready.go:92] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:37.440949 1131600 pod_ready.go:81] duration metric: took 8.008542386s for pod "coredns-76f75df574-79cdj" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:37.440963 1131600 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:39.452822 1131600 pod_ready.go:102] pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:40.467937 1131600 pod_ready.go:92] pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.467973 1131600 pod_ready.go:81] duration metric: took 3.027001179s for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.467987 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.491342 1131600 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.491373 1131600 pod_ready.go:81] duration metric: took 23.375914ms for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.491387 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.511379 1131600 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.511414 1131600 pod_ready.go:81] duration metric: took 20.018124ms for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.511430 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d776v" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.526689 1131600 pod_ready.go:92] pod "kube-proxy-d776v" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.526724 1131600 pod_ready.go:81] duration metric: took 15.28424ms for pod "kube-proxy-d776v" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.526738 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:37.431690 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0: (2.167073369s)
	I0328 01:03:37.431729 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 from cache
	I0328 01:03:37.431755 1130827 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0328 01:03:37.431764 1130827 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.895749302s)
	I0328 01:03:37.431805 1130827 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0328 01:03:37.431811 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0328 01:03:37.431837 1130827 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:37.431870 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:39.913936 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:42.412656 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:40.568441 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:41.068790 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:41.568919 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:42.068166 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:42.568012 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:43.068027 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:43.568916 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:44.067940 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:44.568074 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:45.068786 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:42.535179 1131600 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:44.034128 1131600 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:44.034164 1131600 pod_ready.go:81] duration metric: took 3.507415677s for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:44.034175 1131600 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace to be "Ready" ...
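The pod_ready lines poll each pod's Ready condition until it reports True or the 4m0s budget runs out. A rough client-go equivalent of such a wait (the kubeconfig path, namespace, and pod name below are illustrative assumptions, not minikube's own code):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // default kubeconfig location
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := clientset.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-57f55c9bc5-w4ww4", metav1.GetOptions{})
				if err != nil {
					return false, nil // transient errors: keep polling
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		fmt.Println("pod ready:", err == nil)
	}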
	I0328 01:03:41.523268 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.091420228s)
	I0328 01:03:41.523305 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0328 01:03:41.523330 1130827 ssh_runner.go:235] Completed: which crictl: (4.091431875s)
	I0328 01:03:41.523345 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:41.523412 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:41.523445 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:41.567312 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0328 01:03:41.567455 1130827 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0328 01:03:44.336954 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0: (2.813479223s)
	I0328 01:03:44.336991 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 from cache
	I0328 01:03:44.336994 1130827 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.769509386s)
	I0328 01:03:44.337020 1130827 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0328 01:03:44.337035 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0328 01:03:44.337080 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0328 01:03:44.414767 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:46.415110 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:45.568662 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:46.068299 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:46.568793 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:47.068929 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:47.568250 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:48.068910 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:48.568138 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:49.068128 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:49.568153 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:50.068075 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:46.042489 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:48.541049 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:50.547355 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:46.297705 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.960592772s)
	I0328 01:03:46.297744 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0328 01:03:46.297776 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:46.297828 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:47.769522 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0: (1.471661236s)
	I0328 01:03:47.769569 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 from cache
	I0328 01:03:47.769602 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:47.769656 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:50.231843 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0: (2.462162757s)
	I0328 01:03:50.231876 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 from cache
	I0328 01:03:50.231902 1130827 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0328 01:03:50.231956 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0328 01:03:48.913184 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:51.412474 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:50.568929 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:51.068812 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:51.568899 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:52.068890 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:52.568751 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:53.068406 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:53.568466 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.068039 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.568745 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:55.068690 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:53.041197 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:55.041977 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:51.188382 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0328 01:03:51.188441 1130827 cache_images.go:123] Successfully loaded all cached images
	I0328 01:03:51.188448 1130827 cache_images.go:92] duration metric: took 16.568214969s to LoadCachedImages
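The LoadCachedImages sequence above copies each cached tarball into the guest and loads it into the CRI-O image store with sudo podman load -i. A minimal Go sketch of that loop follows; the function name and local paths are hypothetical, and minikube actually runs these commands over SSH inside the VM rather than locally.

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

// loadCachedImages is an illustrative stand-in for the step above: every
// tarball under the image cache directory is loaded with `podman load -i`.
func loadCachedImages(cacheDir string) error {
	tarballs, err := filepath.Glob(filepath.Join(cacheDir, "*"))
	if err != nil {
		return err
	}
	for _, tb := range tarballs {
		cmd := exec.Command("sudo", "podman", "load", "-i", tb)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("podman load %s: %v: %s", tb, err, out)
		}
	}
	return nil
}

func main() {
	if err := loadCachedImages("/var/lib/minikube/images"); err != nil {
		fmt.Println(err)
	}
}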
	I0328 01:03:51.188464 1130827 kubeadm.go:928] updating node { 192.168.61.107 8443 v1.30.0-beta.0 crio true true} ...
	I0328 01:03:51.188628 1130827 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-248059 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-248059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:03:51.188710 1130827 ssh_runner.go:195] Run: crio config
	I0328 01:03:51.237071 1130827 cni.go:84] Creating CNI manager for ""
	I0328 01:03:51.237099 1130827 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:51.237109 1130827 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:03:51.237131 1130827 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.107 APIServerPort:8443 KubernetesVersion:v1.30.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-248059 NodeName:no-preload-248059 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 01:03:51.237263 1130827 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-248059"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
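The kubeadm, kubelet and kube-proxy configuration printed above is rendered from the cluster parameters (node name, advertise address, Kubernetes version, pod and service CIDRs) before being copied to /var/tmp/minikube/kubeadm.yaml.new. As a rough illustration of that rendering step, the following Go sketch uses text/template; the struct, field names and the trimmed-down template are assumptions for the example and are not minikube's actual implementation.

package main

import (
	"os"
	"text/template"
)

// clusterParams holds the handful of values substituted into the kubeadm
// config shown above; the struct and template here are illustrative only.
type clusterParams struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := clusterParams{
		AdvertiseAddress:  "192.168.61.107",
		BindPort:          8443,
		NodeName:          "no-preload-248059",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.30.0-beta.0",
	}
	// Render the config to stdout; minikube instead copies the rendered
	// file to /var/tmp/minikube/kubeadm.yaml.new over SSH.
	tmpl := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}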
	
	I0328 01:03:51.237330 1130827 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-beta.0
	I0328 01:03:51.248044 1130827 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:03:51.248113 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:03:51.257854 1130827 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0328 01:03:51.276307 1130827 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0328 01:03:51.294698 1130827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0328 01:03:51.313297 1130827 ssh_runner.go:195] Run: grep 192.168.61.107	control-plane.minikube.internal$ /etc/hosts
	I0328 01:03:51.317668 1130827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:51.330478 1130827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:51.457500 1130827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:03:51.484463 1130827 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059 for IP: 192.168.61.107
	I0328 01:03:51.484493 1130827 certs.go:194] generating shared ca certs ...
	I0328 01:03:51.484518 1130827 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:51.484718 1130827 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:03:51.484768 1130827 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:03:51.484781 1130827 certs.go:256] generating profile certs ...
	I0328 01:03:51.484910 1130827 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/client.key
	I0328 01:03:51.484989 1130827 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/apiserver.key.85d037b2
	I0328 01:03:51.485040 1130827 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/proxy-client.key
	I0328 01:03:51.485196 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:03:51.485243 1130827 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:03:51.485257 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:03:51.485292 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:03:51.485327 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:03:51.485357 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:03:51.485416 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:51.486614 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:03:51.537554 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:03:51.587256 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:03:51.620264 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:03:51.652100 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0328 01:03:51.694388 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:03:51.720913 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:03:51.747141 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0328 01:03:51.776370 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:03:51.803168 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:03:51.831138 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:03:51.857272 1130827 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:03:51.876070 1130827 ssh_runner.go:195] Run: openssl version
	I0328 01:03:51.882197 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:03:51.893560 1130827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:03:51.898293 1130827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:03:51.898361 1130827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:03:51.904549 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:03:51.918175 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:03:51.930387 1130827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:03:51.935610 1130827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:03:51.935691 1130827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:03:51.942127 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:03:51.954252 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:03:51.966727 1130827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:51.971742 1130827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:51.971810 1130827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:51.978082 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
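The three openssl/ln sequences above install each CA certificate under its OpenSSL subject hash (51391683.0, 3ec20f2e.0, b5213941.0) so the system trust store can resolve it by hash. A small Go sketch of one such step follows; the function name is illustrative, but the two commands mirror the ones in the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of a CA certificate and
// creates the "<hash>.0" symlink under /etc/ssl/certs, as the log does with
// `openssl x509 -hash -noout` followed by `ln -fs`.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl x509 -hash: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	return exec.Command("sudo", "ln", "-fs", certPath, link).Run()
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}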
	I0328 01:03:51.992233 1130827 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:03:51.997556 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:03:52.004178 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:03:52.010666 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:03:52.017076 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:03:52.023334 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:03:52.029980 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0328 01:03:52.036395 1130827 kubeadm.go:391] StartCluster: {Name:no-preload-248059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-248059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:03:52.036483 1130827 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:03:52.036539 1130827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:52.080486 1130827 cri.go:89] found id: ""
	I0328 01:03:52.080580 1130827 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:03:52.094552 1130827 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:03:52.094583 1130827 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:03:52.094599 1130827 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:03:52.094650 1130827 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:03:52.107008 1130827 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:03:52.108200 1130827 kubeconfig.go:125] found "no-preload-248059" server: "https://192.168.61.107:8443"
	I0328 01:03:52.110536 1130827 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:03:52.122998 1130827 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.107
	I0328 01:03:52.123044 1130827 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:03:52.123090 1130827 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:03:52.123170 1130827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:52.165568 1130827 cri.go:89] found id: ""
	I0328 01:03:52.165666 1130827 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:03:52.183930 1130827 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:03:52.195188 1130827 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:03:52.195215 1130827 kubeadm.go:156] found existing configuration files:
	
	I0328 01:03:52.195271 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:03:52.205872 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:03:52.205932 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:03:52.216481 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:03:52.226719 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:03:52.226787 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:03:52.238852 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:03:52.250272 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:03:52.250341 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:03:52.262474 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:03:52.273981 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:03:52.274059 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:03:52.286028 1130827 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:03:52.297016 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:52.406981 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.521529 1130827 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.114505514s)
	I0328 01:03:53.521569 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.735728 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.808590 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.931165 1130827 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:03:53.931281 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.432358 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.931653 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.948811 1130827 api_server.go:72] duration metric: took 1.017647613s to wait for apiserver process to appear ...
	I0328 01:03:54.948843 1130827 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:03:54.948871 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:54.949490 1130827 api_server.go:269] stopped: https://192.168.61.107:8443/healthz: Get "https://192.168.61.107:8443/healthz": dial tcp 192.168.61.107:8443: connect: connection refused
	I0328 01:03:55.449050 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:53.413775 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:55.914095 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:57.515811 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:03:57.515852 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:03:57.515872 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:57.564527 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:03:57.564560 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:03:57.949780 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:57.955515 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0328 01:03:57.955565 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0328 01:03:58.449103 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:58.456345 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0328 01:03:58.456384 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0328 01:03:58.949575 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:58.954466 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0328 01:03:58.961213 1130827 api_server.go:141] control plane version: v1.30.0-beta.0
	I0328 01:03:58.961244 1130827 api_server.go:131] duration metric: took 4.012391589s to wait for apiserver health ...
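The healthz wait above polls https://192.168.61.107:8443/healthz roughly every 500ms, treating connection refused, 403 and component-level 500 responses as transient until the endpoint returns 200 "ok". A minimal Go sketch of such a wait loop follows; TLS verification is skipped here for brevity, whereas the real check authenticates against the cluster CA, and the function name is an assumption.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz approximates the api_server.go wait seen above: poll the
// apiserver /healthz endpoint until it returns 200, retrying on errors and
// non-200 statuses until the deadline.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.107:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}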
	I0328 01:03:58.961256 1130827 cni.go:84] Creating CNI manager for ""
	I0328 01:03:58.961265 1130827 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:58.963147 1130827 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:03:55.568378 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:56.068253 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:56.568989 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:57.068709 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:57.569038 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:58.068236 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:58.568386 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:59.068971 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:59.568858 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:00.067964 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:57.043266 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:59.541626 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:58.964446 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:03:58.979425 1130827 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:03:59.042826 1130827 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:03:59.060388 1130827 system_pods.go:59] 8 kube-system pods found
	I0328 01:03:59.060429 1130827 system_pods.go:61] "coredns-7db6d8ff4d-86n4s" [71402ca8-dfa7-4caf-a422-6de9f24bf9dc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:03:59.060439 1130827 system_pods.go:61] "etcd-no-preload-248059" [954b6886-b84f-4d94-bbce-7e520142eb4b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 01:03:59.060451 1130827 system_pods.go:61] "kube-apiserver-no-preload-248059" [2d3caabe-27c2-44e7-8f52-76e03f262e2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 01:03:59.060462 1130827 system_pods.go:61] "kube-controller-manager-no-preload-248059" [30b9f4aa-c9a7-4d91-8e4d-35ad32f40425] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 01:03:59.060472 1130827 system_pods.go:61] "kube-proxy-b9qpb" [7ab4cca8-0ba2-4177-84cd-c6ac045930fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:03:59.060481 1130827 system_pods.go:61] "kube-scheduler-no-preload-248059" [4d9e45e3-d990-40d4-a4be-8384c39eb9b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 01:03:59.060493 1130827 system_pods.go:61] "metrics-server-569cc877fc-cvnrj" [063a47ac-9ceb-4521-9dde-aca02ec5e0d8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:03:59.060508 1130827 system_pods.go:61] "storage-provisioner" [0a0eb2d3-a426-4b76-8009-1a0a0e0312bb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:03:59.060518 1130827 system_pods.go:74] duration metric: took 17.666067ms to wait for pod list to return data ...
	I0328 01:03:59.060533 1130827 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:03:59.065018 1130827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:03:59.065054 1130827 node_conditions.go:123] node cpu capacity is 2
	I0328 01:03:59.065071 1130827 node_conditions.go:105] duration metric: took 4.531253ms to run NodePressure ...
	I0328 01:03:59.065097 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:59.454609 1130827 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 01:03:59.459707 1130827 kubeadm.go:733] kubelet initialised
	I0328 01:03:59.459730 1130827 kubeadm.go:734] duration metric: took 5.09757ms waiting for restarted kubelet to initialise ...
	I0328 01:03:59.459739 1130827 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:03:59.465352 1130827 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.471020 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.471054 1130827 pod_ready.go:81] duration metric: took 5.676291ms for pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.471067 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.471075 1130827 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.476393 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "etcd-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.476421 1130827 pod_ready.go:81] duration metric: took 5.333391ms for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.476430 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "etcd-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.476436 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.485889 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "kube-apiserver-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.485924 1130827 pod_ready.go:81] duration metric: took 9.481204ms for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.485937 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "kube-apiserver-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.485957 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.491064 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.491095 1130827 pod_ready.go:81] duration metric: took 5.125981ms for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.491107 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.491116 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-b9qpb" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.858724 1130827 pod_ready.go:92] pod "kube-proxy-b9qpb" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:59.858753 1130827 pod_ready.go:81] duration metric: took 367.628034ms for pod "kube-proxy-b9qpb" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.858764 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
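The pod_ready.go waits above poll each system-critical pod until its Ready condition becomes True (or report "Ready":"False" every few seconds, as with the metrics-server pods). A compact client-go sketch of that kind of wait follows; the function name is illustrative and the kubeconfig path is an assumption.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout
// expires, mirroring the behaviour of the waits in the log above.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(cs, "kube-system", "kube-scheduler-no-preload-248059", 4*time.Minute))
}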
	I0328 01:03:58.413911 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:00.913297 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:02.913414 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:00.568622 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:01.067943 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:01.567964 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:02.068537 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:02.568772 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:03.068458 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:03.568943 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:04.068085 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:04.068176 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:04.112601 1131323 cri.go:89] found id: ""
	I0328 01:04:04.112631 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.112642 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:04.112650 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:04.112726 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:04.151837 1131323 cri.go:89] found id: ""
	I0328 01:04:04.151873 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.151885 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:04.151894 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:04.151965 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:04.193411 1131323 cri.go:89] found id: ""
	I0328 01:04:04.193451 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.193463 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:04.193473 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:04.193545 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:04.239623 1131323 cri.go:89] found id: ""
	I0328 01:04:04.239652 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.239662 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:04.239673 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:04.239732 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:04.279561 1131323 cri.go:89] found id: ""
	I0328 01:04:04.279600 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.279615 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:04.279627 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:04.279708 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:04.318680 1131323 cri.go:89] found id: ""
	I0328 01:04:04.318710 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.318722 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:04.318731 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:04.318797 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:04.356486 1131323 cri.go:89] found id: ""
	I0328 01:04:04.356514 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.356523 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:04.356530 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:04.356586 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:04.394281 1131323 cri.go:89] found id: ""
	I0328 01:04:04.394319 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.394334 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:04.394348 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:04.394364 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:04.458688 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:04.458729 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:04.501399 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:04.501440 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:04.556183 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:04.556225 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:04.571392 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:04.571427 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:04.709967 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:02.041555 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:04.541464 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:01.866183 1130827 pod_ready.go:102] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:03.868706 1130827 pod_ready.go:102] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:04.915667 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:07.412548 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:07.210550 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:07.224274 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:07.224345 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:07.262604 1131323 cri.go:89] found id: ""
	I0328 01:04:07.262640 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.262665 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:07.262674 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:07.262763 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:07.296868 1131323 cri.go:89] found id: ""
	I0328 01:04:07.296907 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.296918 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:07.296926 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:07.296992 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:07.333110 1131323 cri.go:89] found id: ""
	I0328 01:04:07.333149 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.333162 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:07.333171 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:07.333240 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:07.371138 1131323 cri.go:89] found id: ""
	I0328 01:04:07.371168 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.371186 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:07.371195 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:07.371259 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:07.412197 1131323 cri.go:89] found id: ""
	I0328 01:04:07.412230 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.412242 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:07.412251 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:07.412331 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:07.457021 1131323 cri.go:89] found id: ""
	I0328 01:04:07.457052 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.457070 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:07.457080 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:07.457153 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:07.517996 1131323 cri.go:89] found id: ""
	I0328 01:04:07.518026 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.518034 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:07.518040 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:07.518111 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:07.556829 1131323 cri.go:89] found id: ""
	I0328 01:04:07.556856 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.556865 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:07.556875 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:07.556890 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:07.572234 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:07.572270 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:07.648615 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:07.648641 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:07.648658 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:07.719617 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:07.719665 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:07.764053 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:07.764097 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:10.319480 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:06.542160 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:08.550725 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:06.366150 1130827 pod_ready.go:102] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:07.365200 1130827 pod_ready.go:92] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:04:07.365233 1130827 pod_ready.go:81] duration metric: took 7.506461201s for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:04:07.365256 1130827 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace to be "Ready" ...
	I0328 01:04:09.373694 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:09.413378 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:11.913400 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:10.334347 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:10.335893 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:10.375231 1131323 cri.go:89] found id: ""
	I0328 01:04:10.375263 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.375274 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:10.375281 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:10.375353 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:10.413652 1131323 cri.go:89] found id: ""
	I0328 01:04:10.413706 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.413726 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:10.413736 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:10.413805 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:10.449546 1131323 cri.go:89] found id: ""
	I0328 01:04:10.449588 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.449597 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:10.449604 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:10.449658 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:10.487518 1131323 cri.go:89] found id: ""
	I0328 01:04:10.487556 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.487570 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:10.487579 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:10.487663 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:10.525088 1131323 cri.go:89] found id: ""
	I0328 01:04:10.525124 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.525137 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:10.525146 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:10.525213 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:10.567177 1131323 cri.go:89] found id: ""
	I0328 01:04:10.567209 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.567221 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:10.567231 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:10.567302 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:10.609440 1131323 cri.go:89] found id: ""
	I0328 01:04:10.609474 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.609485 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:10.609492 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:10.609549 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:10.652466 1131323 cri.go:89] found id: ""
	I0328 01:04:10.652502 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.652516 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:10.652529 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:10.652546 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:10.737406 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:10.737451 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:10.786955 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:10.786991 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:10.843072 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:10.843114 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:10.857209 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:10.857244 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:10.950885 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:13.451542 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:13.465833 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:13.465924 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:13.503353 1131323 cri.go:89] found id: ""
	I0328 01:04:13.503386 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.503398 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:13.503407 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:13.503474 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:13.543175 1131323 cri.go:89] found id: ""
	I0328 01:04:13.543208 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.543220 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:13.543229 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:13.543287 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:13.580796 1131323 cri.go:89] found id: ""
	I0328 01:04:13.580829 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.580840 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:13.580848 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:13.580900 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:13.619483 1131323 cri.go:89] found id: ""
	I0328 01:04:13.619516 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.619529 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:13.619539 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:13.619596 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:13.654651 1131323 cri.go:89] found id: ""
	I0328 01:04:13.654683 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.654697 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:13.654705 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:13.654774 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:13.691763 1131323 cri.go:89] found id: ""
	I0328 01:04:13.691794 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.691805 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:13.691813 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:13.691881 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:13.730580 1131323 cri.go:89] found id: ""
	I0328 01:04:13.730614 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.730627 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:13.730635 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:13.730694 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:13.767802 1131323 cri.go:89] found id: ""
	I0328 01:04:13.767834 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.767848 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:13.767860 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:13.767876 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:13.815612 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:13.815653 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:13.870945 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:13.870991 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:13.891456 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:13.891506 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:14.022124 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:14.022163 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:14.022187 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:11.041196 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:13.044490 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:15.541942 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:11.873574 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:13.875251 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:14.412081 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:16.412837 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:16.604087 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:16.618872 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:16.618971 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:16.665628 1131323 cri.go:89] found id: ""
	I0328 01:04:16.665661 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.665675 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:16.665683 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:16.665780 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:16.703727 1131323 cri.go:89] found id: ""
	I0328 01:04:16.703758 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.703768 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:16.703775 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:16.703835 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:16.741425 1131323 cri.go:89] found id: ""
	I0328 01:04:16.741455 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.741464 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:16.741470 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:16.741524 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:16.782333 1131323 cri.go:89] found id: ""
	I0328 01:04:16.782373 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.782387 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:16.782398 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:16.782469 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:16.820321 1131323 cri.go:89] found id: ""
	I0328 01:04:16.820355 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.820364 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:16.820372 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:16.820429 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:16.861091 1131323 cri.go:89] found id: ""
	I0328 01:04:16.861130 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.861144 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:16.861154 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:16.861226 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:16.901347 1131323 cri.go:89] found id: ""
	I0328 01:04:16.901394 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.901408 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:16.901418 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:16.901491 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:16.944027 1131323 cri.go:89] found id: ""
	I0328 01:04:16.944067 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.944080 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:16.944093 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:16.944110 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:16.959104 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:16.959151 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:17.035432 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:17.035464 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:17.035480 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:17.116236 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:17.116276 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:17.159321 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:17.159370 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:19.711326 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:19.726016 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:19.726094 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:19.776639 1131323 cri.go:89] found id: ""
	I0328 01:04:19.776676 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.776690 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:19.776700 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:19.776782 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:19.817849 1131323 cri.go:89] found id: ""
	I0328 01:04:19.817887 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.817897 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:19.817904 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:19.817981 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:19.855055 1131323 cri.go:89] found id: ""
	I0328 01:04:19.855089 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.855102 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:19.855110 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:19.855177 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:19.895296 1131323 cri.go:89] found id: ""
	I0328 01:04:19.895332 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.895346 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:19.895354 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:19.895414 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:19.930936 1131323 cri.go:89] found id: ""
	I0328 01:04:19.930968 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.930980 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:19.930989 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:19.931067 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:19.968573 1131323 cri.go:89] found id: ""
	I0328 01:04:19.968610 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.968623 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:19.968632 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:19.968693 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:20.006130 1131323 cri.go:89] found id: ""
	I0328 01:04:20.006180 1131323 logs.go:276] 0 containers: []
	W0328 01:04:20.006195 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:20.006203 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:20.006304 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:20.043646 1131323 cri.go:89] found id: ""
	I0328 01:04:20.043678 1131323 logs.go:276] 0 containers: []
	W0328 01:04:20.043689 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:20.043701 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:20.043717 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:20.058728 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:20.058761 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:20.136392 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:20.136417 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:20.136431 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:20.214971 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:20.215015 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:20.255002 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:20.255047 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:18.041868 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:20.542175 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:16.372600 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:18.373203 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:20.374228 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:18.913596 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:20.913978 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:22.914777 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:22.810078 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:22.824083 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:22.824169 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:22.862037 1131323 cri.go:89] found id: ""
	I0328 01:04:22.862066 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.862074 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:22.862081 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:22.862141 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:22.901625 1131323 cri.go:89] found id: ""
	I0328 01:04:22.901658 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.901670 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:22.901679 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:22.901752 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:22.938858 1131323 cri.go:89] found id: ""
	I0328 01:04:22.938891 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.938903 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:22.938912 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:22.938983 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:22.978781 1131323 cri.go:89] found id: ""
	I0328 01:04:22.978818 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.978829 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:22.978837 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:22.978910 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:23.016844 1131323 cri.go:89] found id: ""
	I0328 01:04:23.016882 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.016895 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:23.016904 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:23.016975 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:23.058456 1131323 cri.go:89] found id: ""
	I0328 01:04:23.058508 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.058522 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:23.058531 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:23.058604 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:23.099368 1131323 cri.go:89] found id: ""
	I0328 01:04:23.099399 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.099408 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:23.099420 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:23.099492 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:23.135593 1131323 cri.go:89] found id: ""
	I0328 01:04:23.135634 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.135653 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:23.135665 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:23.135679 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:23.191215 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:23.191260 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:23.206849 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:23.206884 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:23.289566 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:23.289596 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:23.289618 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:23.365429 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:23.365480 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:23.042312 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.541788 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:22.872233 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.373908 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.413591 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:27.912983 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.914883 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:25.929336 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:25.929415 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:25.969452 1131323 cri.go:89] found id: ""
	I0328 01:04:25.969485 1131323 logs.go:276] 0 containers: []
	W0328 01:04:25.969497 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:25.969506 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:25.969573 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:26.008978 1131323 cri.go:89] found id: ""
	I0328 01:04:26.009006 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.009015 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:26.009022 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:26.009075 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:26.051110 1131323 cri.go:89] found id: ""
	I0328 01:04:26.051138 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.051146 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:26.051153 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:26.051213 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:26.088231 1131323 cri.go:89] found id: ""
	I0328 01:04:26.088262 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.088271 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:26.088277 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:26.088342 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:26.125741 1131323 cri.go:89] found id: ""
	I0328 01:04:26.125782 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.125794 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:26.125800 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:26.125867 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:26.163367 1131323 cri.go:89] found id: ""
	I0328 01:04:26.163406 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.163417 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:26.163426 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:26.163503 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:26.202302 1131323 cri.go:89] found id: ""
	I0328 01:04:26.202340 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.202355 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:26.202364 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:26.202422 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:26.240880 1131323 cri.go:89] found id: ""
	I0328 01:04:26.240911 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.240921 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:26.240931 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:26.240943 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:26.283151 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:26.283180 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:26.341313 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:26.341350 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:26.356762 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:26.356791 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:26.428033 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:26.428054 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:26.428066 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:29.006332 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:29.020634 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:29.020745 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:29.060812 1131323 cri.go:89] found id: ""
	I0328 01:04:29.060843 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.060852 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:29.060859 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:29.060916 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:29.100110 1131323 cri.go:89] found id: ""
	I0328 01:04:29.100139 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.100149 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:29.100155 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:29.100212 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:29.140345 1131323 cri.go:89] found id: ""
	I0328 01:04:29.140384 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.140396 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:29.140404 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:29.140479 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:29.182415 1131323 cri.go:89] found id: ""
	I0328 01:04:29.182449 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.182459 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:29.182465 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:29.182533 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:29.225177 1131323 cri.go:89] found id: ""
	I0328 01:04:29.225214 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.225225 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:29.225233 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:29.225310 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:29.265437 1131323 cri.go:89] found id: ""
	I0328 01:04:29.265471 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.265485 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:29.265493 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:29.265556 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:29.301578 1131323 cri.go:89] found id: ""
	I0328 01:04:29.301617 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.301630 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:29.301639 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:29.301719 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:29.340816 1131323 cri.go:89] found id: ""
	I0328 01:04:29.340847 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.340856 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:29.340867 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:29.340880 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:29.384658 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:29.384687 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:29.439243 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:29.439285 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:29.456179 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:29.456211 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:29.534878 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:29.534906 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:29.534927 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:28.041463 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:30.042506 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:27.872489 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:30.371109 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:29.913856 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:32.415699 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:32.115798 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:32.130464 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:32.130560 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:32.168846 1131323 cri.go:89] found id: ""
	I0328 01:04:32.168877 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.168887 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:32.168894 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:32.168952 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:32.208590 1131323 cri.go:89] found id: ""
	I0328 01:04:32.208622 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.208632 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:32.208638 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:32.208694 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:32.247323 1131323 cri.go:89] found id: ""
	I0328 01:04:32.247362 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.247375 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:32.247384 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:32.247507 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:32.285260 1131323 cri.go:89] found id: ""
	I0328 01:04:32.285293 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.285312 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:32.285319 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:32.285395 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:32.326678 1131323 cri.go:89] found id: ""
	I0328 01:04:32.326712 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.326725 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:32.326740 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:32.326823 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:32.363375 1131323 cri.go:89] found id: ""
	I0328 01:04:32.363403 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.363412 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:32.363419 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:32.363473 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:32.401410 1131323 cri.go:89] found id: ""
	I0328 01:04:32.401449 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.401462 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:32.401470 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:32.401558 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:32.438645 1131323 cri.go:89] found id: ""
	I0328 01:04:32.438680 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.438691 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:32.438703 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:32.438718 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:32.488743 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:32.488786 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:32.503908 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:32.503944 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:32.577307 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:32.577333 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:32.577350 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:32.657787 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:32.657832 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:35.201151 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:35.215313 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:35.215383 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:35.253467 1131323 cri.go:89] found id: ""
	I0328 01:04:35.253504 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.253515 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:35.253522 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:35.253593 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:35.290218 1131323 cri.go:89] found id: ""
	I0328 01:04:35.290280 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.290292 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:35.290300 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:35.290378 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:35.330714 1131323 cri.go:89] found id: ""
	I0328 01:04:35.330749 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.330757 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:35.330764 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:35.330831 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:32.542071 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:34.544163 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:32.372100 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:34.872293 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:34.913212 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:37.411734 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:35.371524 1131323 cri.go:89] found id: ""
	I0328 01:04:35.371553 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.371570 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:35.371577 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:35.371630 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:35.411610 1131323 cri.go:89] found id: ""
	I0328 01:04:35.411638 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.411646 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:35.411652 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:35.411711 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:35.456709 1131323 cri.go:89] found id: ""
	I0328 01:04:35.456745 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.456758 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:35.456766 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:35.456836 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:35.492688 1131323 cri.go:89] found id: ""
	I0328 01:04:35.492719 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.492729 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:35.492755 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:35.492811 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:35.531205 1131323 cri.go:89] found id: ""
	I0328 01:04:35.531234 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.531243 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:35.531254 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:35.531266 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:35.611803 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:35.611845 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:35.653513 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:35.653551 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:35.708030 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:35.708075 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:35.724542 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:35.724576 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:35.798624 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:38.299312 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:38.314128 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:38.314213 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:38.357728 1131323 cri.go:89] found id: ""
	I0328 01:04:38.357761 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.357779 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:38.357786 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:38.357848 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:38.394512 1131323 cri.go:89] found id: ""
	I0328 01:04:38.394541 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.394549 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:38.394558 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:38.394618 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:38.434353 1131323 cri.go:89] found id: ""
	I0328 01:04:38.434380 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.434391 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:38.434399 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:38.434466 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:38.477662 1131323 cri.go:89] found id: ""
	I0328 01:04:38.477693 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.477703 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:38.477710 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:38.477763 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:38.515014 1131323 cri.go:89] found id: ""
	I0328 01:04:38.515044 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.515053 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:38.515060 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:38.515117 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:38.558865 1131323 cri.go:89] found id: ""
	I0328 01:04:38.558899 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.558911 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:38.558920 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:38.558982 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:38.600261 1131323 cri.go:89] found id: ""
	I0328 01:04:38.600290 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.600299 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:38.600306 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:38.600366 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:38.637131 1131323 cri.go:89] found id: ""
	I0328 01:04:38.637167 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.637179 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:38.637194 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:38.637218 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:38.716032 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:38.716058 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:38.716079 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:38.804534 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:38.804578 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:38.851781 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:38.851820 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:38.910091 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:38.910125 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:37.041273 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:39.541843 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:37.372262 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:39.372555 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:39.912953 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:42.412667 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:41.425801 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:41.441072 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:41.441168 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:41.482934 1131323 cri.go:89] found id: ""
	I0328 01:04:41.482962 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.482974 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:41.482983 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:41.483063 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:41.521762 1131323 cri.go:89] found id: ""
	I0328 01:04:41.521796 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.521810 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:41.521819 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:41.521931 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:41.560814 1131323 cri.go:89] found id: ""
	I0328 01:04:41.560847 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.560857 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:41.560864 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:41.560928 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:41.601158 1131323 cri.go:89] found id: ""
	I0328 01:04:41.601189 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.601199 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:41.601206 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:41.601271 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:41.638760 1131323 cri.go:89] found id: ""
	I0328 01:04:41.638789 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.638799 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:41.638806 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:41.638861 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:41.675235 1131323 cri.go:89] found id: ""
	I0328 01:04:41.675268 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.675278 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:41.675285 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:41.675341 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:41.712918 1131323 cri.go:89] found id: ""
	I0328 01:04:41.712957 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.712972 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:41.712983 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:41.713078 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:41.750552 1131323 cri.go:89] found id: ""
	I0328 01:04:41.750582 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.750591 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:41.750601 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:41.750617 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:41.811163 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:41.811204 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:41.826502 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:41.826547 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:41.900727 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:41.900759 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:41.900777 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:41.981731 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:41.981783 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:44.525845 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:44.542301 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:44.542389 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:44.584907 1131323 cri.go:89] found id: ""
	I0328 01:04:44.584936 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.584945 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:44.584952 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:44.585007 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:44.630465 1131323 cri.go:89] found id: ""
	I0328 01:04:44.630499 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.630511 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:44.630520 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:44.630588 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:44.669095 1131323 cri.go:89] found id: ""
	I0328 01:04:44.669131 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.669143 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:44.669152 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:44.669235 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:44.708445 1131323 cri.go:89] found id: ""
	I0328 01:04:44.708484 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.708495 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:44.708502 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:44.708570 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:44.747706 1131323 cri.go:89] found id: ""
	I0328 01:04:44.747744 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.747755 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:44.747762 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:44.747822 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:44.787768 1131323 cri.go:89] found id: ""
	I0328 01:04:44.787807 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.787821 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:44.787830 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:44.787899 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:44.829018 1131323 cri.go:89] found id: ""
	I0328 01:04:44.829049 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.829059 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:44.829066 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:44.829123 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:44.874334 1131323 cri.go:89] found id: ""
	I0328 01:04:44.874374 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.874383 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:44.874393 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:44.874405 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:44.921577 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:44.921619 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:44.976660 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:44.976713 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:44.991365 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:44.991400 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:45.067595 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:45.067630 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:45.067651 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:42.042736 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:44.543288 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:41.372902 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:43.872925 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:45.873163 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:44.913827 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:47.412342 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:47.647634 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:47.663581 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:47.663687 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:47.702889 1131323 cri.go:89] found id: ""
	I0328 01:04:47.702940 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.702954 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:47.702966 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:47.703043 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:47.744995 1131323 cri.go:89] found id: ""
	I0328 01:04:47.745027 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.745037 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:47.745044 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:47.745103 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:47.785518 1131323 cri.go:89] found id: ""
	I0328 01:04:47.785550 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.785562 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:47.785572 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:47.785645 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:47.831739 1131323 cri.go:89] found id: ""
	I0328 01:04:47.831771 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.831786 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:47.831794 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:47.831867 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:47.871864 1131323 cri.go:89] found id: ""
	I0328 01:04:47.871906 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.871918 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:47.871929 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:47.872008 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:47.907899 1131323 cri.go:89] found id: ""
	I0328 01:04:47.907934 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.907946 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:47.907955 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:47.908022 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:47.946073 1131323 cri.go:89] found id: ""
	I0328 01:04:47.946107 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.946118 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:47.946127 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:47.946223 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:47.986122 1131323 cri.go:89] found id: ""
	I0328 01:04:47.986154 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.986168 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:47.986182 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:47.986198 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:48.057234 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:48.057271 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:48.109881 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:48.109926 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:48.125154 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:48.125189 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:48.208295 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:48.208327 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:48.208345 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:47.041447 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:49.542203 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:48.371275 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:50.372057 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:49.413451 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:51.414465 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:50.785126 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:50.800000 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:50.800078 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:50.839883 1131323 cri.go:89] found id: ""
	I0328 01:04:50.839911 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.839920 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:50.839927 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:50.839983 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:50.879627 1131323 cri.go:89] found id: ""
	I0328 01:04:50.879654 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.879661 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:50.879668 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:50.879734 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:50.918392 1131323 cri.go:89] found id: ""
	I0328 01:04:50.918434 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.918446 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:50.918454 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:50.918517 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:50.957198 1131323 cri.go:89] found id: ""
	I0328 01:04:50.957234 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.957248 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:50.957257 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:50.957328 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:50.997389 1131323 cri.go:89] found id: ""
	I0328 01:04:50.997424 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.997438 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:50.997446 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:50.997513 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:51.040259 1131323 cri.go:89] found id: ""
	I0328 01:04:51.040296 1131323 logs.go:276] 0 containers: []
	W0328 01:04:51.040309 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:51.040318 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:51.040389 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:51.081824 1131323 cri.go:89] found id: ""
	I0328 01:04:51.081858 1131323 logs.go:276] 0 containers: []
	W0328 01:04:51.081868 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:51.081875 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:51.081942 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:51.119742 1131323 cri.go:89] found id: ""
	I0328 01:04:51.119783 1131323 logs.go:276] 0 containers: []
	W0328 01:04:51.119796 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:51.119810 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:51.119836 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:51.173486 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:51.173529 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:51.188532 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:51.188568 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:51.269181 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:51.269207 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:51.269226 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:51.349882 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:51.349936 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:53.893562 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:53.910104 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:53.910186 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:53.951333 1131323 cri.go:89] found id: ""
	I0328 01:04:53.951375 1131323 logs.go:276] 0 containers: []
	W0328 01:04:53.951388 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:53.951397 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:53.951472 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:53.992438 1131323 cri.go:89] found id: ""
	I0328 01:04:53.992474 1131323 logs.go:276] 0 containers: []
	W0328 01:04:53.992486 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:53.992493 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:53.992561 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:54.032934 1131323 cri.go:89] found id: ""
	I0328 01:04:54.032969 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.032982 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:54.032992 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:54.033061 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:54.074670 1131323 cri.go:89] found id: ""
	I0328 01:04:54.074707 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.074777 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:54.074801 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:54.074875 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:54.111527 1131323 cri.go:89] found id: ""
	I0328 01:04:54.111555 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.111566 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:54.111573 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:54.111658 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:54.151401 1131323 cri.go:89] found id: ""
	I0328 01:04:54.151428 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.151437 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:54.151443 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:54.151494 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:54.197997 1131323 cri.go:89] found id: ""
	I0328 01:04:54.198036 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.198048 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:54.198058 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:54.198135 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:54.234016 1131323 cri.go:89] found id: ""
	I0328 01:04:54.234049 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.234058 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:54.234068 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:54.234081 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:54.286118 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:54.286161 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:54.300489 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:54.300541 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:54.376949 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:54.376972 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:54.376988 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:54.463857 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:54.463901 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:52.041517 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:54.042088 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:52.875923 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:55.371823 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:53.912140 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:55.912329 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:57.026395 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:57.041270 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:57.041358 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:57.082380 1131323 cri.go:89] found id: ""
	I0328 01:04:57.082416 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.082428 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:57.082436 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:57.082503 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:57.121835 1131323 cri.go:89] found id: ""
	I0328 01:04:57.121870 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.121885 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:57.121894 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:57.121969 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:57.163688 1131323 cri.go:89] found id: ""
	I0328 01:04:57.163725 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.163737 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:57.163745 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:57.163819 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:57.212628 1131323 cri.go:89] found id: ""
	I0328 01:04:57.212666 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.212693 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:57.212703 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:57.212788 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:57.249196 1131323 cri.go:89] found id: ""
	I0328 01:04:57.249231 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.249244 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:57.249253 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:57.249318 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:57.286996 1131323 cri.go:89] found id: ""
	I0328 01:04:57.287031 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.287040 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:57.287047 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:57.287101 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:57.324523 1131323 cri.go:89] found id: ""
	I0328 01:04:57.324551 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.324560 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:57.324566 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:57.324627 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:57.363946 1131323 cri.go:89] found id: ""
	I0328 01:04:57.363984 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.363998 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:57.364012 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:57.364034 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:57.418300 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:57.418337 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:57.433214 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:57.433242 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:57.508623 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:57.508651 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:57.508665 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:57.586336 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:57.586377 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:00.129903 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:00.146829 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:00.146920 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:00.197823 1131323 cri.go:89] found id: ""
	I0328 01:05:00.197856 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.197865 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:00.197872 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:00.197930 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:00.257523 1131323 cri.go:89] found id: ""
	I0328 01:05:00.257561 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.257575 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:00.257584 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:00.257657 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:00.314511 1131323 cri.go:89] found id: ""
	I0328 01:05:00.314539 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.314549 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:00.314558 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:00.314610 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:56.042295 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:58.541684 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:00.543232 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:57.372451 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:59.372577 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:58.412203 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:00.412880 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:02.913222 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:00.351043 1131323 cri.go:89] found id: ""
	I0328 01:05:00.351076 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.351090 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:00.351098 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:00.351167 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:00.391477 1131323 cri.go:89] found id: ""
	I0328 01:05:00.391507 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.391519 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:00.391525 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:00.391595 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:00.436196 1131323 cri.go:89] found id: ""
	I0328 01:05:00.436230 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.436242 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:00.436249 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:00.436316 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:00.473389 1131323 cri.go:89] found id: ""
	I0328 01:05:00.473428 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.473441 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:00.473450 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:00.473523 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:00.508829 1131323 cri.go:89] found id: ""
	I0328 01:05:00.508866 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.508879 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:00.508901 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:00.508931 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:00.553709 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:00.553741 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:00.612679 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:00.612732 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:00.630908 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:00.630948 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:00.706984 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:00.707016 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:00.707034 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:03.287887 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:03.304679 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:03.304779 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:03.343579 1131323 cri.go:89] found id: ""
	I0328 01:05:03.343608 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.343618 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:03.343625 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:03.343677 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:03.387158 1131323 cri.go:89] found id: ""
	I0328 01:05:03.387192 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.387206 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:03.387224 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:03.387308 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:03.426622 1131323 cri.go:89] found id: ""
	I0328 01:05:03.426653 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.426663 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:03.426670 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:03.426724 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:03.468743 1131323 cri.go:89] found id: ""
	I0328 01:05:03.468780 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.468793 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:03.468801 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:03.468870 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:03.508518 1131323 cri.go:89] found id: ""
	I0328 01:05:03.508554 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.508566 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:03.508575 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:03.508653 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:03.548295 1131323 cri.go:89] found id: ""
	I0328 01:05:03.548331 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.548343 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:03.548353 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:03.548444 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:03.591561 1131323 cri.go:89] found id: ""
	I0328 01:05:03.591594 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.591607 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:03.591615 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:03.591670 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:03.635055 1131323 cri.go:89] found id: ""
	I0328 01:05:03.635086 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.635097 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:03.635109 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:03.635127 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:03.715639 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:03.715683 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:03.755888 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:03.755931 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:03.810128 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:03.810170 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:03.825197 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:03.825227 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:03.908589 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:03.043330 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:05.541544 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:01.372692 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:03.373747 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:05.871945 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:05.413583 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:07.912379 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:06.409060 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:06.424034 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:06.424119 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:06.461827 1131323 cri.go:89] found id: ""
	I0328 01:05:06.461888 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.461902 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:06.461911 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:06.461985 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:06.505006 1131323 cri.go:89] found id: ""
	I0328 01:05:06.505061 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.505078 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:06.505085 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:06.505145 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:06.542000 1131323 cri.go:89] found id: ""
	I0328 01:05:06.542033 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.542044 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:06.542052 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:06.542121 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:06.583725 1131323 cri.go:89] found id: ""
	I0328 01:05:06.583778 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.583800 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:06.583810 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:06.583887 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:06.620457 1131323 cri.go:89] found id: ""
	I0328 01:05:06.620501 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.620516 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:06.620524 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:06.620595 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:06.664380 1131323 cri.go:89] found id: ""
	I0328 01:05:06.664412 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.664425 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:06.664432 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:06.664502 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:06.701799 1131323 cri.go:89] found id: ""
	I0328 01:05:06.701850 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.701862 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:06.701870 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:06.701935 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:06.739899 1131323 cri.go:89] found id: ""
	I0328 01:05:06.739936 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.739948 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:06.739958 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:06.739973 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:06.814373 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:06.814404 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:06.814421 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:06.894331 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:06.894371 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:06.952912 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:06.952979 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:07.011851 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:07.011900 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:09.528068 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:09.545082 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:09.545167 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:09.586944 1131323 cri.go:89] found id: ""
	I0328 01:05:09.586983 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.586996 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:09.587004 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:09.587077 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:09.624153 1131323 cri.go:89] found id: ""
	I0328 01:05:09.624184 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.624192 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:09.624198 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:09.624256 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:09.661125 1131323 cri.go:89] found id: ""
	I0328 01:05:09.661160 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.661172 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:09.661182 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:09.661256 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:09.699865 1131323 cri.go:89] found id: ""
	I0328 01:05:09.699903 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.699916 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:09.699925 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:09.699992 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:09.737925 1131323 cri.go:89] found id: ""
	I0328 01:05:09.737958 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.737967 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:09.737973 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:09.738029 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:09.776906 1131323 cri.go:89] found id: ""
	I0328 01:05:09.776941 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.776950 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:09.776957 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:09.777021 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:09.815767 1131323 cri.go:89] found id: ""
	I0328 01:05:09.815794 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.815803 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:09.815809 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:09.815876 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:09.855880 1131323 cri.go:89] found id: ""
	I0328 01:05:09.855915 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.855928 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:09.855941 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:09.855958 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:09.918339 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:09.918376 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:09.932775 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:09.932810 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:10.011566 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:10.011594 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:10.011610 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:10.096057 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:10.096100 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
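	[annotation] The block above is one full diagnostic pass by the v1.20.0 start attempt (process 1131323): minikube probes CRI-O through crictl for each control-plane component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), finds no containers, and falls back to gathering kubelet, dmesg, CRI-O and container-status logs. The sketch below is a hypothetical stand-alone reproduction of that probe, not minikube's own logs.go/cri.go code; the crictl invocation and the component names are taken verbatim from the log, while the Go wrapper (package, main, error handling) is assumed.
	// Hypothetical sketch (not minikube source): ask the container runtime, via
	// crictl, whether a container exists for each control-plane component, the
	// same check the log above shows ssh_runner executing on the node.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// Same invocation as in the log: sudo crictl ps -a --quiet --name=<component>
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("crictl failed for %s: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				// Mirrors the log's "No container was found matching ..." warnings.
				fmt.Printf("no container was found matching %q\n", name)
			} else {
				fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
			}
		}
	}
	On this run every probe returns an empty ID list, which is why each pass ends in the same fallback log gathering.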
	I0328 01:05:08.041230 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:10.041991 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:07.873367 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:10.372311 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:09.913349 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:12.412259 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:12.641999 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:12.655761 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:12.655843 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:12.697335 1131323 cri.go:89] found id: ""
	I0328 01:05:12.697369 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.697381 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:12.697390 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:12.697453 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:12.736482 1131323 cri.go:89] found id: ""
	I0328 01:05:12.736520 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.736534 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:12.736544 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:12.736617 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:12.771992 1131323 cri.go:89] found id: ""
	I0328 01:05:12.772034 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.772046 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:12.772055 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:12.772125 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:12.810738 1131323 cri.go:89] found id: ""
	I0328 01:05:12.810770 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.810779 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:12.810786 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:12.810837 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:12.848172 1131323 cri.go:89] found id: ""
	I0328 01:05:12.848209 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.848222 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:12.848230 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:12.848310 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:12.884660 1131323 cri.go:89] found id: ""
	I0328 01:05:12.884698 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.884710 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:12.884719 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:12.884794 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:12.926180 1131323 cri.go:89] found id: ""
	I0328 01:05:12.926209 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.926218 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:12.926244 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:12.926303 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:12.966938 1131323 cri.go:89] found id: ""
	I0328 01:05:12.966969 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.966983 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:12.966996 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:12.967014 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:13.018501 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:13.018541 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:13.033140 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:13.033175 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:13.108806 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:13.108832 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:13.108858 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:13.189198 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:13.189241 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:12.541088 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:15.041830 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:12.372413 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:14.372804 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:14.414059 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:16.912343 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:15.737415 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:15.752534 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:15.752614 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:15.789941 1131323 cri.go:89] found id: ""
	I0328 01:05:15.789974 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.789986 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:15.789994 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:15.790107 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:15.827688 1131323 cri.go:89] found id: ""
	I0328 01:05:15.827731 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.827745 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:15.827766 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:15.827831 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:15.867005 1131323 cri.go:89] found id: ""
	I0328 01:05:15.867041 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.867054 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:15.867064 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:15.867149 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:15.909924 1131323 cri.go:89] found id: ""
	I0328 01:05:15.910035 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.910055 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:15.910066 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:15.910139 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:15.950571 1131323 cri.go:89] found id: ""
	I0328 01:05:15.950606 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.950619 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:15.950632 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:15.950707 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:15.992557 1131323 cri.go:89] found id: ""
	I0328 01:05:15.992594 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.992605 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:15.992615 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:15.992687 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:16.032417 1131323 cri.go:89] found id: ""
	I0328 01:05:16.032458 1131323 logs.go:276] 0 containers: []
	W0328 01:05:16.032473 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:16.032482 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:16.032559 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:16.071399 1131323 cri.go:89] found id: ""
	I0328 01:05:16.071433 1131323 logs.go:276] 0 containers: []
	W0328 01:05:16.071445 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:16.071459 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:16.071481 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:16.147078 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:16.147113 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:16.147131 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:16.223828 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:16.223870 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:16.269377 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:16.269409 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:16.318545 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:16.318584 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:18.836044 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:18.851138 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:18.851231 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:18.887223 1131323 cri.go:89] found id: ""
	I0328 01:05:18.887260 1131323 logs.go:276] 0 containers: []
	W0328 01:05:18.887273 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:18.887283 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:18.887354 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:18.928652 1131323 cri.go:89] found id: ""
	I0328 01:05:18.928682 1131323 logs.go:276] 0 containers: []
	W0328 01:05:18.928692 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:18.928698 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:18.928756 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:18.968519 1131323 cri.go:89] found id: ""
	I0328 01:05:18.968555 1131323 logs.go:276] 0 containers: []
	W0328 01:05:18.968567 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:18.968575 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:18.968646 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:19.010939 1131323 cri.go:89] found id: ""
	I0328 01:05:19.010977 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.010991 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:19.010999 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:19.011070 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:19.048723 1131323 cri.go:89] found id: ""
	I0328 01:05:19.048748 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.048758 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:19.048769 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:19.048820 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:19.091761 1131323 cri.go:89] found id: ""
	I0328 01:05:19.091794 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.091803 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:19.091810 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:19.091863 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:19.134017 1131323 cri.go:89] found id: ""
	I0328 01:05:19.134049 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.134059 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:19.134065 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:19.134119 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:19.176070 1131323 cri.go:89] found id: ""
	I0328 01:05:19.176106 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.176118 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:19.176131 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:19.176155 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:19.261546 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:19.261584 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:19.261605 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:19.340271 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:19.340314 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:19.383625 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:19.383676 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:19.441635 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:19.441679 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:17.541876 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:20.040841 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:16.872723 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:19.372916 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:19.414384 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:21.912881 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:21.958362 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:21.974427 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:21.974528 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:22.013099 1131323 cri.go:89] found id: ""
	I0328 01:05:22.013139 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.013152 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:22.013160 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:22.013229 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:22.055558 1131323 cri.go:89] found id: ""
	I0328 01:05:22.055594 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.055604 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:22.055611 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:22.055668 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:22.106836 1131323 cri.go:89] found id: ""
	I0328 01:05:22.106870 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.106879 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:22.106886 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:22.106961 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:22.145135 1131323 cri.go:89] found id: ""
	I0328 01:05:22.145175 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.145189 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:22.145197 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:22.145266 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:22.183879 1131323 cri.go:89] found id: ""
	I0328 01:05:22.183909 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.183919 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:22.183926 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:22.183981 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:22.223087 1131323 cri.go:89] found id: ""
	I0328 01:05:22.223115 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.223124 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:22.223131 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:22.223209 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:22.263232 1131323 cri.go:89] found id: ""
	I0328 01:05:22.263262 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.263272 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:22.263279 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:22.263331 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:22.302919 1131323 cri.go:89] found id: ""
	I0328 01:05:22.302954 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.302967 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:22.302980 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:22.302998 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:22.358550 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:22.358596 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:22.374688 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:22.374722 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:22.453584 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:22.453609 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:22.453624 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:22.540983 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:22.541048 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:25.091773 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:25.107412 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:25.107484 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:25.143917 1131323 cri.go:89] found id: ""
	I0328 01:05:25.143944 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.143953 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:25.143960 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:25.144013 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:25.183615 1131323 cri.go:89] found id: ""
	I0328 01:05:25.183650 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.183659 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:25.183666 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:25.183729 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:25.221125 1131323 cri.go:89] found id: ""
	I0328 01:05:25.221158 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.221167 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:25.221174 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:25.221229 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:25.262023 1131323 cri.go:89] found id: ""
	I0328 01:05:25.262056 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.262065 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:25.262072 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:25.262134 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:25.297919 1131323 cri.go:89] found id: ""
	I0328 01:05:25.297948 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.297957 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:25.297964 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:25.298035 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:22.041977 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:24.542416 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:21.872312 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:23.872885 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:23.914459 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:25.916730 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:25.336582 1131323 cri.go:89] found id: ""
	I0328 01:05:25.336610 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.336620 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:25.336627 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:25.336690 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:25.375554 1131323 cri.go:89] found id: ""
	I0328 01:05:25.375589 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.375600 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:25.375609 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:25.375683 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:25.415941 1131323 cri.go:89] found id: ""
	I0328 01:05:25.415973 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.415984 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:25.415996 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:25.416013 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:25.430168 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:25.430196 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:25.507782 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:25.507805 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:25.507862 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:25.588780 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:25.588824 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:25.634958 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:25.634997 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:28.190651 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:28.205714 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:28.205794 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:28.242015 1131323 cri.go:89] found id: ""
	I0328 01:05:28.242056 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.242067 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:28.242077 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:28.242169 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:28.289132 1131323 cri.go:89] found id: ""
	I0328 01:05:28.289169 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.289182 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:28.289189 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:28.289256 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:28.327001 1131323 cri.go:89] found id: ""
	I0328 01:05:28.327031 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.327040 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:28.327052 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:28.327105 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:28.365474 1131323 cri.go:89] found id: ""
	I0328 01:05:28.365507 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.365516 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:28.365523 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:28.365587 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:28.405494 1131323 cri.go:89] found id: ""
	I0328 01:05:28.405553 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.405567 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:28.405576 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:28.405652 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:28.448464 1131323 cri.go:89] found id: ""
	I0328 01:05:28.448502 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.448512 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:28.448521 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:28.448594 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:28.488143 1131323 cri.go:89] found id: ""
	I0328 01:05:28.488172 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.488182 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:28.488189 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:28.488258 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:28.545977 1131323 cri.go:89] found id: ""
	I0328 01:05:28.546012 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.546024 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:28.546036 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:28.546050 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:28.629955 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:28.630001 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:28.670504 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:28.670536 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:28.722021 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:28.722069 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:28.737274 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:28.737310 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:28.824025 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
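	[annotation] Every "describe nodes" gather step in these passes fails the same way: the bundled v1.20.0 kubectl points at the local apiserver endpoint, and since the crictl probes find no kube-apiserver container, nothing is listening on localhost:8443 and the connection is refused. A minimal, hypothetical check of that condition (not part of the test suite) is sketched below; the address comes from the error text above, the rest is assumed.
	// Hypothetical sketch: verify whether anything is listening on the apiserver's
	// secure port. While this dial fails, every describe-nodes attempt above will
	// keep printing "connection ... refused".
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err) // matches the refused connections in the log
			return
		}
		conn.Close()
		fmt.Println("apiserver port is open; describe-nodes should start succeeding")
	}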
	I0328 01:05:27.041205 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:29.041342 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:26.372037 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:28.373545 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:30.872569 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:28.414921 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:30.912980 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:31.324497 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:31.339715 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:31.339811 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:31.379017 1131323 cri.go:89] found id: ""
	I0328 01:05:31.379050 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.379062 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:31.379072 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:31.379138 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:31.420024 1131323 cri.go:89] found id: ""
	I0328 01:05:31.420055 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.420065 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:31.420071 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:31.420136 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:31.458732 1131323 cri.go:89] found id: ""
	I0328 01:05:31.458764 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.458773 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:31.458779 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:31.458835 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:31.504249 1131323 cri.go:89] found id: ""
	I0328 01:05:31.504280 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.504292 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:31.504300 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:31.504366 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:31.545284 1131323 cri.go:89] found id: ""
	I0328 01:05:31.545316 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.545324 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:31.545331 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:31.545385 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:31.583402 1131323 cri.go:89] found id: ""
	I0328 01:05:31.583434 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.583444 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:31.583453 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:31.583587 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:31.624411 1131323 cri.go:89] found id: ""
	I0328 01:05:31.624449 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.624462 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:31.624471 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:31.624528 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:31.666103 1131323 cri.go:89] found id: ""
	I0328 01:05:31.666144 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.666158 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:31.666173 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:31.666192 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:31.717595 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:31.717636 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:31.731606 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:31.731637 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:31.803302 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:31.803325 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:31.803339 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:31.885552 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:31.885590 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:34.432446 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:34.448002 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:34.448085 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:34.493207 1131323 cri.go:89] found id: ""
	I0328 01:05:34.493246 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.493259 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:34.493268 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:34.493337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:34.541838 1131323 cri.go:89] found id: ""
	I0328 01:05:34.541871 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.541883 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:34.541891 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:34.541956 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:34.582319 1131323 cri.go:89] found id: ""
	I0328 01:05:34.582357 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.582371 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:34.582380 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:34.582458 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:34.618753 1131323 cri.go:89] found id: ""
	I0328 01:05:34.618788 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.618801 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:34.618810 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:34.618882 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:34.656994 1131323 cri.go:89] found id: ""
	I0328 01:05:34.657027 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.657037 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:34.657043 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:34.657114 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:34.695214 1131323 cri.go:89] found id: ""
	I0328 01:05:34.695252 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.695264 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:34.695271 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:34.695337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:34.733688 1131323 cri.go:89] found id: ""
	I0328 01:05:34.733718 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.733731 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:34.733739 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:34.733808 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:34.771697 1131323 cri.go:89] found id: ""
	I0328 01:05:34.771729 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.771744 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:34.771758 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:34.771776 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:34.828190 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:34.828236 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:34.842741 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:34.842776 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:34.918494 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:34.918525 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:34.918541 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:35.012689 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:35.012747 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:31.042633 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:33.541295 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:35.541588 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:33.371991 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:35.872753 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:33.412886 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:35.914065 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:37.574759 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:37.590014 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:37.590128 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:37.626883 1131323 cri.go:89] found id: ""
	I0328 01:05:37.626914 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.626926 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:37.626935 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:37.627005 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:37.665171 1131323 cri.go:89] found id: ""
	I0328 01:05:37.665202 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.665215 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:37.665225 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:37.665294 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:37.702923 1131323 cri.go:89] found id: ""
	I0328 01:05:37.702963 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.702976 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:37.702984 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:37.703064 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:37.741148 1131323 cri.go:89] found id: ""
	I0328 01:05:37.741182 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.741191 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:37.741199 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:37.741269 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:37.782298 1131323 cri.go:89] found id: ""
	I0328 01:05:37.782331 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.782341 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:37.782348 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:37.782407 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:37.819056 1131323 cri.go:89] found id: ""
	I0328 01:05:37.819110 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.819124 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:37.819134 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:37.819215 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:37.862372 1131323 cri.go:89] found id: ""
	I0328 01:05:37.862414 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.862427 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:37.862436 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:37.862507 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:37.899639 1131323 cri.go:89] found id: ""
	I0328 01:05:37.899675 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.899689 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:37.899703 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:37.899721 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:37.978962 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:37.978990 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:37.979007 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:38.058972 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:38.059015 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:38.102975 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:38.103016 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:38.157994 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:38.158035 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:38.041091 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.041892 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:38.371787 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.373131 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:38.412214 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.415412 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:42.912341 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.673425 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:40.690969 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:40.691041 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:40.735552 1131323 cri.go:89] found id: ""
	I0328 01:05:40.735585 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.735594 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:40.735602 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:40.735669 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:40.816611 1131323 cri.go:89] found id: ""
	I0328 01:05:40.816648 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.816661 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:40.816669 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:40.816725 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:40.864093 1131323 cri.go:89] found id: ""
	I0328 01:05:40.864125 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.864138 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:40.864147 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:40.864218 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:40.908781 1131323 cri.go:89] found id: ""
	I0328 01:05:40.908817 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.908829 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:40.908846 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:40.908914 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:40.950330 1131323 cri.go:89] found id: ""
	I0328 01:05:40.950369 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.950382 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:40.950390 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:40.950481 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:40.989983 1131323 cri.go:89] found id: ""
	I0328 01:05:40.990041 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.990054 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:40.990063 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:40.990136 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:41.042428 1131323 cri.go:89] found id: ""
	I0328 01:05:41.042470 1131323 logs.go:276] 0 containers: []
	W0328 01:05:41.042481 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:41.042489 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:41.042560 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:41.089309 1131323 cri.go:89] found id: ""
	I0328 01:05:41.089342 1131323 logs.go:276] 0 containers: []
	W0328 01:05:41.089353 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:41.089363 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:41.089377 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:41.148502 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:41.148547 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:41.163889 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:41.163918 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:41.242825 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:41.242848 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:41.242861 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:41.322658 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:41.322702 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
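One full retry cycle for process 1131323 ends here: a pgrep for a kube-apiserver process, a crictl listing for each control-plane component (all empty), then the kubelet/dmesg/describe-nodes/CRI-O/container-status sweep. A self-contained sketch of the per-component probe, using the same crictl invocation shown in the log (assumes crictl is installed and usable with sudo; this is not minikube's cri.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Same command as in the log: sudo crictl ps -a --quiet --name=<component>.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%-24s crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out)) // --quiet prints one container ID per line
		if len(ids) == 0 {
			fmt.Printf("%-24s no containers found\n", name) // the result throughout this report
		} else {
			fmt.Printf("%-24s %d container(s)\n", name, len(ids))
		}
	}
}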
	I0328 01:05:43.865117 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:43.880642 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:43.880729 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:43.919519 1131323 cri.go:89] found id: ""
	I0328 01:05:43.919550 1131323 logs.go:276] 0 containers: []
	W0328 01:05:43.919559 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:43.919565 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:43.919622 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:43.957906 1131323 cri.go:89] found id: ""
	I0328 01:05:43.957936 1131323 logs.go:276] 0 containers: []
	W0328 01:05:43.957945 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:43.957951 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:43.958008 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:44.001448 1131323 cri.go:89] found id: ""
	I0328 01:05:44.001486 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.001497 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:44.001505 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:44.001573 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:44.039767 1131323 cri.go:89] found id: ""
	I0328 01:05:44.039801 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.039812 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:44.039818 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:44.039871 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:44.079441 1131323 cri.go:89] found id: ""
	I0328 01:05:44.079470 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.079480 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:44.079486 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:44.079541 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:44.116534 1131323 cri.go:89] found id: ""
	I0328 01:05:44.116584 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.116596 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:44.116604 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:44.116670 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:44.163335 1131323 cri.go:89] found id: ""
	I0328 01:05:44.163369 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.163381 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:44.163389 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:44.163457 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:44.201367 1131323 cri.go:89] found id: ""
	I0328 01:05:44.201403 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.201413 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:44.201424 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:44.201442 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:44.257485 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:44.257529 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:44.272489 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:44.272534 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:44.354442 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:44.354477 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:44.354498 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:44.436219 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:44.436262 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
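Every describe-nodes attempt in these cycles fails with "connection to the server localhost:8443 was refused", which is consistent with the empty container listings: no apiserver ever comes up, so nothing is bound to port 8443. Purely as an illustration (not a step minikube itself performs), a quick connectivity probe against that port could look like:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 127.0.0.1:8443 is the apiserver address the failing kubectl calls above try to reach.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port not reachable:", err) // the situation in this report
		return
	}
	conn.Close()
	fmt.Println("something is listening on 127.0.0.1:8443")
}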
	I0328 01:05:42.044443 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:44.541648 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:42.872072 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:44.873552 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:44.913292 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:47.412489 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:46.982131 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:46.998022 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:46.998100 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:47.037167 1131323 cri.go:89] found id: ""
	I0328 01:05:47.037205 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.037217 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:47.037226 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:47.037295 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:47.076175 1131323 cri.go:89] found id: ""
	I0328 01:05:47.076213 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.076226 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:47.076235 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:47.076306 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:47.115193 1131323 cri.go:89] found id: ""
	I0328 01:05:47.115227 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.115237 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:47.115244 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:47.115297 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:47.154942 1131323 cri.go:89] found id: ""
	I0328 01:05:47.154976 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.154989 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:47.154998 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:47.155069 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:47.196571 1131323 cri.go:89] found id: ""
	I0328 01:05:47.196609 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.196622 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:47.196631 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:47.196707 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:47.237572 1131323 cri.go:89] found id: ""
	I0328 01:05:47.237616 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.237625 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:47.237633 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:47.237691 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:47.275208 1131323 cri.go:89] found id: ""
	I0328 01:05:47.275254 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.275265 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:47.275272 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:47.275329 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:47.313515 1131323 cri.go:89] found id: ""
	I0328 01:05:47.313555 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.313568 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:47.313582 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:47.313598 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:47.368993 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:47.369033 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:47.383063 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:47.383097 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:47.460239 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:47.460278 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:47.460298 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:47.538552 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:47.538594 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
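Each cycle opens with "sudo pgrep -xnf kube-apiserver.*minikube.*" to check whether an apiserver process exists at all before falling back to the container listings. A hypothetical standalone version of that check, using the same pgrep flags as the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// -f: match against the full command line, -x: require the whole line to match, -n: newest match only.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		// pgrep exits with a non-zero status when no process matches the pattern.
		fmt.Println("no kube-apiserver process found")
		return
	}
	fmt.Println("kube-apiserver PID:", strings.TrimSpace(string(out)))
}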
	I0328 01:05:50.084960 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:50.101764 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:50.101859 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:50.141457 1131323 cri.go:89] found id: ""
	I0328 01:05:50.141488 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.141497 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:50.141504 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:50.141557 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:50.178184 1131323 cri.go:89] found id: ""
	I0328 01:05:50.178220 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.178254 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:50.178263 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:50.178358 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:50.217908 1131323 cri.go:89] found id: ""
	I0328 01:05:50.217946 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.217959 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:50.217966 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:50.218027 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:50.256029 1131323 cri.go:89] found id: ""
	I0328 01:05:50.256058 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.256067 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:50.256074 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:50.256130 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:50.295054 1131323 cri.go:89] found id: ""
	I0328 01:05:50.295087 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.295100 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:50.295106 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:50.295165 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:47.042338 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:49.542501 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:47.372867 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:49.872948 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:49.913873 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:52.412600 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:50.334695 1131323 cri.go:89] found id: ""
	I0328 01:05:50.336588 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.336605 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:50.336614 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:50.336697 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:50.375968 1131323 cri.go:89] found id: ""
	I0328 01:05:50.376003 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.376013 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:50.376021 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:50.376091 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:50.417146 1131323 cri.go:89] found id: ""
	I0328 01:05:50.417175 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.417184 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:50.417194 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:50.417207 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:50.474090 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:50.474131 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:50.489006 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:50.489040 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:50.566220 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:50.566268 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:50.566286 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:50.645593 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:50.645653 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:53.190872 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:53.205223 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:53.205320 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:53.242396 1131323 cri.go:89] found id: ""
	I0328 01:05:53.242433 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.242445 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:53.242455 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:53.242524 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:53.281237 1131323 cri.go:89] found id: ""
	I0328 01:05:53.281275 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.281288 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:53.281297 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:53.281357 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:53.321239 1131323 cri.go:89] found id: ""
	I0328 01:05:53.321268 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.321287 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:53.321296 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:53.321358 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:53.359240 1131323 cri.go:89] found id: ""
	I0328 01:05:53.359269 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.359278 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:53.359284 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:53.359337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:53.396973 1131323 cri.go:89] found id: ""
	I0328 01:05:53.397008 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.397021 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:53.397030 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:53.397100 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:53.438368 1131323 cri.go:89] found id: ""
	I0328 01:05:53.438400 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.438408 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:53.438415 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:53.438477 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:53.474679 1131323 cri.go:89] found id: ""
	I0328 01:05:53.474708 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.474732 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:53.474742 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:53.474799 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:53.512509 1131323 cri.go:89] found id: ""
	I0328 01:05:53.512547 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.512560 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:53.512579 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:53.512599 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:53.569536 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:53.569580 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:53.584977 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:53.585016 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:53.657865 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:53.657895 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:53.657908 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:53.733158 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:53.733203 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:52.041508 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:54.541663 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:52.373317 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:54.872090 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:54.913464 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:57.413256 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:56.278693 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:56.291870 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:56.291949 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:56.332909 1131323 cri.go:89] found id: ""
	I0328 01:05:56.332943 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.332957 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:56.332965 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:56.333038 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:56.370608 1131323 cri.go:89] found id: ""
	I0328 01:05:56.370638 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.370649 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:56.370657 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:56.370721 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:56.408031 1131323 cri.go:89] found id: ""
	I0328 01:05:56.408068 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.408081 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:56.408100 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:56.408170 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:56.445057 1131323 cri.go:89] found id: ""
	I0328 01:05:56.445092 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.445105 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:56.445113 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:56.445177 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:56.486868 1131323 cri.go:89] found id: ""
	I0328 01:05:56.486898 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.486908 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:56.486914 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:56.486969 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:56.533594 1131323 cri.go:89] found id: ""
	I0328 01:05:56.533622 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.533632 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:56.533638 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:56.533702 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:56.569200 1131323 cri.go:89] found id: ""
	I0328 01:05:56.569237 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.569250 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:56.569258 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:56.569335 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:56.604919 1131323 cri.go:89] found id: ""
	I0328 01:05:56.604955 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.604968 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:56.604982 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:56.605011 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:56.654473 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:56.654513 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:56.671309 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:56.671339 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:56.739516 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:56.739543 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:56.739559 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:56.817445 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:56.817495 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:59.361711 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:59.375672 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:59.375750 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:59.414329 1131323 cri.go:89] found id: ""
	I0328 01:05:59.414360 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.414371 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:59.414379 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:59.414443 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:59.454813 1131323 cri.go:89] found id: ""
	I0328 01:05:59.454846 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.454855 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:59.454862 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:59.454917 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:59.492890 1131323 cri.go:89] found id: ""
	I0328 01:05:59.492924 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.492936 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:59.492946 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:59.493043 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:59.529412 1131323 cri.go:89] found id: ""
	I0328 01:05:59.529443 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.529454 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:59.529464 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:59.529521 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:59.568620 1131323 cri.go:89] found id: ""
	I0328 01:05:59.568655 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.568664 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:59.568671 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:59.568731 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:59.605826 1131323 cri.go:89] found id: ""
	I0328 01:05:59.605861 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.605874 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:59.605883 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:59.605955 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:59.645799 1131323 cri.go:89] found id: ""
	I0328 01:05:59.645833 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.645847 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:59.645856 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:59.645931 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:59.683866 1131323 cri.go:89] found id: ""
	I0328 01:05:59.683903 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.683916 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:59.683929 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:59.683953 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:59.726678 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:59.726711 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:59.779910 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:59.779954 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:59.795743 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:59.795774 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:59.875137 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:59.875162 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:59.875174 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:56.542345 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:58.542599 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:00.543094 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:57.372258 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:59.872483 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:59.912150 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:01.913694 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:02.455212 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:02.468850 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:02.468945 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:02.506347 1131323 cri.go:89] found id: ""
	I0328 01:06:02.506385 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.506397 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:02.506406 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:02.506484 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:02.546056 1131323 cri.go:89] found id: ""
	I0328 01:06:02.546085 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.546096 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:02.546103 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:02.546173 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:02.585343 1131323 cri.go:89] found id: ""
	I0328 01:06:02.585385 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.585398 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:02.585407 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:02.585563 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:02.625380 1131323 cri.go:89] found id: ""
	I0328 01:06:02.625414 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.625423 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:02.625429 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:02.625486 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:02.664653 1131323 cri.go:89] found id: ""
	I0328 01:06:02.664687 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.664701 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:02.664708 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:02.664764 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:02.704468 1131323 cri.go:89] found id: ""
	I0328 01:06:02.704498 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.704511 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:02.704519 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:02.704595 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:02.740969 1131323 cri.go:89] found id: ""
	I0328 01:06:02.740997 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.741007 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:02.741014 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:02.741102 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:02.782113 1131323 cri.go:89] found id: ""
	I0328 01:06:02.782150 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.782163 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:02.782185 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:02.782203 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:02.836804 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:02.836848 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:02.852266 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:02.852299 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:02.929441 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:02.929467 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:02.929484 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:03.008114 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:03.008156 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:03.041919 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:05.542209 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:02.372332 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:04.871689 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:04.413251 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:06.912348 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:05.554291 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:05.570208 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:05.570304 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:05.610887 1131323 cri.go:89] found id: ""
	I0328 01:06:05.610916 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.610926 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:05.610932 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:05.610991 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:05.651561 1131323 cri.go:89] found id: ""
	I0328 01:06:05.651600 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.651610 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:05.651616 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:05.651681 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:05.690801 1131323 cri.go:89] found id: ""
	I0328 01:06:05.690830 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.690843 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:05.690851 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:05.690920 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:05.729098 1131323 cri.go:89] found id: ""
	I0328 01:06:05.729136 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.729146 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:05.729153 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:05.729225 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:05.774461 1131323 cri.go:89] found id: ""
	I0328 01:06:05.774499 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.774520 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:05.774530 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:05.774602 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:05.812135 1131323 cri.go:89] found id: ""
	I0328 01:06:05.812166 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.812180 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:05.812188 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:05.812255 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:05.847744 1131323 cri.go:89] found id: ""
	I0328 01:06:05.847775 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.847786 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:05.847796 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:05.847863 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:05.885600 1131323 cri.go:89] found id: ""
	I0328 01:06:05.885641 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.885656 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:05.885669 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:05.885684 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:05.963837 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:05.963879 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:06.007342 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:06.007381 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:06.062798 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:06.062843 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:06.077547 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:06.077599 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:06.148373 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:08.648791 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:08.664082 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:08.664154 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:08.701746 1131323 cri.go:89] found id: ""
	I0328 01:06:08.701776 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.701789 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:08.701797 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:08.701855 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:08.739035 1131323 cri.go:89] found id: ""
	I0328 01:06:08.739066 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.739076 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:08.739083 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:08.739136 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:08.776128 1131323 cri.go:89] found id: ""
	I0328 01:06:08.776166 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.776180 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:08.776189 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:08.776255 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:08.816136 1131323 cri.go:89] found id: ""
	I0328 01:06:08.816172 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.816187 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:08.816196 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:08.816271 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:08.855675 1131323 cri.go:89] found id: ""
	I0328 01:06:08.855709 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.855722 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:08.855730 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:08.855802 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:08.893161 1131323 cri.go:89] found id: ""
	I0328 01:06:08.893198 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.893212 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:08.893221 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:08.893297 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:08.935498 1131323 cri.go:89] found id: ""
	I0328 01:06:08.935527 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.935540 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:08.935548 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:08.935622 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:08.971622 1131323 cri.go:89] found id: ""
	I0328 01:06:08.971657 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.971668 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:08.971679 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:08.971696 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:09.039975 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:09.040036 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:09.057877 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:09.057920 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:09.130093 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:09.130119 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:09.130135 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:09.217177 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:09.217228 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:08.040921 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:10.042895 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:06.872367 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:08.873187 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:08.914313 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:11.412330 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:11.762393 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:11.776356 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:11.776424 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:11.811982 1131323 cri.go:89] found id: ""
	I0328 01:06:11.812017 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.812030 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:11.812038 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:11.812103 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:11.849789 1131323 cri.go:89] found id: ""
	I0328 01:06:11.849817 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.849826 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:11.849833 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:11.849884 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:11.890455 1131323 cri.go:89] found id: ""
	I0328 01:06:11.890488 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.890497 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:11.890503 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:11.890559 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:11.929047 1131323 cri.go:89] found id: ""
	I0328 01:06:11.929093 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.929102 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:11.929108 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:11.929164 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:11.969536 1131323 cri.go:89] found id: ""
	I0328 01:06:11.969566 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.969576 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:11.969583 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:11.969641 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:12.008779 1131323 cri.go:89] found id: ""
	I0328 01:06:12.008811 1131323 logs.go:276] 0 containers: []
	W0328 01:06:12.008821 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:12.008828 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:12.008890 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:12.044061 1131323 cri.go:89] found id: ""
	I0328 01:06:12.044091 1131323 logs.go:276] 0 containers: []
	W0328 01:06:12.044104 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:12.044112 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:12.044176 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:12.082307 1131323 cri.go:89] found id: ""
	I0328 01:06:12.082336 1131323 logs.go:276] 0 containers: []
	W0328 01:06:12.082346 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:12.082357 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:12.082369 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:12.133044 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:12.133091 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:12.148584 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:12.148624 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:12.218799 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:12.218834 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:12.218852 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:12.295580 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:12.295623 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:14.842815 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:14.856385 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:14.856456 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:14.895351 1131323 cri.go:89] found id: ""
	I0328 01:06:14.895409 1131323 logs.go:276] 0 containers: []
	W0328 01:06:14.895418 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:14.895424 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:14.895476 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:14.930333 1131323 cri.go:89] found id: ""
	I0328 01:06:14.930366 1131323 logs.go:276] 0 containers: []
	W0328 01:06:14.930380 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:14.930389 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:14.930461 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:14.968701 1131323 cri.go:89] found id: ""
	I0328 01:06:14.968742 1131323 logs.go:276] 0 containers: []
	W0328 01:06:14.968754 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:14.968767 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:14.968867 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:15.004580 1131323 cri.go:89] found id: ""
	I0328 01:06:15.004613 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.004626 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:15.004634 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:15.004700 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:15.046702 1131323 cri.go:89] found id: ""
	I0328 01:06:15.046726 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.046736 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:15.046742 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:15.046795 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:15.088693 1131323 cri.go:89] found id: ""
	I0328 01:06:15.088725 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.088734 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:15.088741 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:15.088797 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:15.130293 1131323 cri.go:89] found id: ""
	I0328 01:06:15.130324 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.130333 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:15.130339 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:15.130394 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:15.172381 1131323 cri.go:89] found id: ""
	I0328 01:06:15.172408 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.172417 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:15.172427 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:15.172440 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:15.225631 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:15.225674 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:15.241251 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:15.241294 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:15.319701 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:15.319731 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:15.319747 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:12.540755 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:14.541618 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:11.371580 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:13.371640 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:15.373147 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:13.911792 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:15.912479 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:17.913926 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:15.406813 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:15.406853 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:17.993893 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:18.007755 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:18.007843 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:18.047750 1131323 cri.go:89] found id: ""
	I0328 01:06:18.047777 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.047786 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:18.047797 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:18.047855 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:18.088264 1131323 cri.go:89] found id: ""
	I0328 01:06:18.088291 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.088303 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:18.088311 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:18.088369 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:18.127485 1131323 cri.go:89] found id: ""
	I0328 01:06:18.127514 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.127523 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:18.127530 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:18.127581 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:18.167462 1131323 cri.go:89] found id: ""
	I0328 01:06:18.167496 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.167510 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:18.167516 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:18.167571 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:18.209536 1131323 cri.go:89] found id: ""
	I0328 01:06:18.209571 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.209583 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:18.209591 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:18.209662 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:18.247565 1131323 cri.go:89] found id: ""
	I0328 01:06:18.247601 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.247614 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:18.247623 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:18.247701 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:18.288123 1131323 cri.go:89] found id: ""
	I0328 01:06:18.288162 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.288172 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:18.288179 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:18.288242 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:18.328132 1131323 cri.go:89] found id: ""
	I0328 01:06:18.328161 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.328170 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:18.328181 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:18.328193 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:18.403245 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:18.403287 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:18.403305 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:18.483446 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:18.483500 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:18.527357 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:18.527392 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:18.588402 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:18.588463 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:16.542137 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:18.542554 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:20.546396 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:17.872147 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:20.373000 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:20.412369 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:22.412661 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:21.103566 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:21.117538 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:21.117616 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:21.174215 1131323 cri.go:89] found id: ""
	I0328 01:06:21.174270 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.174284 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:21.174293 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:21.174364 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:21.238666 1131323 cri.go:89] found id: ""
	I0328 01:06:21.238707 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.238722 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:21.238730 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:21.238803 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:21.303510 1131323 cri.go:89] found id: ""
	I0328 01:06:21.303543 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.303553 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:21.303559 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:21.303614 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:21.345823 1131323 cri.go:89] found id: ""
	I0328 01:06:21.345853 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.345862 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:21.345870 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:21.345940 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:21.386205 1131323 cri.go:89] found id: ""
	I0328 01:06:21.386248 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.386261 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:21.386269 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:21.386335 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:21.427424 1131323 cri.go:89] found id: ""
	I0328 01:06:21.427457 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.427470 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:21.427478 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:21.427546 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:21.465054 1131323 cri.go:89] found id: ""
	I0328 01:06:21.465087 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.465099 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:21.465107 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:21.465177 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:21.507197 1131323 cri.go:89] found id: ""
	I0328 01:06:21.507229 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.507238 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:21.507248 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:21.507263 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:21.586657 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:21.586709 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:21.633702 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:21.633739 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:21.688960 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:21.688999 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:21.704675 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:21.704714 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:21.781612 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:24.282521 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:24.297096 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:24.297185 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:24.338745 1131323 cri.go:89] found id: ""
	I0328 01:06:24.338780 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.338793 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:24.338802 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:24.338872 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:24.375499 1131323 cri.go:89] found id: ""
	I0328 01:06:24.375528 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.375540 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:24.375548 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:24.375616 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:24.410939 1131323 cri.go:89] found id: ""
	I0328 01:06:24.410966 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.410978 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:24.410986 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:24.411042 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:24.455316 1131323 cri.go:89] found id: ""
	I0328 01:06:24.455345 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.455354 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:24.455360 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:24.455427 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:24.493177 1131323 cri.go:89] found id: ""
	I0328 01:06:24.493206 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.493219 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:24.493228 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:24.493300 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:24.533612 1131323 cri.go:89] found id: ""
	I0328 01:06:24.533648 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.533659 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:24.533668 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:24.533743 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:24.573960 1131323 cri.go:89] found id: ""
	I0328 01:06:24.573998 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.574014 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:24.574020 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:24.574074 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:24.617282 1131323 cri.go:89] found id: ""
	I0328 01:06:24.617319 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.617333 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:24.617346 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:24.617364 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:24.691660 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:24.691688 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:24.691707 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:24.773138 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:24.773180 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:24.820408 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:24.820440 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:24.875901 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:24.875940 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:23.041030 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:25.041064 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:22.874513 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:25.378939 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:24.413732 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:26.912433 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:27.392663 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:27.407958 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:27.408046 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:27.446750 1131323 cri.go:89] found id: ""
	I0328 01:06:27.446782 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.446792 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:27.446799 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:27.446872 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:27.489199 1131323 cri.go:89] found id: ""
	I0328 01:06:27.489236 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.489249 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:27.489258 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:27.489316 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:27.525754 1131323 cri.go:89] found id: ""
	I0328 01:06:27.525787 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.525796 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:27.525803 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:27.525861 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:27.560817 1131323 cri.go:89] found id: ""
	I0328 01:06:27.560849 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.560858 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:27.560866 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:27.560930 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:27.597706 1131323 cri.go:89] found id: ""
	I0328 01:06:27.597736 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.597744 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:27.597750 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:27.597821 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:27.635170 1131323 cri.go:89] found id: ""
	I0328 01:06:27.635211 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.635223 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:27.635232 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:27.635299 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:27.672043 1131323 cri.go:89] found id: ""
	I0328 01:06:27.672079 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.672091 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:27.672099 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:27.672166 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:27.711401 1131323 cri.go:89] found id: ""
	I0328 01:06:27.711435 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.711448 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:27.711468 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:27.711488 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:27.755172 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:27.755211 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:27.807588 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:27.807632 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:27.823557 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:27.823589 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:27.905292 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:27.905316 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:27.905329 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:27.041105 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:29.041205 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:27.873797 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:30.374214 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:29.412378 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:31.413211 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:30.491565 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:30.505601 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:30.505667 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:30.541894 1131323 cri.go:89] found id: ""
	I0328 01:06:30.541929 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.541940 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:30.541949 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:30.542029 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:30.581484 1131323 cri.go:89] found id: ""
	I0328 01:06:30.581514 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.581532 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:30.581538 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:30.581613 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:30.624788 1131323 cri.go:89] found id: ""
	I0328 01:06:30.624830 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.624842 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:30.624850 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:30.624922 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:30.664373 1131323 cri.go:89] found id: ""
	I0328 01:06:30.664403 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.664413 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:30.664420 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:30.664489 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:30.702885 1131323 cri.go:89] found id: ""
	I0328 01:06:30.702917 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.702928 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:30.702934 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:30.703006 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:30.748170 1131323 cri.go:89] found id: ""
	I0328 01:06:30.748205 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.748217 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:30.748226 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:30.748316 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:30.785218 1131323 cri.go:89] found id: ""
	I0328 01:06:30.785255 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.785268 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:30.785276 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:30.785343 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:30.825529 1131323 cri.go:89] found id: ""
	I0328 01:06:30.825555 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.825565 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:30.825575 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:30.825589 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:30.881353 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:30.881391 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:30.896682 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:30.896718 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:30.973356 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:30.973386 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:30.973402 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:31.049014 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:31.049047 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:33.594365 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:33.609372 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:33.609460 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:33.648699 1131323 cri.go:89] found id: ""
	I0328 01:06:33.648728 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.648749 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:33.648757 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:33.648829 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:33.686707 1131323 cri.go:89] found id: ""
	I0328 01:06:33.686744 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.686758 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:33.686767 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:33.686832 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:33.723091 1131323 cri.go:89] found id: ""
	I0328 01:06:33.723121 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.723130 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:33.723136 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:33.723187 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:33.763439 1131323 cri.go:89] found id: ""
	I0328 01:06:33.763471 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.763481 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:33.763488 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:33.763544 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:33.812236 1131323 cri.go:89] found id: ""
	I0328 01:06:33.812271 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.812285 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:33.812294 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:33.812365 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:33.849421 1131323 cri.go:89] found id: ""
	I0328 01:06:33.849454 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.849465 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:33.849473 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:33.849528 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:33.888020 1131323 cri.go:89] found id: ""
	I0328 01:06:33.888051 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.888065 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:33.888078 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:33.888145 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:33.925952 1131323 cri.go:89] found id: ""
	I0328 01:06:33.925990 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.926003 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:33.926016 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:33.926034 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:33.976695 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:33.976734 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:33.991708 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:33.991752 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:34.068244 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:34.068276 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:34.068293 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:34.155843 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:34.155885 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:31.041375 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:33.041526 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:35.541169 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:32.872009 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:34.873043 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:33.913191 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:36.413213 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:36.697480 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:36.712322 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:36.712420 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:36.749541 1131323 cri.go:89] found id: ""
	I0328 01:06:36.749570 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.749579 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:36.749587 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:36.749655 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:36.788226 1131323 cri.go:89] found id: ""
	I0328 01:06:36.788255 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.788264 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:36.788270 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:36.788323 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:36.823824 1131323 cri.go:89] found id: ""
	I0328 01:06:36.823856 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.823866 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:36.823872 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:36.823927 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:36.869331 1131323 cri.go:89] found id: ""
	I0328 01:06:36.869362 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.869371 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:36.869378 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:36.869473 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:36.907918 1131323 cri.go:89] found id: ""
	I0328 01:06:36.907950 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.907960 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:36.907966 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:36.908028 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:36.947708 1131323 cri.go:89] found id: ""
	I0328 01:06:36.947738 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.947749 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:36.947757 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:36.947824 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:36.986200 1131323 cri.go:89] found id: ""
	I0328 01:06:36.986251 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.986266 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:36.986275 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:36.986350 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:37.026670 1131323 cri.go:89] found id: ""
	I0328 01:06:37.026698 1131323 logs.go:276] 0 containers: []
	W0328 01:06:37.026708 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:37.026718 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:37.026732 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:37.079891 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:37.079933 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:37.094347 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:37.094378 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:37.168653 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:37.168681 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:37.168695 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:37.247909 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:37.247949 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:39.791285 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:39.807921 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:39.808000 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:39.851460 1131323 cri.go:89] found id: ""
	I0328 01:06:39.851499 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.851512 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:39.851520 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:39.851593 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:39.889506 1131323 cri.go:89] found id: ""
	I0328 01:06:39.889541 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.889554 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:39.889564 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:39.889632 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:39.930291 1131323 cri.go:89] found id: ""
	I0328 01:06:39.930321 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.930331 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:39.930337 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:39.930400 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:39.965121 1131323 cri.go:89] found id: ""
	I0328 01:06:39.965160 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.965174 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:39.965183 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:39.965252 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:40.003217 1131323 cri.go:89] found id: ""
	I0328 01:06:40.003248 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.003258 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:40.003264 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:40.003319 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:40.042702 1131323 cri.go:89] found id: ""
	I0328 01:06:40.042737 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.042749 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:40.042759 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:40.042826 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:40.079733 1131323 cri.go:89] found id: ""
	I0328 01:06:40.079769 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.079780 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:40.079788 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:40.079852 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:40.117066 1131323 cri.go:89] found id: ""
	I0328 01:06:40.117098 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.117107 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:40.117117 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:40.117130 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:40.158589 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:40.158623 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:40.210997 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:40.211049 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:40.225419 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:40.225453 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:40.305356 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:40.305385 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:40.305401 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:37.541534 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:39.541905 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:36.874220 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:39.373763 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:38.413719 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:40.912939 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:42.913528 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:42.896394 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:42.912285 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:42.912355 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:42.949381 1131323 cri.go:89] found id: ""
	I0328 01:06:42.949411 1131323 logs.go:276] 0 containers: []
	W0328 01:06:42.949420 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:42.949427 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:42.949496 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:42.985325 1131323 cri.go:89] found id: ""
	I0328 01:06:42.985358 1131323 logs.go:276] 0 containers: []
	W0328 01:06:42.985371 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:42.985388 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:42.985456 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:43.023570 1131323 cri.go:89] found id: ""
	I0328 01:06:43.023616 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.023630 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:43.023638 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:43.023714 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:43.062995 1131323 cri.go:89] found id: ""
	I0328 01:06:43.063025 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.063036 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:43.063042 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:43.063111 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:43.101666 1131323 cri.go:89] found id: ""
	I0328 01:06:43.101704 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.101713 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:43.101720 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:43.101789 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:43.150713 1131323 cri.go:89] found id: ""
	I0328 01:06:43.150745 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.150757 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:43.150765 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:43.150830 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:43.193449 1131323 cri.go:89] found id: ""
	I0328 01:06:43.193479 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.193487 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:43.193495 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:43.193559 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:43.237641 1131323 cri.go:89] found id: ""
	I0328 01:06:43.237673 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.237682 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:43.237698 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:43.237714 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:43.287282 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:43.287320 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:43.303307 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:43.303343 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:43.383597 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:43.383619 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:43.383632 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:43.467874 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:43.467914 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
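
Note: the cycle above (and its repetitions below) is minikube probing for control-plane containers and, finding none, collecting diagnostics; the interleaved pod_ready lines belong to other concurrently running test processes. A minimal sketch of the same probe and gathering steps, run by hand on the node, using only commands that appear verbatim in the log (the describe-nodes call fails with "connection refused" precisely because the crictl probe finds no kube-apiserver container):

    # probe for a running apiserver (both return nothing in this run)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    sudo crictl ps -a --quiet --name=kube-apiserver

    # the diagnostics minikube gathers when the probe fails
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
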
	I0328 01:06:42.041406 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:44.540550 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:41.874286 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:44.372393 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:45.410973 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:47.412852 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:46.011081 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:46.025731 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:46.025824 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:46.064336 1131323 cri.go:89] found id: ""
	I0328 01:06:46.064371 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.064385 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:46.064394 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:46.064451 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:46.104493 1131323 cri.go:89] found id: ""
	I0328 01:06:46.104530 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.104550 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:46.104559 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:46.104636 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:46.147546 1131323 cri.go:89] found id: ""
	I0328 01:06:46.147582 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.147594 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:46.147602 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:46.147656 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:46.186162 1131323 cri.go:89] found id: ""
	I0328 01:06:46.186197 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.186207 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:46.186213 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:46.186296 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:46.230412 1131323 cri.go:89] found id: ""
	I0328 01:06:46.230450 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.230464 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:46.230473 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:46.230552 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:46.266000 1131323 cri.go:89] found id: ""
	I0328 01:06:46.266037 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.266050 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:46.266059 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:46.266126 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:46.301031 1131323 cri.go:89] found id: ""
	I0328 01:06:46.301065 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.301077 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:46.301084 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:46.301155 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:46.339222 1131323 cri.go:89] found id: ""
	I0328 01:06:46.339248 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.339258 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:46.339271 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:46.339290 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:46.352558 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:46.352595 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:46.427283 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:46.427308 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:46.427325 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:46.512134 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:46.512178 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:46.558276 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:46.558307 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:49.113455 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:49.127554 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:49.127645 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:49.169380 1131323 cri.go:89] found id: ""
	I0328 01:06:49.169421 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.169435 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:49.169444 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:49.169511 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:49.204540 1131323 cri.go:89] found id: ""
	I0328 01:06:49.204568 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.204579 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:49.204596 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:49.204664 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:49.243074 1131323 cri.go:89] found id: ""
	I0328 01:06:49.243102 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.243112 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:49.243119 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:49.243170 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:49.281264 1131323 cri.go:89] found id: ""
	I0328 01:06:49.281301 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.281314 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:49.281322 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:49.281391 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:49.320473 1131323 cri.go:89] found id: ""
	I0328 01:06:49.320505 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.320514 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:49.320521 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:49.320592 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:49.357715 1131323 cri.go:89] found id: ""
	I0328 01:06:49.357749 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.357759 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:49.357766 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:49.357823 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:49.398427 1131323 cri.go:89] found id: ""
	I0328 01:06:49.398464 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.398477 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:49.398498 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:49.398576 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:49.439921 1131323 cri.go:89] found id: ""
	I0328 01:06:49.439956 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.439969 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:49.439982 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:49.440003 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:49.557260 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:49.557289 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:49.557312 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:49.640105 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:49.640169 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:49.683153 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:49.683185 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:49.737420 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:49.737463 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:46.541377 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:49.041761 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:46.374869 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:48.875897 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:49.912535 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:51.912893 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:52.253208 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:52.268572 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:52.268649 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:52.305136 1131323 cri.go:89] found id: ""
	I0328 01:06:52.305180 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.305193 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:52.305202 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:52.305273 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:52.344774 1131323 cri.go:89] found id: ""
	I0328 01:06:52.344806 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.344816 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:52.344823 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:52.344885 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:52.382127 1131323 cri.go:89] found id: ""
	I0328 01:06:52.382174 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.382185 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:52.382200 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:52.382280 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:52.421340 1131323 cri.go:89] found id: ""
	I0328 01:06:52.421368 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.421377 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:52.421383 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:52.421433 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:52.460046 1131323 cri.go:89] found id: ""
	I0328 01:06:52.460084 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.460100 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:52.460107 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:52.460164 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:52.500067 1131323 cri.go:89] found id: ""
	I0328 01:06:52.500094 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.500102 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:52.500109 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:52.500171 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:52.537614 1131323 cri.go:89] found id: ""
	I0328 01:06:52.537646 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.537671 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:52.537680 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:52.537745 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:52.577362 1131323 cri.go:89] found id: ""
	I0328 01:06:52.577392 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.577402 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:52.577417 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:52.577434 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:52.633638 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:52.633689 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:52.650762 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:52.650796 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:52.729436 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:52.729470 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:52.729484 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:52.818193 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:52.818248 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:51.540541 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:53.541340 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:55.542165 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:51.376916 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:53.872313 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:55.873335 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:54.411986 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:56.412892 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:55.362950 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:55.378461 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:55.378577 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:55.419968 1131323 cri.go:89] found id: ""
	I0328 01:06:55.419995 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.420005 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:55.420010 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:55.420072 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:55.464308 1131323 cri.go:89] found id: ""
	I0328 01:06:55.464341 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.464350 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:55.464357 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:55.464421 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:55.523059 1131323 cri.go:89] found id: ""
	I0328 01:06:55.523092 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.523106 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:55.523114 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:55.523186 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:55.570957 1131323 cri.go:89] found id: ""
	I0328 01:06:55.570990 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.571004 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:55.571013 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:55.571077 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:55.606712 1131323 cri.go:89] found id: ""
	I0328 01:06:55.606739 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.606749 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:55.606755 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:55.606817 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:55.646445 1131323 cri.go:89] found id: ""
	I0328 01:06:55.646477 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.646486 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:55.646493 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:55.646548 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:55.685176 1131323 cri.go:89] found id: ""
	I0328 01:06:55.685208 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.685217 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:55.685225 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:55.685289 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:55.722948 1131323 cri.go:89] found id: ""
	I0328 01:06:55.722984 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.722995 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:55.723006 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:55.723022 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:55.797332 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:55.797368 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:55.797385 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:55.877648 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:55.877688 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:55.918966 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:55.918997 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:55.971226 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:55.971272 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:58.488464 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:58.504999 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:58.505088 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:58.549290 1131323 cri.go:89] found id: ""
	I0328 01:06:58.549325 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.549338 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:58.549347 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:58.549414 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:58.589222 1131323 cri.go:89] found id: ""
	I0328 01:06:58.589252 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.589261 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:58.589271 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:58.589337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:58.626470 1131323 cri.go:89] found id: ""
	I0328 01:06:58.626499 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.626508 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:58.626514 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:58.626578 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:58.671634 1131323 cri.go:89] found id: ""
	I0328 01:06:58.671663 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.671674 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:58.671683 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:58.671744 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:58.707335 1131323 cri.go:89] found id: ""
	I0328 01:06:58.707370 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.707381 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:58.707390 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:58.707459 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:58.745635 1131323 cri.go:89] found id: ""
	I0328 01:06:58.745666 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.745679 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:58.745687 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:58.745752 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:58.792172 1131323 cri.go:89] found id: ""
	I0328 01:06:58.792205 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.792216 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:58.792225 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:58.792287 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:58.840027 1131323 cri.go:89] found id: ""
	I0328 01:06:58.840063 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.840075 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:58.840089 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:58.840108 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:58.921964 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:58.921988 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:58.922003 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:59.016935 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:59.016980 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:59.065747 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:59.065788 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:59.119189 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:59.119231 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:58.042362 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:00.544351 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:57.875649 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:00.371953 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:58.406154 1130949 pod_ready.go:81] duration metric: took 4m0.000981669s for pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace to be "Ready" ...
	E0328 01:06:58.406192 1130949 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0328 01:06:58.406218 1130949 pod_ready.go:38] duration metric: took 4m11.713667334s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:06:58.406275 1130949 kubeadm.go:591] duration metric: took 4m19.018883002s to restartPrimaryControlPlane
	W0328 01:06:58.406372 1130949 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:06:58.406432 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
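
Note: once the 4m0s wait for the restarted control plane expires, minikube stops trying to reuse it and wipes the node with kubeadm reset before re-running kubeadm init. The reset invocation from the line above, runnable by hand on the node (the PATH prefix only selects the cached kubeadm binary for that Kubernetes version); its completion is logged roughly 32 seconds later:

    sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
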
	I0328 01:07:01.637081 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:07:01.652557 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:07:01.652634 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:07:01.691795 1131323 cri.go:89] found id: ""
	I0328 01:07:01.691832 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.691846 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:07:01.691854 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:07:01.691927 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:07:01.732815 1131323 cri.go:89] found id: ""
	I0328 01:07:01.732850 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.732861 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:07:01.732868 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:07:01.732938 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:07:01.776370 1131323 cri.go:89] found id: ""
	I0328 01:07:01.776408 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.776422 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:07:01.776431 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:07:01.776501 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:07:01.821260 1131323 cri.go:89] found id: ""
	I0328 01:07:01.821290 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.821301 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:07:01.821308 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:07:01.821377 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:07:01.860666 1131323 cri.go:89] found id: ""
	I0328 01:07:01.860696 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.860708 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:07:01.860719 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:07:01.860787 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:07:01.898255 1131323 cri.go:89] found id: ""
	I0328 01:07:01.898291 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.898304 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:07:01.898314 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:07:01.898383 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:07:01.937770 1131323 cri.go:89] found id: ""
	I0328 01:07:01.937809 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.937822 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:07:01.937830 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:07:01.937901 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:07:01.976946 1131323 cri.go:89] found id: ""
	I0328 01:07:01.976981 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.976994 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:07:01.977008 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:07:01.977027 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:07:02.062804 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:07:02.062845 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:07:02.110750 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:07:02.110783 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:07:02.179633 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:07:02.179677 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:07:02.203131 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:07:02.203181 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:07:02.303281 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:07:04.804238 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:07:04.819654 1131323 kubeadm.go:591] duration metric: took 4m2.527630194s to restartPrimaryControlPlane
	W0328 01:07:04.819747 1131323 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:07:04.819787 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:07:03.041692 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:05.540478 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:02.372472 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:04.376413 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:07.322821 1131323 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.50300166s)
	I0328 01:07:07.322918 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:07:07.338692 1131323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:07:07.349812 1131323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:07:07.361566 1131323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:07:07.361597 1131323 kubeadm.go:156] found existing configuration files:
	
	I0328 01:07:07.361667 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:07:07.372926 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:07:07.373008 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:07:07.383770 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:07:07.394260 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:07:07.394332 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:07:07.405874 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:07:07.417177 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:07:07.417254 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:07:07.428589 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:07:07.438788 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:07:07.438845 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
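
Note: the block above is minikube's stale-kubeconfig check: each file under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the endpoint is absent (here the files do not exist at all, so every grep exits with status 2 and every rm is a no-op). A condensed sketch of the same logic, using the endpoint shown in the log:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done
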
	I0328 01:07:07.449649 1131323 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:07:07.533886 1131323 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0328 01:07:07.533989 1131323 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:07:07.693599 1131323 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:07:07.693736 1131323 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:07:07.693852 1131323 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:07:07.910557 1131323 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:07:07.912634 1131323 out.go:204]   - Generating certificates and keys ...
	I0328 01:07:07.912743 1131323 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:07:07.912855 1131323 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:07:07.912984 1131323 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:07:07.913098 1131323 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:07:07.913212 1131323 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:07:07.913298 1131323 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:07:07.913384 1131323 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:07:07.913569 1131323 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:07:07.913947 1131323 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:07:07.914429 1131323 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:07:07.914649 1131323 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:07:07.914728 1131323 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:07:08.225778 1131323 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:07:08.353927 1131323 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:07:08.631240 1131323 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:07:08.824445 1131323 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:07:08.840240 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:07:08.841200 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:07:08.841315 1131323 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:07:08.997129 1131323 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:07:08.999073 1131323 out.go:204]   - Booting up control plane ...
	I0328 01:07:08.999224 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:07:09.014811 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:07:09.015898 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:07:09.016727 1131323 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:07:09.019426 1131323 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
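
Note: kubeadm init is re-run here with --ignore-preflight-errors so leftover manifests, the existing etcd data directory, and the already-bound kubelet port do not abort the run; the wait-control-plane phase then polls the static pods for up to 4m0s. A sketch of how that phase can be watched from the node while it waits (the manifest folder is the one named in the log; crictl flags are the ones used elsewhere in this run):

    ls /etc/kubernetes/manifests              # static pod manifests written by kubeadm
    sudo crictl ps -a --name=kube-apiserver   # appears once the kubelet starts the pod
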
	I0328 01:07:07.541363 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:10.041094 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:06.874606 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:09.372537 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:12.540137 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:14.541608 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:11.372643 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:13.873029 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:16.541814 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:19.047225 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:16.372556 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:18.871954 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:20.872047 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:21.542880 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:24.041786 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:22.872845 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:24.873747 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:26.042186 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:28.541303 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:30.540610 1130949 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.134147754s)
	I0328 01:07:30.540688 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:07:30.558971 1130949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:07:30.570331 1130949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:07:30.581192 1130949 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:07:30.581246 1130949 kubeadm.go:156] found existing configuration files:
	
	I0328 01:07:30.581306 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:07:30.592337 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:07:30.592410 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:07:30.603288 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:07:30.613714 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:07:30.613776 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:07:30.624281 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:07:30.634569 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:07:30.634644 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:07:30.647279 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:07:30.658554 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:07:30.658646 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:07:30.670364 1130949 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:07:30.730349 1130949 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0328 01:07:30.730414 1130949 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:07:30.887056 1130949 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:07:30.887234 1130949 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:07:30.887385 1130949 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:07:31.104288 1130949 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:07:27.373135 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:29.373436 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:31.106496 1130949 out.go:204]   - Generating certificates and keys ...
	I0328 01:07:31.106628 1130949 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:07:31.106697 1130949 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:07:31.106765 1130949 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:07:31.106826 1130949 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:07:31.106892 1130949 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:07:31.107528 1130949 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:07:31.108302 1130949 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:07:31.112246 1130949 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:07:31.112762 1130949 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:07:31.113711 1130949 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:07:31.115230 1130949 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:07:31.115284 1130949 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:07:31.297632 1130949 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:07:32.446275 1130949 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:07:32.565869 1130949 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:07:32.641288 1130949 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:07:32.817229 1130949 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:07:32.817814 1130949 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:07:32.820366 1130949 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:07:32.822328 1130949 out.go:204]   - Booting up control plane ...
	I0328 01:07:32.822467 1130949 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:07:32.822550 1130949 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:07:32.822990 1130949 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:07:32.846800 1130949 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:07:32.847829 1130949 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:07:32.847902 1130949 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:07:31.044103 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:33.542106 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:35.542875 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:31.873591 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:33.875737 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:35.881819 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:32.992001 1130949 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:07:38.997010 1130949 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003888 seconds
	I0328 01:07:39.012971 1130949 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 01:07:39.036328 1130949 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 01:07:39.569806 1130949 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 01:07:39.570135 1130949 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-808809 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 01:07:40.085165 1130949 kubeadm.go:309] [bootstrap-token] Using token: 4zk5zi.uttj4zihedk5oj6k
	I0328 01:07:40.086719 1130949 out.go:204]   - Configuring RBAC rules ...
	I0328 01:07:40.086873 1130949 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 01:07:40.096373 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 01:07:40.106484 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 01:07:40.110525 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 01:07:40.120015 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 01:07:40.129060 1130949 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 01:07:40.141167 1130949 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 01:07:40.415429 1130949 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 01:07:40.507275 1130949 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 01:07:40.507333 1130949 kubeadm.go:309] 
	I0328 01:07:40.507551 1130949 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 01:07:40.507617 1130949 kubeadm.go:309] 
	I0328 01:07:40.507860 1130949 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 01:07:40.507891 1130949 kubeadm.go:309] 
	I0328 01:07:40.507947 1130949 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 01:07:40.508057 1130949 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 01:07:40.508140 1130949 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 01:07:40.508157 1130949 kubeadm.go:309] 
	I0328 01:07:40.508250 1130949 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 01:07:40.508264 1130949 kubeadm.go:309] 
	I0328 01:07:40.508329 1130949 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 01:07:40.508344 1130949 kubeadm.go:309] 
	I0328 01:07:40.508421 1130949 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 01:07:40.508539 1130949 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 01:07:40.508626 1130949 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 01:07:40.508632 1130949 kubeadm.go:309] 
	I0328 01:07:40.508804 1130949 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 01:07:40.508970 1130949 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 01:07:40.508990 1130949 kubeadm.go:309] 
	I0328 01:07:40.509155 1130949 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4zk5zi.uttj4zihedk5oj6k \
	I0328 01:07:40.509474 1130949 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 \
	I0328 01:07:40.509514 1130949 kubeadm.go:309] 	--control-plane 
	I0328 01:07:40.509524 1130949 kubeadm.go:309] 
	I0328 01:07:40.509641 1130949 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 01:07:40.509655 1130949 kubeadm.go:309] 
	I0328 01:07:40.509767 1130949 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4zk5zi.uttj4zihedk5oj6k \
	I0328 01:07:40.509932 1130949 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 
	I0328 01:07:40.510139 1130949 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:07:40.510157 1130949 cni.go:84] Creating CNI manager for ""
	I0328 01:07:40.510166 1130949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:07:40.512099 1130949 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:07:38.041290 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:40.041569 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:38.373789 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:40.374369 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:40.513314 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:07:40.563257 1130949 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
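	For context, the 457-byte bridge conflist copied above can be inspected directly on the guest. A minimal check, assuming the profile name from this run and a working minikube binary on the host (not part of the original log):
	    minikube ssh -p embed-certs-808809 -- sudo cat /etc/cni/net.d/1-k8s.conflist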
	I0328 01:07:40.627024 1130949 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 01:07:40.627097 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:40.627137 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-808809 minikube.k8s.io/updated_at=2024_03_28T01_07_40_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=embed-certs-808809 minikube.k8s.io/primary=true
	I0328 01:07:40.928916 1130949 ops.go:34] apiserver oom_adj: -16
	I0328 01:07:40.929138 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:41.429797 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:41.930103 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:42.429366 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:42.540932 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:44.035055 1131600 pod_ready.go:81] duration metric: took 4m0.000860608s for pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace to be "Ready" ...
	E0328 01:07:44.035094 1131600 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0328 01:07:44.035124 1131600 pod_ready.go:38] duration metric: took 4m14.608998431s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:07:44.035180 1131600 kubeadm.go:591] duration metric: took 4m23.470228903s to restartPrimaryControlPlane
	W0328 01:07:44.035292 1131600 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:07:44.035344 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:07:42.375179 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:44.876120 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:42.929464 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:43.429369 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:43.929241 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:44.429904 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:44.930251 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:45.429816 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:45.930177 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:46.429416 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:46.929152 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:47.429708 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:49.021732 1131323 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0328 01:07:49.021890 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:07:49.022195 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
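	When the kubelet-check above keeps failing like this, the usual next step is to look at the kubelet on that node itself. A rough diagnostic sketch, assuming shell access to the affected guest (for example via minikube ssh -p <profile>); the profile behind this goroutine is not named at this point in the log:
	    sudo systemctl status kubelet --no-pager
	    sudo journalctl -u kubelet --no-pager -n 50
	    curl -sSL http://localhost:10248/healthz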
	I0328 01:07:47.373358 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:49.872482 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:47.929139 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:48.429732 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:48.930207 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:49.429230 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:49.929298 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:50.429919 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:50.929364 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:51.429403 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:51.929356 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:52.429410 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:52.929894 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:53.043365 1130949 kubeadm.go:1107] duration metric: took 12.416334145s to wait for elevateKubeSystemPrivileges
	W0328 01:07:53.043410 1130949 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 01:07:53.043419 1130949 kubeadm.go:393] duration metric: took 5m13.709259014s to StartCluster
	I0328 01:07:53.043445 1130949 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:07:53.043560 1130949 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:07:53.045798 1130949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:07:53.046158 1130949 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 01:07:53.047867 1130949 out.go:177] * Verifying Kubernetes components...
	I0328 01:07:53.046201 1130949 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 01:07:53.046412 1130949 config.go:182] Loaded profile config "embed-certs-808809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:07:53.049163 1130949 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-808809"
	I0328 01:07:53.049175 1130949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:07:53.049195 1130949 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-808809"
	W0328 01:07:53.049204 1130949 addons.go:243] addon storage-provisioner should already be in state true
	I0328 01:07:53.049230 1130949 host.go:66] Checking if "embed-certs-808809" exists ...
	I0328 01:07:53.049205 1130949 addons.go:69] Setting default-storageclass=true in profile "embed-certs-808809"
	I0328 01:07:53.049250 1130949 addons.go:69] Setting metrics-server=true in profile "embed-certs-808809"
	I0328 01:07:53.049271 1130949 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-808809"
	I0328 01:07:53.049309 1130949 addons.go:234] Setting addon metrics-server=true in "embed-certs-808809"
	W0328 01:07:53.049327 1130949 addons.go:243] addon metrics-server should already be in state true
	I0328 01:07:53.049371 1130949 host.go:66] Checking if "embed-certs-808809" exists ...
	I0328 01:07:53.049530 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.049569 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.049696 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.049729 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.049795 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.049838 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.067042 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36143
	I0328 01:07:53.067078 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33653
	I0328 01:07:53.067536 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.067599 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.068156 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.068184 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.068289 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.068315 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.068583 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.068669 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.069095 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.069121 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.069245 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.069276 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.069991 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43737
	I0328 01:07:53.070509 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.071078 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.071103 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.071480 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.071705 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.075617 1130949 addons.go:234] Setting addon default-storageclass=true in "embed-certs-808809"
	W0328 01:07:53.075659 1130949 addons.go:243] addon default-storageclass should already be in state true
	I0328 01:07:53.075703 1130949 host.go:66] Checking if "embed-certs-808809" exists ...
	I0328 01:07:53.075982 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.076011 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.085991 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I0328 01:07:53.086508 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.086724 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36033
	I0328 01:07:53.087105 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.087122 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.087158 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.087646 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.087667 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.087706 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.087922 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.088031 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.088225 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.089941 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:07:53.090168 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:07:53.091945 1130949 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:07:53.093023 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41613
	I0328 01:07:53.093537 1130949 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:07:53.093553 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 01:07:53.093563 1130949 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 01:07:53.095147 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 01:07:53.095165 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 01:07:53.093574 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:07:53.095185 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:07:53.093939 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.096301 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.096322 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.096662 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.097251 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.097306 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.098907 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.099014 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.099513 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:07:53.099546 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.099996 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:07:53.100126 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:07:53.100177 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:07:53.100187 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.100287 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:07:53.100392 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:07:53.100470 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:07:53.100576 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:07:53.100709 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:07:53.100796 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:07:53.114056 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41121
	I0328 01:07:53.114680 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.115279 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.115313 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.115721 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.116061 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.118022 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:07:53.118348 1130949 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 01:07:53.118370 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 01:07:53.118391 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:07:53.121337 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.121699 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:07:53.121728 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.121906 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:07:53.122084 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:07:53.122266 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:07:53.122414 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
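	The sshutil lines above expose the raw connection details; the equivalent manual connection from the Jenkins host would look roughly like this (key path, user, and IP taken verbatim from the log):
	    ssh -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa docker@192.168.72.210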
	I0328 01:07:53.242121 1130949 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:07:53.267118 1130949 node_ready.go:35] waiting up to 6m0s for node "embed-certs-808809" to be "Ready" ...
	I0328 01:07:53.276640 1130949 node_ready.go:49] node "embed-certs-808809" has status "Ready":"True"
	I0328 01:07:53.276670 1130949 node_ready.go:38] duration metric: took 9.513599ms for node "embed-certs-808809" to be "Ready" ...
	I0328 01:07:53.276683 1130949 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:07:53.283091 1130949 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-2rn6k" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:53.325201 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 01:07:53.325234 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 01:07:53.341335 1130949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 01:07:53.361084 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 01:07:53.361109 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 01:07:53.393089 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:07:53.393116 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 01:07:53.419245 1130949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:07:53.445663 1130949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:07:53.515515 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:53.515555 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:53.515871 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:53.515891 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:53.515901 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:53.515910 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:53.516173 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:53.516253 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:53.516212 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:53.527854 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:53.527882 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:53.528152 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:53.528173 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:53.528220 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.159164 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159192 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159264 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159292 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159523 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.159597 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.159619 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.159637 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.159648 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159658 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159660 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.159667 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.159688 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159696 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159981 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.160037 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.160056 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.160062 1130949 addons.go:470] Verifying addon metrics-server=true in "embed-certs-808809"
	I0328 01:07:54.160088 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.160090 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.160106 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.162879 1130949 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0328 01:07:54.022449 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:07:54.022704 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:07:52.372314 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:54.372913 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:54.164263 1130949 addons.go:505] duration metric: took 1.11806212s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
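	One way to double-check the addon state reported above from outside the test harness; a sketch assuming a standard minikube binary on PATH and the profile/context name used in this run:
	    minikube addons list -p embed-certs-808809
	    kubectl --context embed-certs-808809 -n kube-system get deploy metrics-server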
	I0328 01:07:55.294728 1130949 pod_ready.go:102] pod "coredns-76f75df574-2rn6k" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:55.790690 1130949 pod_ready.go:92] pod "coredns-76f75df574-2rn6k" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.790717 1130949 pod_ready.go:81] duration metric: took 2.50759161s for pod "coredns-76f75df574-2rn6k" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.790726 1130949 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-pgcdh" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.796249 1130949 pod_ready.go:92] pod "coredns-76f75df574-pgcdh" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.796279 1130949 pod_ready.go:81] duration metric: took 5.54233ms for pod "coredns-76f75df574-pgcdh" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.796291 1130949 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.801226 1130949 pod_ready.go:92] pod "etcd-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.801254 1130949 pod_ready.go:81] duration metric: took 4.956106ms for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.801263 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.814571 1130949 pod_ready.go:92] pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.814599 1130949 pod_ready.go:81] duration metric: took 13.328662ms for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.814613 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.825995 1130949 pod_ready.go:92] pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.826022 1130949 pod_ready.go:81] duration metric: took 11.401096ms for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.826035 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tjbhs" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.188116 1130949 pod_ready.go:92] pod "kube-proxy-tjbhs" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:56.188147 1130949 pod_ready.go:81] duration metric: took 362.103962ms for pod "kube-proxy-tjbhs" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.188161 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.588294 1130949 pod_ready.go:92] pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:56.588334 1130949 pod_ready.go:81] duration metric: took 400.16517ms for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.588347 1130949 pod_ready.go:38] duration metric: took 3.311651338s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:07:56.588369 1130949 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:07:56.588445 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:07:56.606404 1130949 api_server.go:72] duration metric: took 3.560197315s to wait for apiserver process to appear ...
	I0328 01:07:56.606435 1130949 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:07:56.606460 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:07:56.612218 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 200:
	ok
	I0328 01:07:56.613459 1130949 api_server.go:141] control plane version: v1.29.3
	I0328 01:07:56.613481 1130949 api_server.go:131] duration metric: took 7.039378ms to wait for apiserver health ...
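	The healthz probe above can be reproduced by hand; a sketch using the endpoint from the log (-k because the cluster CA is not in the host trust store) plus, assuming the kubeconfig written earlier in this run, the kubectl equivalent:
	    curl -k https://192.168.72.210:8443/healthz
	    kubectl --context embed-certs-808809 get --raw /healthz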
	I0328 01:07:56.613490 1130949 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:07:56.793192 1130949 system_pods.go:59] 9 kube-system pods found
	I0328 01:07:56.793227 1130949 system_pods.go:61] "coredns-76f75df574-2rn6k" [2a77c778-dd83-4e2e-b45a-ca16e3922b45] Running
	I0328 01:07:56.793232 1130949 system_pods.go:61] "coredns-76f75df574-pgcdh" [52452b24-490e-4999-b700-198c6f9b2fa1] Running
	I0328 01:07:56.793236 1130949 system_pods.go:61] "etcd-embed-certs-808809" [cba526ab-c8ca-4b90-abb8-533d1bf6cb4f] Running
	I0328 01:07:56.793239 1130949 system_pods.go:61] "kube-apiserver-embed-certs-808809" [02934ac6-1e07-4cbe-ba2f-4149e59d6044] Running
	I0328 01:07:56.793243 1130949 system_pods.go:61] "kube-controller-manager-embed-certs-808809" [b3350ff9-7abb-407e-939c-fcde61186c17] Running
	I0328 01:07:56.793246 1130949 system_pods.go:61] "kube-proxy-tjbhs" [cdb30ca1-5165-4e24-888a-df79af7987d0] Running
	I0328 01:07:56.793249 1130949 system_pods.go:61] "kube-scheduler-embed-certs-808809" [5b52a22d-6ffb-468b-b7b9-8f89b0dee3b8] Running
	I0328 01:07:56.793255 1130949 system_pods.go:61] "metrics-server-57f55c9bc5-bqbfl" [8434fd7d-838b-4cf2-96a3-e4d613633871] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:07:56.793260 1130949 system_pods.go:61] "storage-provisioner" [20c1951e-7da8-4025-bbcf-2da60f87f3ab] Running
	I0328 01:07:56.793268 1130949 system_pods.go:74] duration metric: took 179.77213ms to wait for pod list to return data ...
	I0328 01:07:56.793275 1130949 default_sa.go:34] waiting for default service account to be created ...
	I0328 01:07:56.988234 1130949 default_sa.go:45] found service account: "default"
	I0328 01:07:56.988274 1130949 default_sa.go:55] duration metric: took 194.984089ms for default service account to be created ...
	I0328 01:07:56.988288 1130949 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 01:07:57.192153 1130949 system_pods.go:86] 9 kube-system pods found
	I0328 01:07:57.192188 1130949 system_pods.go:89] "coredns-76f75df574-2rn6k" [2a77c778-dd83-4e2e-b45a-ca16e3922b45] Running
	I0328 01:07:57.192194 1130949 system_pods.go:89] "coredns-76f75df574-pgcdh" [52452b24-490e-4999-b700-198c6f9b2fa1] Running
	I0328 01:07:57.192200 1130949 system_pods.go:89] "etcd-embed-certs-808809" [cba526ab-c8ca-4b90-abb8-533d1bf6cb4f] Running
	I0328 01:07:57.192205 1130949 system_pods.go:89] "kube-apiserver-embed-certs-808809" [02934ac6-1e07-4cbe-ba2f-4149e59d6044] Running
	I0328 01:07:57.192210 1130949 system_pods.go:89] "kube-controller-manager-embed-certs-808809" [b3350ff9-7abb-407e-939c-fcde61186c17] Running
	I0328 01:07:57.192214 1130949 system_pods.go:89] "kube-proxy-tjbhs" [cdb30ca1-5165-4e24-888a-df79af7987d0] Running
	I0328 01:07:57.192218 1130949 system_pods.go:89] "kube-scheduler-embed-certs-808809" [5b52a22d-6ffb-468b-b7b9-8f89b0dee3b8] Running
	I0328 01:07:57.192225 1130949 system_pods.go:89] "metrics-server-57f55c9bc5-bqbfl" [8434fd7d-838b-4cf2-96a3-e4d613633871] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:07:57.192230 1130949 system_pods.go:89] "storage-provisioner" [20c1951e-7da8-4025-bbcf-2da60f87f3ab] Running
	I0328 01:07:57.192239 1130949 system_pods.go:126] duration metric: took 203.942878ms to wait for k8s-apps to be running ...
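	metrics-server-57f55c9bc5-bqbfl stays Pending / ContainersNotReady here, which is consistent with the addon having been pointed at fake.domain/registry.k8s.io/echoserver:1.4 earlier in this run, so the image pull presumably never succeeds. A sketch for inspecting it, assuming the k8s-app=metrics-server label carried by the upstream metrics-server manifests:
	    kubectl --context embed-certs-808809 -n kube-system get pods -l k8s-app=metrics-server
	    kubectl --context embed-certs-808809 -n kube-system describe pod -l k8s-app=metrics-server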
	I0328 01:07:57.192249 1130949 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:07:57.192301 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:07:57.209840 1130949 system_svc.go:56] duration metric: took 17.576605ms WaitForService to wait for kubelet
	I0328 01:07:57.209883 1130949 kubeadm.go:576] duration metric: took 4.163683877s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:07:57.209918 1130949 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:07:57.388321 1130949 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:07:57.388347 1130949 node_conditions.go:123] node cpu capacity is 2
	I0328 01:07:57.388357 1130949 node_conditions.go:105] duration metric: took 178.433633ms to run NodePressure ...
	I0328 01:07:57.388370 1130949 start.go:240] waiting for startup goroutines ...
	I0328 01:07:57.388377 1130949 start.go:245] waiting for cluster config update ...
	I0328 01:07:57.388387 1130949 start.go:254] writing updated cluster config ...
	I0328 01:07:57.388784 1130949 ssh_runner.go:195] Run: rm -f paused
	I0328 01:07:57.446699 1130949 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0328 01:07:57.448951 1130949 out.go:177] * Done! kubectl is now configured to use "embed-certs-808809" cluster and "default" namespace by default
	I0328 01:07:56.373123 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:58.872454 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:04.023273 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:08:04.023535 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:08:01.372711 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:03.877734 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:06.374031 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:07.366164 1130827 pod_ready.go:81] duration metric: took 4m0.000887668s for pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace to be "Ready" ...
	E0328 01:08:07.366245 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0328 01:08:07.366271 1130827 pod_ready.go:38] duration metric: took 4m7.906522585s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:08:07.366301 1130827 kubeadm.go:591] duration metric: took 4m15.27169704s to restartPrimaryControlPlane
	W0328 01:08:07.366368 1130827 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:08:07.366406 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:08:16.281280 1131600 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.245904746s)
	I0328 01:08:16.281365 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:08:16.298463 1131600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:08:16.310406 1131600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:08:16.321387 1131600 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:08:16.321415 1131600 kubeadm.go:156] found existing configuration files:
	
	I0328 01:08:16.321475 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0328 01:08:16.331965 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:08:16.332033 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:08:16.343030 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0328 01:08:16.353193 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:08:16.353254 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:08:16.363865 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0328 01:08:16.374276 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:08:16.374346 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:08:16.385300 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0328 01:08:16.396118 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:08:16.396181 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
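	The four grep/rm pairs above amount to a single stale-config sweep; a condensed shell sketch of the same cleanup (port and file names taken from the log, behavior matching it: remove any conf that does not reference the expected control-plane endpoint):
	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/${f}.conf" \
	        || sudo rm -f "/etc/kubernetes/${f}.conf"
	    done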
	I0328 01:08:16.406896 1131600 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:08:16.626615 1131600 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:08:24.024091 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:08:24.024388 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:08:25.420974 1131600 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0328 01:08:25.421059 1131600 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:08:25.421154 1131600 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:08:25.421300 1131600 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:08:25.421547 1131600 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:08:25.421649 1131600 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:08:25.423435 1131600 out.go:204]   - Generating certificates and keys ...
	I0328 01:08:25.423549 1131600 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:08:25.423630 1131600 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:08:25.423749 1131600 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:08:25.423844 1131600 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:08:25.423956 1131600 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:08:25.424058 1131600 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:08:25.424166 1131600 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:08:25.424260 1131600 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:08:25.424375 1131600 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:08:25.424489 1131600 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:08:25.424552 1131600 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:08:25.424642 1131600 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:08:25.424700 1131600 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:08:25.424765 1131600 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:08:25.424832 1131600 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:08:25.424920 1131600 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:08:25.424982 1131600 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:08:25.425106 1131600 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:08:25.425207 1131600 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:08:25.426863 1131600 out.go:204]   - Booting up control plane ...
	I0328 01:08:25.427001 1131600 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:08:25.427108 1131600 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:08:25.427205 1131600 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:08:25.427327 1131600 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:08:25.427431 1131600 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:08:25.427491 1131600 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:08:25.427686 1131600 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:08:25.427784 1131600 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003000 seconds
	I0328 01:08:25.427897 1131600 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 01:08:25.428032 1131600 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 01:08:25.428109 1131600 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 01:08:25.428325 1131600 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-283961 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 01:08:25.428408 1131600 kubeadm.go:309] [bootstrap-token] Using token: g6jusr.8nbqw788gjbu8fwz
	I0328 01:08:25.430595 1131600 out.go:204]   - Configuring RBAC rules ...
	I0328 01:08:25.430734 1131600 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 01:08:25.430837 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 01:08:25.430981 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 01:08:25.431163 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 01:08:25.431357 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 01:08:25.431481 1131600 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 01:08:25.431670 1131600 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 01:08:25.431726 1131600 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 01:08:25.431767 1131600 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 01:08:25.431774 1131600 kubeadm.go:309] 
	I0328 01:08:25.431819 1131600 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 01:08:25.431829 1131600 kubeadm.go:309] 
	I0328 01:08:25.431893 1131600 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 01:08:25.431900 1131600 kubeadm.go:309] 
	I0328 01:08:25.431934 1131600 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 01:08:25.432028 1131600 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 01:08:25.432089 1131600 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 01:08:25.432114 1131600 kubeadm.go:309] 
	I0328 01:08:25.432178 1131600 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 01:08:25.432186 1131600 kubeadm.go:309] 
	I0328 01:08:25.432245 1131600 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 01:08:25.432255 1131600 kubeadm.go:309] 
	I0328 01:08:25.432342 1131600 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 01:08:25.432454 1131600 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 01:08:25.432566 1131600 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 01:08:25.432576 1131600 kubeadm.go:309] 
	I0328 01:08:25.432719 1131600 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 01:08:25.432812 1131600 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 01:08:25.432825 1131600 kubeadm.go:309] 
	I0328 01:08:25.432914 1131600 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token g6jusr.8nbqw788gjbu8fwz \
	I0328 01:08:25.433018 1131600 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 \
	I0328 01:08:25.433052 1131600 kubeadm.go:309] 	--control-plane 
	I0328 01:08:25.433058 1131600 kubeadm.go:309] 
	I0328 01:08:25.433135 1131600 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 01:08:25.433143 1131600 kubeadm.go:309] 
	I0328 01:08:25.433222 1131600 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token g6jusr.8nbqw788gjbu8fwz \
	I0328 01:08:25.433318 1131600 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 
	I0328 01:08:25.433337 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:08:25.433346 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:08:25.434943 1131600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:08:25.436103 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:08:25.483149 1131600 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
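	The two log lines above create /etc/cni/net.d and copy a 457-byte bridge conflist into it. A minimal Go sketch of that step follows; the JSON payload here is an illustrative bridge+portmap chain, an assumption, not the exact file minikube writes.

	// cnibridge_sketch.go - illustrative only, not minikube's actual code.
	package main

	import (
		"log"
		"os"
		"path/filepath"
	)

	// Illustrative bridge CNI config (assumed contents, not the real 457-byte file).
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`

	func main() {
		dir := "/etc/cni/net.d"
		// mirrors the `sudo mkdir -p /etc/cni/net.d` step in the log
		if err := os.MkdirAll(dir, 0o755); err != nil {
			log.Fatal(err)
		}
		// mirrors the `scp memory --> /etc/cni/net.d/1-k8s.conflist` step
		if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(bridgeConflist), 0o644); err != nil {
			log.Fatal(err)
		}
	}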
	I0328 01:08:25.508422 1131600 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 01:08:25.508514 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:25.508518 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-283961 minikube.k8s.io/updated_at=2024_03_28T01_08_25_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=default-k8s-diff-port-283961 minikube.k8s.io/primary=true
	I0328 01:08:25.537955 1131600 ops.go:34] apiserver oom_adj: -16
	I0328 01:08:25.738462 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:26.239473 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:26.739478 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:27.238883 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:27.738830 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:28.239281 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:28.738643 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:29.238703 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:29.739025 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:30.239127 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:30.739473 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:31.239461 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:31.739480 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:32.239525 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:32.738543 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:33.239468 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:33.739475 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:34.238558 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:34.739550 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:35.239400 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:35.738766 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:36.239384 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:36.738797 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:37.238736 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:37.739543 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:37.850963 1131600 kubeadm.go:1107] duration metric: took 12.342521507s to wait for elevateKubeSystemPrivileges
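	The repeated "kubectl get sa default" runs above are a simple poll: re-run the command about every 500ms until the default service account exists, which here took roughly 12.3s. A minimal Go sketch of that retry pattern follows, reusing the kubectl and kubeconfig paths seen in the log; it is a sketch of the visible behaviour, not minikube's actual implementation.

	// poll_sketch.go - illustrative retry loop, not minikube's code.
	package main

	import (
		"context"
		"log"
		"os/exec"
		"time"
	)

	// waitForDefaultSA re-runs `kubectl get sa default` every 500ms until it
	// succeeds or the context times out.
	func waitForDefaultSA(ctx context.Context, kubectl, kubeconfig string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			cmd := exec.CommandContext(ctx, kubectl, "get", "sa", "default", "--kubeconfig", kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil // default service account is visible; RBAC bootstrap finished
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		if err := waitForDefaultSA(ctx, "/var/lib/minikube/binaries/v1.29.3/kubectl", "/var/lib/minikube/kubeconfig"); err != nil {
			log.Fatal(err)
		}
	}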
	W0328 01:08:37.851011 1131600 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 01:08:37.851024 1131600 kubeadm.go:393] duration metric: took 5m17.339661641s to StartCluster
	I0328 01:08:37.851048 1131600 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:08:37.851164 1131600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:08:37.853862 1131600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:08:37.854264 1131600 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 01:08:37.856170 1131600 out.go:177] * Verifying Kubernetes components...
	I0328 01:08:37.854341 1131600 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 01:08:37.854447 1131600 config.go:182] Loaded profile config "default-k8s-diff-port-283961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:08:37.857860 1131600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:08:37.857864 1131600 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-283961"
	I0328 01:08:37.857878 1131600 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-283961"
	I0328 01:08:37.857885 1131600 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-283961"
	I0328 01:08:37.857909 1131600 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-283961"
	I0328 01:08:37.857912 1131600 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-283961"
	W0328 01:08:37.857923 1131600 addons.go:243] addon storage-provisioner should already be in state true
	I0328 01:08:37.857928 1131600 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-283961"
	W0328 01:08:37.857941 1131600 addons.go:243] addon metrics-server should already be in state true
	I0328 01:08:37.857970 1131600 host.go:66] Checking if "default-k8s-diff-port-283961" exists ...
	I0328 01:08:37.857983 1131600 host.go:66] Checking if "default-k8s-diff-port-283961" exists ...
	I0328 01:08:37.858330 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.858363 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.858403 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.858436 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.858335 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.858509 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.881197 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32799
	I0328 01:08:37.881230 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35101
	I0328 01:08:37.881244 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0328 01:08:37.881857 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.881882 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.882021 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.882460 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.882482 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.882523 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.882540 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.882585 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.882601 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.882934 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.882992 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.883007 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.883239 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.883592 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.883620 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.883625 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.883644 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.887335 1131600 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-283961"
	W0328 01:08:37.887359 1131600 addons.go:243] addon default-storageclass should already be in state true
	I0328 01:08:37.887390 1131600 host.go:66] Checking if "default-k8s-diff-port-283961" exists ...
	I0328 01:08:37.887745 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.887779 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.901416 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37383
	I0328 01:08:37.901909 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.902530 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.902559 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.902967 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.903211 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.904529 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36633
	I0328 01:08:37.905034 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.905268 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:08:37.907486 1131600 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:08:37.905802 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.909062 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.909180 1131600 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:08:37.909196 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 01:08:37.909218 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:08:37.909555 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.909794 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.911251 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33539
	I0328 01:08:37.911845 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.911995 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:08:37.913838 1131600 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 01:08:37.912457 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.913039 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.913804 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:08:37.915256 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.915268 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:08:37.915288 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 01:08:37.915297 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.915303 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 01:08:37.915321 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:08:37.915492 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:08:37.915674 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:08:37.915894 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:08:37.916689 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.917364 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.917410 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.918302 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.918651 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:08:37.918678 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.918944 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:08:37.919117 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:08:37.919267 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:08:37.919386 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:08:37.935233 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45659
	I0328 01:08:37.935750 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.936283 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.936301 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.936691 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.936872 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.938736 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:08:37.939016 1131600 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 01:08:37.939042 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 01:08:37.939065 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:08:37.941653 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.941967 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:08:37.941991 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.942199 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:08:37.942405 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:08:37.942575 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:08:37.942761 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:08:38.109817 1131600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:08:38.134996 1131600 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-283961" to be "Ready" ...
	I0328 01:08:38.158252 1131600 node_ready.go:49] node "default-k8s-diff-port-283961" has status "Ready":"True"
	I0328 01:08:38.158286 1131600 node_ready.go:38] duration metric: took 23.249221ms for node "default-k8s-diff-port-283961" to be "Ready" ...
	I0328 01:08:38.158305 1131600 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:08:38.170391 1131600 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-gdv5x" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:38.277223 1131600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:08:38.299923 1131600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 01:08:38.300686 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 01:08:38.300707 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 01:08:38.355800 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 01:08:38.355837 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 01:08:38.464742 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:08:38.464769 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 01:08:38.542696 1131600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:08:39.644116 1131600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.344141889s)
	I0328 01:08:39.644184 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644189 1131600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.366934481s)
	I0328 01:08:39.644197 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644210 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644219 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644620 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.644644 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.644654 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644664 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644846 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.644865 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.644890 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644905 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644987 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.645004 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.645154 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.645171 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.708104 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.708143 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.708543 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.708567 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.739487 1131600 pod_ready.go:92] pod "coredns-76f75df574-gdv5x" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.739515 1131600 pod_ready.go:81] duration metric: took 1.569088177s for pod "coredns-76f75df574-gdv5x" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.739526 1131600 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-qzcfp" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.797314 1131600 pod_ready.go:92] pod "coredns-76f75df574-qzcfp" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.797347 1131600 pod_ready.go:81] duration metric: took 57.813218ms for pod "coredns-76f75df574-qzcfp" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.797366 1131600 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.830784 1131600 pod_ready.go:92] pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.830865 1131600 pod_ready.go:81] duration metric: took 33.488753ms for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.830886 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.852459 1131600 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.852489 1131600 pod_ready.go:81] duration metric: took 21.594748ms for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.852501 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.862630 1131600 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.862658 1131600 pod_ready.go:81] duration metric: took 10.149867ms for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.862674 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-js7j2" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.893124 1131600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.350363727s)
	I0328 01:08:39.893191 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.893216 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.893559 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Closing plugin on server side
	I0328 01:08:39.893568 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.893617 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.893634 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.893646 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.893985 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Closing plugin on server side
	I0328 01:08:39.893985 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.894013 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.894031 1131600 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-283961"
	I0328 01:08:39.896978 1131600 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0328 01:08:39.898636 1131600 addons.go:505] duration metric: took 2.044292782s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0328 01:08:40.138962 1131600 pod_ready.go:92] pod "kube-proxy-js7j2" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:40.138994 1131600 pod_ready.go:81] duration metric: took 276.313147ms for pod "kube-proxy-js7j2" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:40.139006 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:40.538892 1131600 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:40.538917 1131600 pod_ready.go:81] duration metric: took 399.903327ms for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:40.538925 1131600 pod_ready.go:38] duration metric: took 2.380606168s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:08:40.538943 1131600 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:08:40.539009 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:08:40.561639 1131600 api_server.go:72] duration metric: took 2.707321816s to wait for apiserver process to appear ...
	I0328 01:08:40.561681 1131600 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:08:40.561709 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:08:40.568521 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 200:
	ok
	I0328 01:08:40.570016 1131600 api_server.go:141] control plane version: v1.29.3
	I0328 01:08:40.570060 1131600 api_server.go:131] duration metric: took 8.369036ms to wait for apiserver health ...
	I0328 01:08:40.570071 1131600 system_pods.go:43] waiting for kube-system pods to appear ...
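	The healthz check logged just above is a plain HTTPS GET against https://192.168.39.224:8444/healthz that expects a 200 response with body "ok". A minimal Go sketch follows; it skips TLS verification for brevity (an assumption made here, real code would trust the cluster CA instead).

	// healthz_sketch.go - illustrative apiserver health probe, not minikube's code.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			// InsecureSkipVerify only for this sketch; the apiserver cert is signed by the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.224:8444/healthz")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// a healthy apiserver returns 200 with body "ok", as in the log above
		fmt.Printf("status=%d body=%q\n", resp.StatusCode, body)
	}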
	I0328 01:08:39.696094 1130827 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.32965227s)
	I0328 01:08:39.696193 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:08:39.717556 1130827 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:08:39.730434 1130827 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:08:39.746521 1130827 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:08:39.746567 1130827 kubeadm.go:156] found existing configuration files:
	
	I0328 01:08:39.746644 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:08:39.758252 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:08:39.758352 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:08:39.771929 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:08:39.785312 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:08:39.785400 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:08:39.800685 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:08:39.814982 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:08:39.815073 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:08:39.828804 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:08:39.841984 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:08:39.842074 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
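	The sequence above checks each kubeconfig under /etc/kubernetes for the expected apiserver endpoint and removes any file that is missing or does not contain it, so the following kubeadm init starts from a clean slate. A minimal Go sketch of that cleanup follows, assuming the same four files and endpoint string seen in the log; it is not minikube's actual code.

	// staleconf_sketch.go - illustrative stale-kubeconfig cleanup, not minikube's code.
	package main

	import (
		"bytes"
		"log"
		"os"
		"path/filepath"
	)

	func main() {
		endpoint := []byte("https://control-plane.minikube.internal:8443")
		files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
		for _, name := range files {
			path := filepath.Join("/etc/kubernetes", name)
			data, err := os.ReadFile(path)
			if err != nil || !bytes.Contains(data, endpoint) {
				// missing file or wrong endpoint: treat as stale, like the `rm -f` calls in the log
				if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
					log.Printf("could not remove %s: %v", path, rmErr)
				}
			}
		}
	}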
	I0328 01:08:39.854502 1130827 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:08:40.089742 1130827 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:08:40.742900 1131600 system_pods.go:59] 9 kube-system pods found
	I0328 01:08:40.742938 1131600 system_pods.go:61] "coredns-76f75df574-gdv5x" [5b4b835c-ae9d-4eff-ab37-6ccb7e36a748] Running
	I0328 01:08:40.742945 1131600 system_pods.go:61] "coredns-76f75df574-qzcfp" [8e7bfa94-f249-4f7a-be7b-9a615810c956] Running
	I0328 01:08:40.742951 1131600 system_pods.go:61] "etcd-default-k8s-diff-port-283961" [eb26a2e2-882b-4bc0-9ca1-7fe88b1c6c7e] Running
	I0328 01:08:40.742958 1131600 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-283961" [1c31e7a3-507e-43ea-96cf-1d182a1e0875] Running
	I0328 01:08:40.742964 1131600 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-283961" [5bbc6650-b37c-4b10-bc6e-cceb214f8881] Running
	I0328 01:08:40.742968 1131600 system_pods.go:61] "kube-proxy-js7j2" [1f31a6ee-9417-4f4a-ba0b-fdbab6a9169d] Running
	I0328 01:08:40.742972 1131600 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-283961" [3a5691f7-d6d4-45e7-aea7-94fe1cfa541a] Running
	I0328 01:08:40.742980 1131600 system_pods.go:61] "metrics-server-57f55c9bc5-gkv67" [7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:08:40.742986 1131600 system_pods.go:61] "storage-provisioner" [cb80efe2-521f-45d5-84e7-f6dc216b4c6d] Running
	I0328 01:08:40.742998 1131600 system_pods.go:74] duration metric: took 172.918886ms to wait for pod list to return data ...
	I0328 01:08:40.743010 1131600 default_sa.go:34] waiting for default service account to be created ...
	I0328 01:08:40.939208 1131600 default_sa.go:45] found service account: "default"
	I0328 01:08:40.939255 1131600 default_sa.go:55] duration metric: took 196.220048ms for default service account to be created ...
	I0328 01:08:40.939266 1131600 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 01:08:41.144986 1131600 system_pods.go:86] 9 kube-system pods found
	I0328 01:08:41.145023 1131600 system_pods.go:89] "coredns-76f75df574-gdv5x" [5b4b835c-ae9d-4eff-ab37-6ccb7e36a748] Running
	I0328 01:08:41.145030 1131600 system_pods.go:89] "coredns-76f75df574-qzcfp" [8e7bfa94-f249-4f7a-be7b-9a615810c956] Running
	I0328 01:08:41.145034 1131600 system_pods.go:89] "etcd-default-k8s-diff-port-283961" [eb26a2e2-882b-4bc0-9ca1-7fe88b1c6c7e] Running
	I0328 01:08:41.145039 1131600 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-283961" [1c31e7a3-507e-43ea-96cf-1d182a1e0875] Running
	I0328 01:08:41.145043 1131600 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-283961" [5bbc6650-b37c-4b10-bc6e-cceb214f8881] Running
	I0328 01:08:41.145047 1131600 system_pods.go:89] "kube-proxy-js7j2" [1f31a6ee-9417-4f4a-ba0b-fdbab6a9169d] Running
	I0328 01:08:41.145051 1131600 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-283961" [3a5691f7-d6d4-45e7-aea7-94fe1cfa541a] Running
	I0328 01:08:41.145058 1131600 system_pods.go:89] "metrics-server-57f55c9bc5-gkv67" [7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:08:41.145062 1131600 system_pods.go:89] "storage-provisioner" [cb80efe2-521f-45d5-84e7-f6dc216b4c6d] Running
	I0328 01:08:41.145072 1131600 system_pods.go:126] duration metric: took 205.800485ms to wait for k8s-apps to be running ...
	I0328 01:08:41.145083 1131600 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:08:41.145131 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:08:41.163220 1131600 system_svc.go:56] duration metric: took 18.120266ms WaitForService to wait for kubelet
	I0328 01:08:41.163255 1131600 kubeadm.go:576] duration metric: took 3.308947131s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:08:41.163280 1131600 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:08:41.339219 1131600 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:08:41.339247 1131600 node_conditions.go:123] node cpu capacity is 2
	I0328 01:08:41.339292 1131600 node_conditions.go:105] duration metric: took 176.004328ms to run NodePressure ...
	I0328 01:08:41.339306 1131600 start.go:240] waiting for startup goroutines ...
	I0328 01:08:41.339317 1131600 start.go:245] waiting for cluster config update ...
	I0328 01:08:41.339334 1131600 start.go:254] writing updated cluster config ...
	I0328 01:08:41.339656 1131600 ssh_runner.go:195] Run: rm -f paused
	I0328 01:08:41.399111 1131600 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0328 01:08:41.401360 1131600 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-283961" cluster and "default" namespace by default
	I0328 01:08:49.653091 1130827 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-beta.0
	I0328 01:08:49.653205 1130827 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:08:49.653327 1130827 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:08:49.653468 1130827 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:08:49.653576 1130827 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:08:49.653666 1130827 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:08:49.656419 1130827 out.go:204]   - Generating certificates and keys ...
	I0328 01:08:49.656503 1130827 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:08:49.656583 1130827 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:08:49.656669 1130827 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:08:49.656775 1130827 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:08:49.656903 1130827 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:08:49.656973 1130827 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:08:49.657057 1130827 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:08:49.657138 1130827 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:08:49.657246 1130827 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:08:49.657362 1130827 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:08:49.657415 1130827 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:08:49.657510 1130827 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:08:49.657601 1130827 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:08:49.657713 1130827 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:08:49.657811 1130827 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:08:49.657900 1130827 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:08:49.657980 1130827 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:08:49.658074 1130827 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:08:49.658160 1130827 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:08:49.659588 1130827 out.go:204]   - Booting up control plane ...
	I0328 01:08:49.659669 1130827 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:08:49.659771 1130827 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:08:49.659855 1130827 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:08:49.659962 1130827 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:08:49.660075 1130827 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:08:49.660139 1130827 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:08:49.660309 1130827 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0328 01:08:49.660426 1130827 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0328 01:08:49.660518 1130827 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.594495ms
	I0328 01:08:49.660610 1130827 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0328 01:08:49.660691 1130827 kubeadm.go:309] [api-check] The API server is healthy after 5.502996727s
	I0328 01:08:49.660830 1130827 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 01:08:49.660975 1130827 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 01:08:49.661028 1130827 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 01:08:49.661198 1130827 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-248059 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 01:08:49.661283 1130827 kubeadm.go:309] [bootstrap-token] Using token: 4jnfa0.q3dre6ogqbxtw8j0
	I0328 01:08:49.662907 1130827 out.go:204]   - Configuring RBAC rules ...
	I0328 01:08:49.663014 1130827 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 01:08:49.663090 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 01:08:49.663239 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 01:08:49.663379 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 01:08:49.663484 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 01:08:49.663576 1130827 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 01:08:49.663688 1130827 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 01:08:49.663750 1130827 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 01:08:49.663811 1130827 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 01:08:49.663820 1130827 kubeadm.go:309] 
	I0328 01:08:49.663871 1130827 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 01:08:49.663877 1130827 kubeadm.go:309] 
	I0328 01:08:49.663976 1130827 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 01:08:49.663984 1130827 kubeadm.go:309] 
	I0328 01:08:49.664004 1130827 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 01:08:49.664080 1130827 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 01:08:49.664144 1130827 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 01:08:49.664151 1130827 kubeadm.go:309] 
	I0328 01:08:49.664202 1130827 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 01:08:49.664209 1130827 kubeadm.go:309] 
	I0328 01:08:49.664246 1130827 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 01:08:49.664252 1130827 kubeadm.go:309] 
	I0328 01:08:49.664301 1130827 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 01:08:49.664370 1130827 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 01:08:49.664436 1130827 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 01:08:49.664444 1130827 kubeadm.go:309] 
	I0328 01:08:49.664515 1130827 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 01:08:49.664600 1130827 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 01:08:49.664607 1130827 kubeadm.go:309] 
	I0328 01:08:49.664678 1130827 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4jnfa0.q3dre6ogqbxtw8j0 \
	I0328 01:08:49.664764 1130827 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 \
	I0328 01:08:49.664783 1130827 kubeadm.go:309] 	--control-plane 
	I0328 01:08:49.664789 1130827 kubeadm.go:309] 
	I0328 01:08:49.664856 1130827 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 01:08:49.664863 1130827 kubeadm.go:309] 
	I0328 01:08:49.664938 1130827 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4jnfa0.q3dre6ogqbxtw8j0 \
	I0328 01:08:49.665073 1130827 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 
	I0328 01:08:49.665117 1130827 cni.go:84] Creating CNI manager for ""
	I0328 01:08:49.665130 1130827 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:08:49.667556 1130827 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:08:49.668776 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:08:49.680262 1130827 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:08:49.701490 1130827 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 01:08:49.701557 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:49.701606 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-248059 minikube.k8s.io/updated_at=2024_03_28T01_08_49_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=no-preload-248059 minikube.k8s.io/primary=true
	I0328 01:08:49.734009 1130827 ops.go:34] apiserver oom_adj: -16
	I0328 01:08:49.901866 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:50.402635 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:50.902480 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:51.402417 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:51.902253 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:52.402411 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:52.901926 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:53.402394 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:53.902738 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:54.401878 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:54.901920 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:55.401878 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:55.902140 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:56.402863 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:56.901970 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:57.402088 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:57.901869 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:58.402056 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:58.902333 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:59.402753 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:59.902930 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:00.402623 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:00.901863 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:01.402264 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:01.902054 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:02.402212 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:02.503310 1130827 kubeadm.go:1107] duration metric: took 12.80181586s to wait for elevateKubeSystemPrivileges
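	(The repeated `kubectl get sa default` runs above are a poll: the tooling waits for the "default" ServiceAccount to appear before it grants the kube-system privileges, which is what the 12.8s "elevateKubeSystemPrivileges" metric measures. A minimal client-go sketch of that kind of wait is below; the helper name, kubeconfig path handling, and timeout are assumptions for illustration, not minikube's actual implementation.)

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForDefaultSA polls until the "default" ServiceAccount exists in the
// "default" namespace, mirroring the repeated `kubectl get sa default`
// calls in the log above (roughly 500ms apart, bounded by a timeout).
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			return nil // service account exists; RBAC bindings can proceed
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for default service account: %w", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```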
	W0328 01:09:02.503352 1130827 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 01:09:02.503362 1130827 kubeadm.go:393] duration metric: took 5m10.46697508s to StartCluster
	I0328 01:09:02.503380 1130827 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:09:02.503482 1130827 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:09:02.505909 1130827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:09:02.506302 1130827 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 01:09:02.508103 1130827 out.go:177] * Verifying Kubernetes components...
	I0328 01:09:02.506385 1130827 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 01:09:02.506502 1130827 config.go:182] Loaded profile config "no-preload-248059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0328 01:09:02.509509 1130827 addons.go:69] Setting default-storageclass=true in profile "no-preload-248059"
	I0328 01:09:02.509519 1130827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:09:02.509517 1130827 addons.go:69] Setting metrics-server=true in profile "no-preload-248059"
	I0328 01:09:02.509542 1130827 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-248059"
	I0328 01:09:02.509559 1130827 addons.go:234] Setting addon metrics-server=true in "no-preload-248059"
	W0328 01:09:02.509580 1130827 addons.go:243] addon metrics-server should already be in state true
	I0328 01:09:02.509509 1130827 addons.go:69] Setting storage-provisioner=true in profile "no-preload-248059"
	I0328 01:09:02.509623 1130827 host.go:66] Checking if "no-preload-248059" exists ...
	I0328 01:09:02.509636 1130827 addons.go:234] Setting addon storage-provisioner=true in "no-preload-248059"
	W0328 01:09:02.509690 1130827 addons.go:243] addon storage-provisioner should already be in state true
	I0328 01:09:02.509729 1130827 host.go:66] Checking if "no-preload-248059" exists ...
	I0328 01:09:02.510005 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.510009 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.510049 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.510050 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.510053 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.510085 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.528082 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42381
	I0328 01:09:02.528124 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43929
	I0328 01:09:02.528714 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.528738 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.529081 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34573
	I0328 01:09:02.529378 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.529397 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.529444 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.529464 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.529465 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.529791 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.529849 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.529948 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.529965 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.529950 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.530389 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.530437 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.530472 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.531004 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.531058 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.534108 1130827 addons.go:234] Setting addon default-storageclass=true in "no-preload-248059"
	W0328 01:09:02.534134 1130827 addons.go:243] addon default-storageclass should already be in state true
	I0328 01:09:02.534173 1130827 host.go:66] Checking if "no-preload-248059" exists ...
	I0328 01:09:02.534563 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.534592 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.546812 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0328 01:09:02.547478 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.547999 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.548031 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.548370 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.548616 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.549185 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37367
	I0328 01:09:02.549663 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.550365 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.550390 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.550772 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.550787 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:09:02.550977 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.553075 1130827 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 01:09:02.554750 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 01:09:02.554769 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 01:09:02.552577 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:09:02.554788 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:09:02.553550 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45153
	I0328 01:09:02.556534 1130827 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:09:02.555339 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.558480 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.563734 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:09:02.563773 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.563823 1130827 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:09:02.563846 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 01:09:02.563876 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:09:02.564584 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.564604 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.564633 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:09:02.564933 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:09:02.565025 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.565458 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:09:02.565593 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.565617 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.565745 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:09:02.569766 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.570083 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:09:02.570104 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.570413 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:09:02.570778 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:09:02.570975 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:09:02.571142 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:09:02.589503 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41085
	I0328 01:09:02.590061 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.590641 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.590661 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.591065 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.591310 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.593270 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:09:02.593665 1130827 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 01:09:02.593696 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 01:09:02.593717 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:09:02.596796 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.597270 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:09:02.597298 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.597460 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:09:02.597637 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:09:02.597807 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:09:02.597937 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:09:02.705837 1130827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:09:02.727955 1130827 node_ready.go:35] waiting up to 6m0s for node "no-preload-248059" to be "Ready" ...
	I0328 01:09:02.737291 1130827 node_ready.go:49] node "no-preload-248059" has status "Ready":"True"
	I0328 01:09:02.737325 1130827 node_ready.go:38] duration metric: took 9.337953ms for node "no-preload-248059" to be "Ready" ...
	I0328 01:09:02.737338 1130827 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:09:02.741939 1130827 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.749157 1130827 pod_ready.go:92] pod "etcd-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.749192 1130827 pod_ready.go:81] duration metric: took 7.224004ms for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.749205 1130827 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.755106 1130827 pod_ready.go:92] pod "kube-apiserver-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.755132 1130827 pod_ready.go:81] duration metric: took 5.919446ms for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.755144 1130827 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.761123 1130827 pod_ready.go:92] pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.761171 1130827 pod_ready.go:81] duration metric: took 6.017877ms for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.761187 1130827 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.773958 1130827 pod_ready.go:92] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.773983 1130827 pod_ready.go:81] duration metric: took 12.787671ms for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.773991 1130827 pod_ready.go:38] duration metric: took 36.637128ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:09:02.774008 1130827 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:09:02.774068 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:09:02.794342 1130827 api_server.go:72] duration metric: took 287.989042ms to wait for apiserver process to appear ...
	I0328 01:09:02.794376 1130827 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:09:02.794408 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:09:02.826957 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0328 01:09:02.830377 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 01:09:02.830399 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 01:09:02.837250 1130827 api_server.go:141] control plane version: v1.30.0-beta.0
	I0328 01:09:02.837284 1130827 api_server.go:131] duration metric: took 42.898933ms to wait for apiserver health ...
	I0328 01:09:02.837295 1130827 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:09:02.838515 1130827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:09:02.865482 1130827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 01:09:02.880510 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 01:09:02.880544 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 01:09:02.933895 1130827 system_pods.go:59] 4 kube-system pods found
	I0328 01:09:02.933958 1130827 system_pods.go:61] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:02.933967 1130827 system_pods.go:61] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:02.933973 1130827 system_pods.go:61] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:02.933977 1130827 system_pods.go:61] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:02.933984 1130827 system_pods.go:74] duration metric: took 96.68223ms to wait for pod list to return data ...
	I0328 01:09:02.933994 1130827 default_sa.go:34] waiting for default service account to be created ...
	I0328 01:09:02.939507 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:09:02.939538 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 01:09:02.994042 1130827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:09:03.160934 1130827 default_sa.go:45] found service account: "default"
	I0328 01:09:03.160971 1130827 default_sa.go:55] duration metric: took 226.968222ms for default service account to be created ...
	I0328 01:09:03.160982 1130827 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 01:09:03.396511 1130827 system_pods.go:86] 7 kube-system pods found
	I0328 01:09:03.396549 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending
	I0328 01:09:03.396554 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending
	I0328 01:09:03.396558 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:03.396562 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:03.396567 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:03.396575 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:09:03.396580 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:03.396601 1130827 retry.go:31] will retry after 288.008379ms: missing components: kube-dns, kube-proxy
	I0328 01:09:03.697645 1130827 system_pods.go:86] 7 kube-system pods found
	I0328 01:09:03.697688 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:03.697697 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:03.697704 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:03.697710 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:03.697720 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:03.697726 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:09:03.697730 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:03.697750 1130827 retry.go:31] will retry after 356.016468ms: missing components: kube-dns, kube-proxy
	I0328 01:09:03.962535 1130827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.097008499s)
	I0328 01:09:03.962614 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.962633 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.963093 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.963119 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.963129 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.963139 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.963406 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.963424 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.964335 1130827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.125788348s)
	I0328 01:09:03.964375 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.964384 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.964712 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:03.964740 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.964763 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.964776 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.964785 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.965054 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.965125 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.965142 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:04.002303 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:04.002340 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:04.002744 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:04.002766 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:04.062017 1130827 system_pods.go:86] 8 kube-system pods found
	I0328 01:09:04.062096 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.062111 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.062121 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:04.062132 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:04.062158 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:04.062172 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:09:04.062180 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:04.062192 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:09:04.062220 1130827 retry.go:31] will retry after 477.684804ms: missing components: kube-dns, kube-proxy
	I0328 01:09:04.574661 1130827 system_pods.go:86] 9 kube-system pods found
	I0328 01:09:04.574716 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.574728 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.574740 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:04.574748 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:04.574754 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:04.574761 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Running
	I0328 01:09:04.574768 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:04.574778 1130827 system_pods.go:89] "metrics-server-569cc877fc-frc5k" [d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:09:04.574799 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:09:04.574821 1130827 retry.go:31] will retry after 460.13955ms: missing components: kube-dns
	I0328 01:09:04.692708 1130827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.69861394s)
	I0328 01:09:04.692782 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:04.692798 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:04.693323 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:04.693366 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:04.693376 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:04.693384 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:04.693320 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:04.693818 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:04.693865 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:04.693879 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:04.693895 1130827 addons.go:470] Verifying addon metrics-server=true in "no-preload-248059"
	I0328 01:09:04.696310 1130827 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
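	(The addon step above copies each manifest into /etc/kubernetes/addons/ and then applies it with the bundled kubectl and the in-VM kubeconfig, exactly as the `sudo KUBECONFIG=... kubectl apply -f ...` runs show. A rough sketch of that apply step follows; the helper name and argument layout are assumptions for illustration, not minikube's addons code.)

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// applyAddonManifests mirrors the log lines above, which run
// `sudo KUBECONFIG=<kubeconfig> <kubectl> apply -f <m1> -f <m2> ...`
// inside the VM. Illustrative only.
func applyAddonManifests(kubectl, kubeconfig string, manifests []string) (string, error) {
	cmdline := fmt.Sprintf("sudo KUBECONFIG=%s %s apply -f %s",
		kubeconfig, kubectl, strings.Join(manifests, " -f "))
	out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
	return string(out), err
}
```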
	I0328 01:09:04.025791 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:09:04.026055 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:09:04.026065 1131323 kubeadm.go:309] 
	I0328 01:09:04.026124 1131323 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0328 01:09:04.026172 1131323 kubeadm.go:309] 		timed out waiting for the condition
	I0328 01:09:04.026181 1131323 kubeadm.go:309] 
	I0328 01:09:04.026221 1131323 kubeadm.go:309] 	This error is likely caused by:
	I0328 01:09:04.026279 1131323 kubeadm.go:309] 		- The kubelet is not running
	I0328 01:09:04.026401 1131323 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0328 01:09:04.026411 1131323 kubeadm.go:309] 
	I0328 01:09:04.026529 1131323 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0328 01:09:04.026586 1131323 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0328 01:09:04.026632 1131323 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0328 01:09:04.026640 1131323 kubeadm.go:309] 
	I0328 01:09:04.026758 1131323 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0328 01:09:04.026884 1131323 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0328 01:09:04.026902 1131323 kubeadm.go:309] 
	I0328 01:09:04.027061 1131323 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0328 01:09:04.027222 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0328 01:09:04.027335 1131323 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0328 01:09:04.027429 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0328 01:09:04.027537 1131323 kubeadm.go:309] 
	I0328 01:09:04.029027 1131323 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:09:04.029164 1131323 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0328 01:09:04.029284 1131323 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0328 01:09:04.029477 1131323 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0328 01:09:04.029545 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:09:04.543275 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:09:04.562572 1131323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:09:04.577013 1131323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:09:04.577040 1131323 kubeadm.go:156] found existing configuration files:
	
	I0328 01:09:04.577102 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:09:04.590795 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:09:04.590885 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:09:04.604227 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:09:04.616720 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:09:04.616818 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:09:04.630095 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:09:04.643166 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:09:04.643259 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:09:04.658084 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:09:04.671786 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:09:04.671874 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:09:04.685852 1131323 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:09:04.779013 1131323 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0328 01:09:04.779113 1131323 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:09:04.964178 1131323 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:09:04.964317 1131323 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:09:04.964463 1131323 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:09:05.181712 1131323 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:09:05.183644 1131323 out.go:204]   - Generating certificates and keys ...
	I0328 01:09:05.183759 1131323 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:09:05.183851 1131323 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:09:05.183962 1131323 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:09:05.184042 1131323 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:09:05.184156 1131323 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:09:05.184244 1131323 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:09:05.184337 1131323 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:09:05.184424 1131323 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:09:05.184535 1131323 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:09:05.184633 1131323 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:09:05.184683 1131323 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:09:05.184758 1131323 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:09:04.698039 1130827 addons.go:505] duration metric: took 2.191652421s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0328 01:09:05.044303 1130827 system_pods.go:86] 9 kube-system pods found
	I0328 01:09:05.044340 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:05.044348 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:05.044354 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:05.044360 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:05.044366 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:05.044369 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Running
	I0328 01:09:05.044373 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:05.044378 1130827 system_pods.go:89] "metrics-server-569cc877fc-frc5k" [d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:09:05.044387 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:09:05.044406 1130827 retry.go:31] will retry after 486.01075ms: missing components: kube-dns
	I0328 01:09:05.539158 1130827 system_pods.go:86] 9 kube-system pods found
	I0328 01:09:05.539204 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Running
	I0328 01:09:05.539213 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Running
	I0328 01:09:05.539219 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:05.539226 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:05.539232 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:05.539238 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Running
	I0328 01:09:05.539244 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:05.539255 1130827 system_pods.go:89] "metrics-server-569cc877fc-frc5k" [d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:09:05.539260 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Running
	I0328 01:09:05.539274 1130827 system_pods.go:126] duration metric: took 2.37828469s to wait for k8s-apps to be running ...
	I0328 01:09:05.539292 1130827 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:09:05.539362 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:09:05.560593 1130827 system_svc.go:56] duration metric: took 21.288819ms WaitForService to wait for kubelet
	I0328 01:09:05.560628 1130827 kubeadm.go:576] duration metric: took 3.054281955s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:09:05.560657 1130827 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:09:05.564453 1130827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:09:05.564489 1130827 node_conditions.go:123] node cpu capacity is 2
	I0328 01:09:05.564502 1130827 node_conditions.go:105] duration metric: took 3.837449ms to run NodePressure ...
	I0328 01:09:05.564517 1130827 start.go:240] waiting for startup goroutines ...
	I0328 01:09:05.564527 1130827 start.go:245] waiting for cluster config update ...
	I0328 01:09:05.564542 1130827 start.go:254] writing updated cluster config ...
	I0328 01:09:05.564843 1130827 ssh_runner.go:195] Run: rm -f paused
	I0328 01:09:05.623218 1130827 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-beta.0 (minor skew: 1)
	I0328 01:09:05.625408 1130827 out.go:177] * Done! kubectl is now configured to use "no-preload-248059" cluster and "default" namespace by default
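	(Before printing "Done!" the run has already confirmed, via the system_pods waits above, that the kube-system workloads such as CoreDNS, kube-proxy and storage-provisioner reached Running. The sketch below shows that readiness criterion with client-go; it assumes a kubeconfig path and is only an illustration of the check the log applies, not minikube's code.)

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// kubeSystemPending lists kube-system pods and returns the names of any
// that are not yet Running, mirroring the "waiting for k8s-apps to be
// running" retries in the log above.
func kubeSystemPending(kubeconfig string) ([]string, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return nil, err
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var pending []string
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			pending = append(pending, p.Name)
		}
	}
	if len(pending) > 0 {
		fmt.Printf("still waiting for: %v\n", pending)
	}
	return pending, nil
}
```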
	I0328 01:09:05.587190 1131323 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:09:05.923219 1131323 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:09:06.087945 1131323 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:09:06.245638 1131323 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:09:06.266195 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:09:06.267461 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:09:06.267551 1131323 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:09:06.434155 1131323 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:09:06.436300 1131323 out.go:204]   - Booting up control plane ...
	I0328 01:09:06.436447 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:09:06.446573 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:09:06.447461 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:09:06.448313 1131323 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:09:06.450917 1131323 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:09:46.453199 1131323 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0328 01:09:46.453386 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:09:46.453643 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:09:51.454402 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:09:51.454665 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:10:01.455189 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:10:01.455417 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:10:21.456491 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:10:21.456726 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:11:01.456972 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:11:01.457256 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:11:01.457269 1131323 kubeadm.go:309] 
	I0328 01:11:01.457310 1131323 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0328 01:11:01.457404 1131323 kubeadm.go:309] 		timed out waiting for the condition
	I0328 01:11:01.457441 1131323 kubeadm.go:309] 
	I0328 01:11:01.457492 1131323 kubeadm.go:309] 	This error is likely caused by:
	I0328 01:11:01.457550 1131323 kubeadm.go:309] 		- The kubelet is not running
	I0328 01:11:01.457696 1131323 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0328 01:11:01.457708 1131323 kubeadm.go:309] 
	I0328 01:11:01.457856 1131323 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0328 01:11:01.457906 1131323 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0328 01:11:01.457935 1131323 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0328 01:11:01.457943 1131323 kubeadm.go:309] 
	I0328 01:11:01.458033 1131323 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0328 01:11:01.458139 1131323 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0328 01:11:01.458155 1131323 kubeadm.go:309] 
	I0328 01:11:01.458331 1131323 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0328 01:11:01.458483 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0328 01:11:01.458594 1131323 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0328 01:11:01.458707 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0328 01:11:01.458718 1131323 kubeadm.go:309] 
	I0328 01:11:01.459597 1131323 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:11:01.459737 1131323 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0328 01:11:01.459822 1131323 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0328 01:11:01.459962 1131323 kubeadm.go:393] duration metric: took 7m59.227261729s to StartCluster
	I0328 01:11:01.460023 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:11:01.460167 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:11:01.522644 1131323 cri.go:89] found id: ""
	I0328 01:11:01.522687 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.522700 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:11:01.522710 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:11:01.522782 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:11:01.567898 1131323 cri.go:89] found id: ""
	I0328 01:11:01.567928 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.567937 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:11:01.567945 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:11:01.568005 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:11:01.604782 1131323 cri.go:89] found id: ""
	I0328 01:11:01.604810 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.604819 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:11:01.604825 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:11:01.604935 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:11:01.642875 1131323 cri.go:89] found id: ""
	I0328 01:11:01.642908 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.642920 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:11:01.642929 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:11:01.642993 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:11:01.682186 1131323 cri.go:89] found id: ""
	I0328 01:11:01.682216 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.682223 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:11:01.682241 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:11:01.682312 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:11:01.720654 1131323 cri.go:89] found id: ""
	I0328 01:11:01.720689 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.720697 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:11:01.720704 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:11:01.720759 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:11:01.757340 1131323 cri.go:89] found id: ""
	I0328 01:11:01.757372 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.757383 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:11:01.757392 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:11:01.757462 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:11:01.797426 1131323 cri.go:89] found id: ""
	I0328 01:11:01.797462 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.797473 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:11:01.797488 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:11:01.797506 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:11:01.859582 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:11:01.859623 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:11:01.876027 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:11:01.876073 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:11:01.966513 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:11:01.966539 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:11:01.966557 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:11:02.084853 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:11:02.084894 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0328 01:11:02.127221 1131323 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0328 01:11:02.127288 1131323 out.go:239] * 
	W0328 01:11:02.127417 1131323 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0328 01:11:02.127456 1131323 out.go:239] * 
	W0328 01:11:02.128313 1131323 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 01:11:02.131916 1131323 out.go:177] 
	W0328 01:11:02.133288 1131323 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0328 01:11:02.133351 1131323 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0328 01:11:02.133381 1131323 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0328 01:11:02.134991 1131323 out.go:177] 
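The suggestion printed above points at a kubelet/CRI-O cgroup-driver mismatch. A minimal retry sketch under stated assumptions: the failing profile name is not shown in this excerpt, so <profile> is a placeholder, and the kvm2 driver is assumed only because this report covers the KVM_Linux_crio job; the Kubernetes version and the --extra-config value come from the log output above.

    # Hypothetical retry with the kubelet pinned to the systemd cgroup driver,
    # as suggested by the log; <profile> stands for the failing minikube profile.
    minikube delete -p <profile>
    minikube start -p <profile> \
      --driver=kvm2 \
      --container-runtime=crio \
      --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd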
	
	
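The checks that the kubeadm output above recommends can also be run directly on the node; the CRI-O and kubelet sections that follow in this report capture roughly the same state. A minimal sketch, assuming the failing node is reachable via 'minikube ssh -p <profile>' (the profile name is a placeholder, as it is not named in this excerpt); every command below is taken from the troubleshooting hints printed by kubeadm or from the health probe it kept retrying.

    # Run on the node, e.g. after: minikube ssh -p <profile>
    sudo systemctl status kubelet                      # is the kubelet service active?
    sudo journalctl -xeu kubelet -n 100                # recent kubelet errors
    curl -sSL http://localhost:10248/healthz           # the probe kubeadm kept retrying
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs <CONTAINERID>
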
	==> CRI-O <==
	Mar 28 01:22:46 no-preload-248059 crio[709]: time="2024-03-28 01:22:46.233456768Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711588966233433347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97399,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c5e9be09-63a5-4cf1-ba02-3d1789c4110f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:22:46 no-preload-248059 crio[709]: time="2024-03-28 01:22:46.234253196Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af41a235-cf8b-4bc9-8f83-c62e8d915c69 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:22:46 no-preload-248059 crio[709]: time="2024-03-28 01:22:46.234327827Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af41a235-cf8b-4bc9-8f83-c62e8d915c69 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:22:46 no-preload-248059 crio[709]: time="2024-03-28 01:22:46.234514789Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2b9a3a8eb8ca9f67fd3d07a31c8119852de5aa2dc7c5dfa2c9dc35a2cc0f49fb,PodSandboxId:0920cf2e87dde2e665fd7b735c88708c4e59d37201f8a0a28e813da8b143468b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711588144939175336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcee5b1-4531-4068-bce7-081d51602015,},Annotations:map[string]string{io.kubernetes.container.hash: 3272bc23,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e25e21af79e013a4972f47527934ce39ffb915fc2de57e34d6c67f4bcbeb3c47,PodSandboxId:53c8b47dbdbb932b1ee62a0c91f702f7689b513c8fb781e5278e402f007c54aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588144460731334,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8zzf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91f329ea-6d6d-45dc-ac77-40a2739249b4,},Annotations:map[string]string{io.kubernetes.container.hash: 3eee10e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f72c2ab4e509668aad306c31e730305e50bd950a3215e2d1aca869727d99b2f,PodSandboxId:59c32c596fb1b4aa2d2ca503f7fb700ff702eb4ca5c25156c4f65be3b7bb5a9f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588144393868121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qtgp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa
c5a4d0-acf3-426c-a81e-d129f94d58f3,},Annotations:map[string]string{io.kubernetes.container.hash: f233c0b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4233f922b7075b65b10e11b3e4e6100d66f2a5d7c2bac926615979defb1956c0,PodSandboxId:3772dd558dadbcaac1079e4aaaa39ba86d97b6da9f327f3d435314a6106066a2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,State:CONTAINER_RUNNING,CreatedAt:
1711588143678898175,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g5f6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9c30bc3-42b1-446f-838b-979489cf661d,},Annotations:map[string]string{io.kubernetes.container.hash: eaa40bd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc243955de3ae2bec25a51813f8f427b5858033de734b70866e943e518b6bd7,PodSandboxId:8ad3533817cba753b0aa138721b2785cc9c424618b40c07f487cd32cb6cd9c42,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711588123345258983,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e69d64566c92e3525f54abb99300da39,},Annotations:map[string]string{io.kubernetes.container.hash: 99e7a510,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985e4e157e023253fd3baf217e792648385a081fdff62f924e938ffe7eb2b80d,PodSandboxId:8314db254c90fe303494976ded25ab371bc516f0527f30576dbe6e5580f09ac6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_RUNNING,CreatedAt:1711588123285421818,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd8f772112ebebea502645fbe658d615,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20179eaa0c7f1d12ca086349dfdb854d43ee9515f46f5fb49de086de286cbc3c,PodSandboxId:bd5cbe84cfa9bc60450e4fd2635c4ff9bcac69d23ae1fb8e9040a9b99fc5f7ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_RUNNING,CreatedAt:1711588123253501175,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a38351dbe7f1abafd21396e32b13b05,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c238f08ea2841d188d12c6e86c187d972d4d38c3032bf9dffe8d5d0a2482debc,PodSandboxId:45bd5b0d85da6648fc0e145c9826d36915fc738285c0af6dfe315f954bbee165,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_RUNNING,CreatedAt:1711588123178228047,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f055c15fe98f52895520db52ff8bcf3b,},Annotations:map[string]string{io.kubernetes.container.hash: 54931dd1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af41a235-cf8b-4bc9-8f83-c62e8d915c69 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:22:46 no-preload-248059 crio[709]: time="2024-03-28 01:22:46.276124305Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ae052d3d-11bf-4b11-8343-f108dd2c636f name=/runtime.v1.RuntimeService/Version
	Mar 28 01:22:46 no-preload-248059 crio[709]: time="2024-03-28 01:22:46.276213982Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ae052d3d-11bf-4b11-8343-f108dd2c636f name=/runtime.v1.RuntimeService/Version
	Mar 28 01:22:46 no-preload-248059 crio[709]: time="2024-03-28 01:22:46.277970671Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eec0888b-787e-4c6b-b586-bacc2c84083f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:22:46 no-preload-248059 crio[709]: time="2024-03-28 01:22:46.278457962Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711588966278423763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97399,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eec0888b-787e-4c6b-b586-bacc2c84083f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:22:46 no-preload-248059 crio[709]: time="2024-03-28 01:22:46.279104203Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=80ce0275-aea8-4680-b735-d966106a7358 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:22:46 no-preload-248059 crio[709]: time="2024-03-28 01:22:46.279164233Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=80ce0275-aea8-4680-b735-d966106a7358 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:22:46 no-preload-248059 crio[709]: time="2024-03-28 01:22:46.279338228Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2b9a3a8eb8ca9f67fd3d07a31c8119852de5aa2dc7c5dfa2c9dc35a2cc0f49fb,PodSandboxId:0920cf2e87dde2e665fd7b735c88708c4e59d37201f8a0a28e813da8b143468b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711588144939175336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcee5b1-4531-4068-bce7-081d51602015,},Annotations:map[string]string{io.kubernetes.container.hash: 3272bc23,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e25e21af79e013a4972f47527934ce39ffb915fc2de57e34d6c67f4bcbeb3c47,PodSandboxId:53c8b47dbdbb932b1ee62a0c91f702f7689b513c8fb781e5278e402f007c54aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588144460731334,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8zzf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91f329ea-6d6d-45dc-ac77-40a2739249b4,},Annotations:map[string]string{io.kubernetes.container.hash: 3eee10e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f72c2ab4e509668aad306c31e730305e50bd950a3215e2d1aca869727d99b2f,PodSandboxId:59c32c596fb1b4aa2d2ca503f7fb700ff702eb4ca5c25156c4f65be3b7bb5a9f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588144393868121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qtgp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa
c5a4d0-acf3-426c-a81e-d129f94d58f3,},Annotations:map[string]string{io.kubernetes.container.hash: f233c0b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4233f922b7075b65b10e11b3e4e6100d66f2a5d7c2bac926615979defb1956c0,PodSandboxId:3772dd558dadbcaac1079e4aaaa39ba86d97b6da9f327f3d435314a6106066a2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,State:CONTAINER_RUNNING,CreatedAt:
1711588143678898175,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g5f6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9c30bc3-42b1-446f-838b-979489cf661d,},Annotations:map[string]string{io.kubernetes.container.hash: eaa40bd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc243955de3ae2bec25a51813f8f427b5858033de734b70866e943e518b6bd7,PodSandboxId:8ad3533817cba753b0aa138721b2785cc9c424618b40c07f487cd32cb6cd9c42,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711588123345258983,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e69d64566c92e3525f54abb99300da39,},Annotations:map[string]string{io.kubernetes.container.hash: 99e7a510,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985e4e157e023253fd3baf217e792648385a081fdff62f924e938ffe7eb2b80d,PodSandboxId:8314db254c90fe303494976ded25ab371bc516f0527f30576dbe6e5580f09ac6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_RUNNING,CreatedAt:1711588123285421818,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd8f772112ebebea502645fbe658d615,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20179eaa0c7f1d12ca086349dfdb854d43ee9515f46f5fb49de086de286cbc3c,PodSandboxId:bd5cbe84cfa9bc60450e4fd2635c4ff9bcac69d23ae1fb8e9040a9b99fc5f7ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_RUNNING,CreatedAt:1711588123253501175,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a38351dbe7f1abafd21396e32b13b05,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c238f08ea2841d188d12c6e86c187d972d4d38c3032bf9dffe8d5d0a2482debc,PodSandboxId:45bd5b0d85da6648fc0e145c9826d36915fc738285c0af6dfe315f954bbee165,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_RUNNING,CreatedAt:1711588123178228047,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f055c15fe98f52895520db52ff8bcf3b,},Annotations:map[string]string{io.kubernetes.container.hash: 54931dd1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=80ce0275-aea8-4680-b735-d966106a7358 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:22:46 no-preload-248059 crio[709]: time="2024-03-28 01:22:46.318713040Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2243cf10-6e5d-4988-a424-292ecb88f263 name=/runtime.v1.RuntimeService/Version
	Mar 28 01:22:46 no-preload-248059 crio[709]: time="2024-03-28 01:22:46.318786290Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2243cf10-6e5d-4988-a424-292ecb88f263 name=/runtime.v1.RuntimeService/Version
	Mar 28 01:22:46 no-preload-248059 crio[709]: time="2024-03-28 01:22:46.319724976Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b8005ede-f469-4b9d-8372-673d3041f5e0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:22:46 no-preload-248059 crio[709]: time="2024-03-28 01:22:46.320068191Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711588966320049139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97399,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8005ede-f469-4b9d-8372-673d3041f5e0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:22:46 no-preload-248059 crio[709]: time="2024-03-28 01:22:46.320732309Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=23d508f7-f306-4f94-9ccf-7781ccf081c7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:22:46 no-preload-248059 crio[709]: time="2024-03-28 01:22:46.320783526Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=23d508f7-f306-4f94-9ccf-7781ccf081c7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:22:46 no-preload-248059 crio[709]: time="2024-03-28 01:22:46.320986791Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2b9a3a8eb8ca9f67fd3d07a31c8119852de5aa2dc7c5dfa2c9dc35a2cc0f49fb,PodSandboxId:0920cf2e87dde2e665fd7b735c88708c4e59d37201f8a0a28e813da8b143468b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711588144939175336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcee5b1-4531-4068-bce7-081d51602015,},Annotations:map[string]string{io.kubernetes.container.hash: 3272bc23,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e25e21af79e013a4972f47527934ce39ffb915fc2de57e34d6c67f4bcbeb3c47,PodSandboxId:53c8b47dbdbb932b1ee62a0c91f702f7689b513c8fb781e5278e402f007c54aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588144460731334,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8zzf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91f329ea-6d6d-45dc-ac77-40a2739249b4,},Annotations:map[string]string{io.kubernetes.container.hash: 3eee10e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f72c2ab4e509668aad306c31e730305e50bd950a3215e2d1aca869727d99b2f,PodSandboxId:59c32c596fb1b4aa2d2ca503f7fb700ff702eb4ca5c25156c4f65be3b7bb5a9f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588144393868121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qtgp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa
c5a4d0-acf3-426c-a81e-d129f94d58f3,},Annotations:map[string]string{io.kubernetes.container.hash: f233c0b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4233f922b7075b65b10e11b3e4e6100d66f2a5d7c2bac926615979defb1956c0,PodSandboxId:3772dd558dadbcaac1079e4aaaa39ba86d97b6da9f327f3d435314a6106066a2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,State:CONTAINER_RUNNING,CreatedAt:
1711588143678898175,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g5f6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9c30bc3-42b1-446f-838b-979489cf661d,},Annotations:map[string]string{io.kubernetes.container.hash: eaa40bd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc243955de3ae2bec25a51813f8f427b5858033de734b70866e943e518b6bd7,PodSandboxId:8ad3533817cba753b0aa138721b2785cc9c424618b40c07f487cd32cb6cd9c42,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711588123345258983,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e69d64566c92e3525f54abb99300da39,},Annotations:map[string]string{io.kubernetes.container.hash: 99e7a510,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985e4e157e023253fd3baf217e792648385a081fdff62f924e938ffe7eb2b80d,PodSandboxId:8314db254c90fe303494976ded25ab371bc516f0527f30576dbe6e5580f09ac6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_RUNNING,CreatedAt:1711588123285421818,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd8f772112ebebea502645fbe658d615,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20179eaa0c7f1d12ca086349dfdb854d43ee9515f46f5fb49de086de286cbc3c,PodSandboxId:bd5cbe84cfa9bc60450e4fd2635c4ff9bcac69d23ae1fb8e9040a9b99fc5f7ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_RUNNING,CreatedAt:1711588123253501175,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a38351dbe7f1abafd21396e32b13b05,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c238f08ea2841d188d12c6e86c187d972d4d38c3032bf9dffe8d5d0a2482debc,PodSandboxId:45bd5b0d85da6648fc0e145c9826d36915fc738285c0af6dfe315f954bbee165,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_RUNNING,CreatedAt:1711588123178228047,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f055c15fe98f52895520db52ff8bcf3b,},Annotations:map[string]string{io.kubernetes.container.hash: 54931dd1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=23d508f7-f306-4f94-9ccf-7781ccf081c7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:22:46 no-preload-248059 crio[709]: time="2024-03-28 01:22:46.354775941Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9550eec9-69de-4196-9318-a40dcd02e138 name=/runtime.v1.RuntimeService/Version
	Mar 28 01:22:46 no-preload-248059 crio[709]: time="2024-03-28 01:22:46.354862673Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9550eec9-69de-4196-9318-a40dcd02e138 name=/runtime.v1.RuntimeService/Version
	Mar 28 01:22:46 no-preload-248059 crio[709]: time="2024-03-28 01:22:46.357020325Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6378f0b8-f4e1-4a51-8f49-31ec6824b7bb name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:22:46 no-preload-248059 crio[709]: time="2024-03-28 01:22:46.357384401Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711588966357360433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97399,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6378f0b8-f4e1-4a51-8f49-31ec6824b7bb name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:22:46 no-preload-248059 crio[709]: time="2024-03-28 01:22:46.358496548Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a923976e-4c80-42b6-bfcd-1cda91d8b971 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:22:46 no-preload-248059 crio[709]: time="2024-03-28 01:22:46.358659391Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a923976e-4c80-42b6-bfcd-1cda91d8b971 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:22:46 no-preload-248059 crio[709]: time="2024-03-28 01:22:46.358858939Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2b9a3a8eb8ca9f67fd3d07a31c8119852de5aa2dc7c5dfa2c9dc35a2cc0f49fb,PodSandboxId:0920cf2e87dde2e665fd7b735c88708c4e59d37201f8a0a28e813da8b143468b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711588144939175336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcee5b1-4531-4068-bce7-081d51602015,},Annotations:map[string]string{io.kubernetes.container.hash: 3272bc23,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e25e21af79e013a4972f47527934ce39ffb915fc2de57e34d6c67f4bcbeb3c47,PodSandboxId:53c8b47dbdbb932b1ee62a0c91f702f7689b513c8fb781e5278e402f007c54aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588144460731334,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8zzf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91f329ea-6d6d-45dc-ac77-40a2739249b4,},Annotations:map[string]string{io.kubernetes.container.hash: 3eee10e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f72c2ab4e509668aad306c31e730305e50bd950a3215e2d1aca869727d99b2f,PodSandboxId:59c32c596fb1b4aa2d2ca503f7fb700ff702eb4ca5c25156c4f65be3b7bb5a9f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711588144393868121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qtgp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa
c5a4d0-acf3-426c-a81e-d129f94d58f3,},Annotations:map[string]string{io.kubernetes.container.hash: f233c0b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4233f922b7075b65b10e11b3e4e6100d66f2a5d7c2bac926615979defb1956c0,PodSandboxId:3772dd558dadbcaac1079e4aaaa39ba86d97b6da9f327f3d435314a6106066a2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8,State:CONTAINER_RUNNING,CreatedAt:
1711588143678898175,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g5f6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9c30bc3-42b1-446f-838b-979489cf661d,},Annotations:map[string]string{io.kubernetes.container.hash: eaa40bd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc243955de3ae2bec25a51813f8f427b5858033de734b70866e943e518b6bd7,PodSandboxId:8ad3533817cba753b0aa138721b2785cc9c424618b40c07f487cd32cb6cd9c42,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711588123345258983,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e69d64566c92e3525f54abb99300da39,},Annotations:map[string]string{io.kubernetes.container.hash: 99e7a510,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985e4e157e023253fd3baf217e792648385a081fdff62f924e938ffe7eb2b80d,PodSandboxId:8314db254c90fe303494976ded25ab371bc516f0527f30576dbe6e5580f09ac6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac,State:CONTAINER_RUNNING,CreatedAt:1711588123285421818,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd8f772112ebebea502645fbe658d615,},Annotations:map[string]string{io.kubernetes.container.hash: 27285f37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20179eaa0c7f1d12ca086349dfdb854d43ee9515f46f5fb49de086de286cbc3c,PodSandboxId:bd5cbe84cfa9bc60450e4fd2635c4ff9bcac69d23ae1fb8e9040a9b99fc5f7ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841,State:CONTAINER_RUNNING,CreatedAt:1711588123253501175,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a38351dbe7f1abafd21396e32b13b05,},Annotations:map[string]string{io.kubernetes.container.hash: 3378c71d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c238f08ea2841d188d12c6e86c187d972d4d38c3032bf9dffe8d5d0a2482debc,PodSandboxId:45bd5b0d85da6648fc0e145c9826d36915fc738285c0af6dfe315f954bbee165,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa,State:CONTAINER_RUNNING,CreatedAt:1711588123178228047,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-248059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f055c15fe98f52895520db52ff8bcf3b,},Annotations:map[string]string{io.kubernetes.container.hash: 54931dd1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a923976e-4c80-42b6-bfcd-1cda91d8b971 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2b9a3a8eb8ca9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   0920cf2e87dde       storage-provisioner
	e25e21af79e01       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   53c8b47dbdbb9       coredns-7db6d8ff4d-8zzf5
	9f72c2ab4e509       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   59c32c596fb1b       coredns-7db6d8ff4d-qtgp9
	4233f922b7075       3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8   13 minutes ago      Running             kube-proxy                0                   3772dd558dadb       kube-proxy-g5f6g
	9bc243955de3a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   14 minutes ago      Running             etcd                      2                   8ad3533817cba       etcd-no-preload-248059
	985e4e157e023       746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac   14 minutes ago      Running             kube-scheduler            2                   8314db254c90f       kube-scheduler-no-preload-248059
	20179eaa0c7f1       f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841   14 minutes ago      Running             kube-controller-manager   2                   bd5cbe84cfa9b       kube-controller-manager-no-preload-248059
	c238f08ea2841       c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa   14 minutes ago      Running             kube-apiserver            2                   45bd5b0d85da6       kube-apiserver-no-preload-248059
	
	
	==> coredns [9f72c2ab4e509668aad306c31e730305e50bd950a3215e2d1aca869727d99b2f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [e25e21af79e013a4972f47527934ce39ffb915fc2de57e34d6c67f4bcbeb3c47] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-248059
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-248059
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=no-preload-248059
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_28T01_08_49_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 01:08:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-248059
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 01:22:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 01:19:22 +0000   Thu, 28 Mar 2024 01:08:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 01:19:22 +0000   Thu, 28 Mar 2024 01:08:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 01:19:22 +0000   Thu, 28 Mar 2024 01:08:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 01:19:22 +0000   Thu, 28 Mar 2024 01:08:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.107
	  Hostname:    no-preload-248059
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 89bcb29939fb40d7ac8ffbe51d037041
	  System UUID:                89bcb299-39fb-40d7-ac8f-fbe51d037041
	  Boot ID:                    0ed144c6-e0e9-469d-b22e-b6114c7629e1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-beta.0
	  Kube-Proxy Version:         v1.30.0-beta.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-8zzf5                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-qtgp9                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-no-preload-248059                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-no-preload-248059             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-no-preload-248059    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-g5f6g                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-no-preload-248059             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-569cc877fc-frc5k              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node no-preload-248059 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node no-preload-248059 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node no-preload-248059 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m   node-controller  Node no-preload-248059 event: Registered Node no-preload-248059 in Controller
	
	
	==> dmesg <==
	[  +0.041276] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.795924] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.836951] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.681573] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.226730] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.063221] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071498] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.177482] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.182987] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.332151] systemd-fstab-generator[692]: Ignoring "noauto" option for root device
	[ +17.175029] systemd-fstab-generator[1204]: Ignoring "noauto" option for root device
	[  +0.062262] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.204612] systemd-fstab-generator[1329]: Ignoring "noauto" option for root device
	[  +2.954320] kauditd_printk_skb: 97 callbacks suppressed
	[Mar28 01:04] kauditd_printk_skb: 52 callbacks suppressed
	[  +9.133446] kauditd_printk_skb: 20 callbacks suppressed
	[Mar28 01:08] kauditd_printk_skb: 4 callbacks suppressed
	[  +2.764349] systemd-fstab-generator[3840]: Ignoring "noauto" option for root device
	[  +6.602970] systemd-fstab-generator[4160]: Ignoring "noauto" option for root device
	[  +0.088395] kauditd_printk_skb: 57 callbacks suppressed
	[Mar28 01:09] systemd-fstab-generator[4368]: Ignoring "noauto" option for root device
	[  +0.091370] kauditd_printk_skb: 12 callbacks suppressed
	[ +57.456959] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [9bc243955de3ae2bec25a51813f8f427b5858033de734b70866e943e518b6bd7] <==
	{"level":"info","ts":"2024-03-28T01:08:44.114647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a1421f129b0f3c4 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-28T01:08:44.114748Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a1421f129b0f3c4 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-28T01:08:44.114792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a1421f129b0f3c4 received MsgPreVoteResp from 7a1421f129b0f3c4 at term 1"}
	{"level":"info","ts":"2024-03-28T01:08:44.114811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a1421f129b0f3c4 became candidate at term 2"}
	{"level":"info","ts":"2024-03-28T01:08:44.114817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a1421f129b0f3c4 received MsgVoteResp from 7a1421f129b0f3c4 at term 2"}
	{"level":"info","ts":"2024-03-28T01:08:44.114824Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a1421f129b0f3c4 became leader at term 2"}
	{"level":"info","ts":"2024-03-28T01:08:44.114834Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7a1421f129b0f3c4 elected leader 7a1421f129b0f3c4 at term 2"}
	{"level":"info","ts":"2024-03-28T01:08:44.118853Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7a1421f129b0f3c4","local-member-attributes":"{Name:no-preload-248059 ClientURLs:[https://192.168.61.107:2379]}","request-path":"/0/members/7a1421f129b0f3c4/attributes","cluster-id":"740117290cb61fd6","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-28T01:08:44.119039Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T01:08:44.119222Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T01:08:44.122645Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T01:08:44.124772Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-28T01:08:44.124896Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-28T01:08:44.129098Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-28T01:08:44.133509Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.107:2379"}
	{"level":"info","ts":"2024-03-28T01:08:44.171921Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"740117290cb61fd6","local-member-id":"7a1421f129b0f3c4","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T01:08:44.172062Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T01:08:44.172118Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T01:18:44.195766Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":673}
	{"level":"info","ts":"2024-03-28T01:18:44.206854Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":673,"took":"10.401251ms","hash":2748906154,"current-db-size-bytes":2220032,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2220032,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-03-28T01:18:44.206963Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2748906154,"revision":673,"compact-revision":-1}
	{"level":"warn","ts":"2024-03-28T01:19:13.310458Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.142928ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17565321138234766537 > lease_revoke:<id:73c48e829a24b477>","response":"size:27"}
	{"level":"info","ts":"2024-03-28T01:19:13.310849Z","caller":"traceutil/trace.go:171","msg":"trace[1843681177] linearizableReadLoop","detail":"{readStateIndex:1082; appliedIndex:1081; }","duration":"132.337905ms","start":"2024-03-28T01:19:13.17846Z","end":"2024-03-28T01:19:13.310798Z","steps":["trace[1843681177] 'read index received'  (duration: 516.552µs)","trace[1843681177] 'applied index is now lower than readState.Index'  (duration: 131.81928ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-28T01:19:13.311427Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.90788ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-28T01:19:13.311504Z","caller":"traceutil/trace.go:171","msg":"trace[542074983] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:941; }","duration":"133.054058ms","start":"2024-03-28T01:19:13.178432Z","end":"2024-03-28T01:19:13.311486Z","steps":["trace[542074983] 'agreement among raft nodes before linearized reading'  (duration: 132.906001ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:22:46 up 19 min,  0 users,  load average: 0.03, 0.15, 0.16
	Linux no-preload-248059 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c238f08ea2841d188d12c6e86c187d972d4d38c3032bf9dffe8d5d0a2482debc] <==
	I0328 01:16:47.018624       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:18:46.020410       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:18:46.020659       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0328 01:18:47.021380       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:18:47.021478       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0328 01:18:47.021486       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:18:47.021684       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:18:47.021793       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0328 01:18:47.026746       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:19:47.021922       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:19:47.022178       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0328 01:19:47.022260       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:19:47.027125       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:19:47.027215       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0328 01:19:47.027243       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:21:47.023516       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:21:47.023744       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0328 01:21:47.023764       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0328 01:21:47.028212       1 handler_proxy.go:93] no RequestInfo found in the context
	E0328 01:21:47.028871       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0328 01:21:47.028986       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [20179eaa0c7f1d12ca086349dfdb854d43ee9515f46f5fb49de086de286cbc3c] <==
	I0328 01:17:02.911758       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:17:32.428116       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:17:32.921213       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:18:02.435464       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:18:02.930367       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:18:32.440903       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:18:32.938993       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:19:02.448084       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:19:02.947903       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:19:32.455130       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:19:32.955730       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0328 01:19:56.042014       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="468.001µs"
	E0328 01:20:02.461414       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:20:02.966010       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0328 01:20:09.043227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="63.202µs"
	E0328 01:20:32.468679       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:20:32.974536       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:21:02.475766       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:21:02.982848       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:21:32.481520       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:21:32.990748       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:22:02.488459       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:22:03.001675       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0328 01:22:32.494221       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0328 01:22:33.010646       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [4233f922b7075b65b10e11b3e4e6100d66f2a5d7c2bac926615979defb1956c0] <==
	I0328 01:09:04.971527       1 server_linux.go:69] "Using iptables proxy"
	I0328 01:09:05.005404       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.107"]
	I0328 01:09:05.107825       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0328 01:09:05.107929       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 01:09:05.107960       1 server_linux.go:165] "Using iptables Proxier"
	I0328 01:09:05.112423       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 01:09:05.112848       1 server.go:872] "Version info" version="v1.30.0-beta.0"
	I0328 01:09:05.113114       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:09:05.114770       1 config.go:192] "Starting service config controller"
	I0328 01:09:05.114843       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0328 01:09:05.114898       1 config.go:101] "Starting endpoint slice config controller"
	I0328 01:09:05.114915       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0328 01:09:05.115708       1 config.go:319] "Starting node config controller"
	I0328 01:09:05.115761       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0328 01:09:05.215634       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0328 01:09:05.215864       1 shared_informer.go:320] Caches are synced for service config
	I0328 01:09:05.216216       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [985e4e157e023253fd3baf217e792648385a081fdff62f924e938ffe7eb2b80d] <==
	W0328 01:08:46.078456       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0328 01:08:46.078484       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0328 01:08:46.078534       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0328 01:08:46.078634       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0328 01:08:46.078662       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0328 01:08:46.078687       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0328 01:08:46.079007       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0328 01:08:46.079095       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0328 01:08:46.894692       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0328 01:08:46.894748       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0328 01:08:47.021016       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0328 01:08:47.021167       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0328 01:08:47.077631       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0328 01:08:47.077689       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0328 01:08:47.312889       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0328 01:08:47.312951       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0328 01:08:47.338273       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0328 01:08:47.338330       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 01:08:47.348239       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0328 01:08:47.348350       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0328 01:08:47.363004       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0328 01:08:47.363069       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0328 01:08:47.397932       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0328 01:08:47.397991       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:08:49.464402       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 28 01:22:11 no-preload-248059 kubelet[4167]: E0328 01:22:11.023799    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:22:11 no-preload-248059 kubelet[4167]: E0328 01:22:11.026935    4167 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-frc5k" podUID="d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd"
	Mar 28 01:22:23 no-preload-248059 kubelet[4167]: E0328 01:22:23.022005    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:22:23 no-preload-248059 kubelet[4167]: E0328 01:22:23.022360    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:22:23 no-preload-248059 kubelet[4167]: E0328 01:22:23.022399    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:22:26 no-preload-248059 kubelet[4167]: E0328 01:22:26.021501    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:22:26 no-preload-248059 kubelet[4167]: E0328 01:22:26.021956    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:22:26 no-preload-248059 kubelet[4167]: E0328 01:22:26.021998    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:22:26 no-preload-248059 kubelet[4167]: E0328 01:22:26.023485    4167 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-frc5k" podUID="d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd"
	Mar 28 01:22:30 no-preload-248059 kubelet[4167]: E0328 01:22:30.021218    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:22:30 no-preload-248059 kubelet[4167]: E0328 01:22:30.021284    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:22:30 no-preload-248059 kubelet[4167]: E0328 01:22:30.021292    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:22:33 no-preload-248059 kubelet[4167]: E0328 01:22:33.022197    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:22:33 no-preload-248059 kubelet[4167]: E0328 01:22:33.023202    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:22:33 no-preload-248059 kubelet[4167]: E0328 01:22:33.023298    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:22:41 no-preload-248059 kubelet[4167]: E0328 01:22:41.022061    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:22:41 no-preload-248059 kubelet[4167]: E0328 01:22:41.022135    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:22:41 no-preload-248059 kubelet[4167]: E0328 01:22:41.022142    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:22:41 no-preload-248059 kubelet[4167]: E0328 01:22:41.024092    4167 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-frc5k" podUID="d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd"
	Mar 28 01:22:44 no-preload-248059 kubelet[4167]: E0328 01:22:44.021755    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:22:44 no-preload-248059 kubelet[4167]: E0328 01:22:44.021884    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:22:44 no-preload-248059 kubelet[4167]: E0328 01:22:44.021907    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:22:45 no-preload-248059 kubelet[4167]: E0328 01:22:45.021987    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:22:45 no-preload-248059 kubelet[4167]: E0328 01:22:45.022325    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	Mar 28 01:22:45 no-preload-248059 kubelet[4167]: E0328 01:22:45.022447    4167 kubelet_pods.go:2464] "unknown runtime class" runtimeClassName=""
	
	
	==> storage-provisioner [2b9a3a8eb8ca9f67fd3d07a31c8119852de5aa2dc7c5dfa2c9dc35a2cc0f49fb] <==
	I0328 01:09:05.107439       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0328 01:09:05.130640       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0328 01:09:05.130718       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0328 01:09:05.144443       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0328 01:09:05.144673       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-248059_7c0dcb76-d036-45a2-95c2-ef87401c31ce!
	I0328 01:09:05.148985       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fd5ac8e3-42ab-4e5e-876e-864a1f13c990", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-248059_7c0dcb76-d036-45a2-95c2-ef87401c31ce became leader
	I0328 01:09:05.246773       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-248059_7c0dcb76-d036-45a2-95c2-ef87401c31ce!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-248059 -n no-preload-248059
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-248059 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-frc5k
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-248059 describe pod metrics-server-569cc877fc-frc5k
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-248059 describe pod metrics-server-569cc877fc-frc5k: exit status 1 (68.098306ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-frc5k" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-248059 describe pod metrics-server-569cc877fc-frc5k: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (278.49s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (156.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.174:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.174:8443: connect: connection refused
	[the WARNING above recurred 150 times in total while 192.168.50.174:8443 refused connections]
E0328 01:20:28.663519 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kindnet-443419/client.crt: no such file or directory
E0328 01:21:14.356082 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
E0328 01:21:19.153718 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/calico-443419/client.crt: no such file or directory
E0328 01:21:21.207601 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
E0328 01:21:47.935766 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/custom-flannel-443419/client.crt: no such file or directory
E0328 01:22:10.821181 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/enable-default-cni-443419/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-986088 -n old-k8s-version-986088
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-986088 -n old-k8s-version-986088: exit status 2 (259.523445ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-986088" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-986088 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-986088 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.501µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-986088 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
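The "failed to start within 9m0s: context deadline exceeded" outcome above is the result of a label-selector poll that keeps retrying until its context deadline expires; while the apiserver is stopped, every poll returns "connection refused", which is what the repeated WARNING lines record. The sketch below is a minimal client-go illustration of that pattern, not the actual helpers_test.go code; the package layout, kubeconfig handling, and the 3-second poll interval are assumptions.

	// Illustrative sketch only: poll for pods by label selector until a deadline.
	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the default kubeconfig (assumed path resolution).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// 9m0s overall deadline, matching the timeout reported in the failure above.
		ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
		defer cancel()

		for {
			pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				// While the apiserver is unreachable, every poll fails like the
				// "connection refused" WARNING lines in the log above.
				fmt.Println("WARNING: pod list failed:", err)
			} else if len(pods.Items) > 0 {
				fmt.Println("dashboard pod(s) found:", len(pods.Items))
				return
			}
			select {
			case <-ctx.Done():
				// This branch corresponds to "context deadline exceeded" in the log.
				fmt.Println("timed out:", ctx.Err())
				return
			case <-time.After(3 * time.Second):
			}
		}
	}

With the apiserver status reported as "Stopped" above, every iteration takes the error branch until the deadline fires, which is why the subsequent kubectl describe of deploy/dashboard-metrics-scraper also fails immediately with context deadline exceeded.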
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-986088 -n old-k8s-version-986088
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-986088 -n old-k8s-version-986088: exit status 2 (253.790911ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-986088 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-986088 logs -n 25: (1.542318974s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| stop    | -p no-preload-248059                                   | no-preload-248059            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-808809            | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-808809                                  | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p newest-cni-013642             | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p newest-cni-013642                  | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p newest-cni-013642 --memory=2200 --alsologtostderr   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:55 UTC | 28 Mar 24 00:56 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |                |                     |                     |
	| image   | newest-cni-013642 image list                           | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	|         | --format=json                                          |                              |         |                |                     |                     |
	| pause   | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| unpause | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| delete  | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	| delete  | -p newest-cni-013642                                   | newest-cni-013642            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:56 UTC |
	| start   | -p                                                     | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:56 UTC | 28 Mar 24 00:57 UTC |
	|         | default-k8s-diff-port-283961                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-986088        | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-248059                  | no-preload-248059            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-283961  | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC | 28 Mar 24 00:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | default-k8s-diff-port-283961                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| start   | -p no-preload-248059 --memory=2200                     | no-preload-248059            | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC | 28 Mar 24 01:09 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-808809                 | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-808809                                  | embed-certs-808809           | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:57 UTC | 28 Mar 24 01:07 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-986088                              | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC | 28 Mar 24 00:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-986088             | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC | 28 Mar 24 00:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-986088                              | old-k8s-version-986088       | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-283961       | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-283961 | jenkins | v1.33.0-beta.0 | 28 Mar 24 01:00 UTC | 28 Mar 24 01:08 UTC |
	|         | default-k8s-diff-port-283961                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 01:00:05
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 01:00:05.675380 1131600 out.go:291] Setting OutFile to fd 1 ...
	I0328 01:00:05.675675 1131600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 01:00:05.675710 1131600 out.go:304] Setting ErrFile to fd 2...
	I0328 01:00:05.675718 1131600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 01:00:05.676017 1131600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 01:00:05.676919 1131600 out.go:298] Setting JSON to false
	I0328 01:00:05.678046 1131600 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":31303,"bootTime":1711556303,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0328 01:00:05.678129 1131600 start.go:139] virtualization: kvm guest
	I0328 01:00:05.681128 1131600 out.go:177] * [default-k8s-diff-port-283961] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0328 01:00:05.683139 1131600 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 01:00:05.683129 1131600 notify.go:220] Checking for updates...
	I0328 01:00:05.685082 1131600 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 01:00:05.686765 1131600 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:00:05.688389 1131600 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	I0328 01:00:05.690187 1131600 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0328 01:00:05.691887 1131600 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 01:00:05.693775 1131600 config.go:182] Loaded profile config "default-k8s-diff-port-283961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:00:05.694270 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:00:05.694323 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:00:05.709757 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35333
	I0328 01:00:05.710275 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:00:05.710875 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:00:05.710900 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:00:05.711323 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:00:05.711531 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:00:05.711893 1131600 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 01:00:05.712342 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:00:05.712392 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:00:05.727583 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44249
	I0328 01:00:05.728107 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:00:05.728595 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:00:05.728625 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:00:05.728945 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:00:05.729170 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:00:05.763895 1131600 out.go:177] * Using the kvm2 driver based on existing profile
	I0328 01:00:05.765397 1131600 start.go:297] selected driver: kvm2
	I0328 01:00:05.765431 1131600 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:00:05.765564 1131600 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 01:00:05.766282 1131600 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 01:00:05.766391 1131600 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18485-1069254/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0328 01:00:05.783130 1131600 install.go:137] /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0328 01:00:05.783602 1131600 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:00:05.783724 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:00:05.783745 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:00:05.783795 1131600 start.go:340] cluster config:
	{Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:00:05.783949 1131600 iso.go:125] acquiring lock: {Name:mk3da1fa7d63a581c817f327d86a827697458cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 01:00:05.785871 1131600 out.go:177] * Starting "default-k8s-diff-port-283961" primary control-plane node in "default-k8s-diff-port-283961" cluster
	I0328 01:00:02.570474 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:05.787210 1131600 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 01:00:05.787259 1131600 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0328 01:00:05.787272 1131600 cache.go:56] Caching tarball of preloaded images
	I0328 01:00:05.787364 1131600 preload.go:173] Found /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0328 01:00:05.787376 1131600 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0328 01:00:05.787509 1131600 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/config.json ...
	I0328 01:00:05.787742 1131600 start.go:360] acquireMachinesLock for default-k8s-diff-port-283961: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 01:00:08.650481 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:11.722571 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:17.802536 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:20.874568 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:26.954473 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:30.026674 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:36.106489 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:39.178555 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:45.258539 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:48.330581 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:54.410577 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:00:57.482545 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:03.562558 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:06.634602 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:12.714559 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:15.786597 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:21.866544 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:24.938619 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:31.018631 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:34.090562 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:40.170864 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:43.242565 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:49.322492 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:52.394572 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:01:58.474562 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:01.546621 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:07.626510 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:10.698534 1130827 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.107:22: connect: no route to host
	I0328 01:02:13.703348 1130949 start.go:364] duration metric: took 4m25.677777198s to acquireMachinesLock for "embed-certs-808809"
	I0328 01:02:13.703416 1130949 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:02:13.703429 1130949 fix.go:54] fixHost starting: 
	I0328 01:02:13.703888 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:02:13.703923 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:02:13.719480 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44835
	I0328 01:02:13.719968 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:02:13.720450 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:02:13.720475 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:02:13.720774 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:02:13.721011 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:13.721182 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:02:13.722796 1130949 fix.go:112] recreateIfNeeded on embed-certs-808809: state=Stopped err=<nil>
	I0328 01:02:13.722828 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	W0328 01:02:13.722972 1130949 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:02:13.724895 1130949 out.go:177] * Restarting existing kvm2 VM for "embed-certs-808809" ...
	I0328 01:02:13.700647 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:02:13.700689 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:02:13.701054 1130827 buildroot.go:166] provisioning hostname "no-preload-248059"
	I0328 01:02:13.701085 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:02:13.701344 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:02:13.703200 1130827 machine.go:97] duration metric: took 4m37.399616994s to provisionDockerMachine
	I0328 01:02:13.703243 1130827 fix.go:56] duration metric: took 4m37.42352766s for fixHost
	I0328 01:02:13.703249 1130827 start.go:83] releasing machines lock for "no-preload-248059", held for 4m37.423563163s
	W0328 01:02:13.703274 1130827 start.go:713] error starting host: provision: host is not running
	W0328 01:02:13.703400 1130827 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0328 01:02:13.703411 1130827 start.go:728] Will try again in 5 seconds ...
	I0328 01:02:13.726437 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Start
	I0328 01:02:13.726574 1130949 main.go:141] libmachine: (embed-certs-808809) Ensuring networks are active...
	I0328 01:02:13.727407 1130949 main.go:141] libmachine: (embed-certs-808809) Ensuring network default is active
	I0328 01:02:13.727667 1130949 main.go:141] libmachine: (embed-certs-808809) Ensuring network mk-embed-certs-808809 is active
	I0328 01:02:13.728050 1130949 main.go:141] libmachine: (embed-certs-808809) Getting domain xml...
	I0328 01:02:13.728836 1130949 main.go:141] libmachine: (embed-certs-808809) Creating domain...
	I0328 01:02:14.931757 1130949 main.go:141] libmachine: (embed-certs-808809) Waiting to get IP...
	I0328 01:02:14.932921 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:14.933298 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:14.933396 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:14.933294 1131950 retry.go:31] will retry after 279.257708ms: waiting for machine to come up
	I0328 01:02:15.213830 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:15.214439 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:15.214472 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:15.214415 1131950 retry.go:31] will retry after 387.406107ms: waiting for machine to come up
	I0328 01:02:15.603078 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:15.603464 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:15.603497 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:15.603431 1131950 retry.go:31] will retry after 466.553599ms: waiting for machine to come up
	I0328 01:02:16.072165 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:16.072702 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:16.072732 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:16.072643 1131950 retry.go:31] will retry after 375.428381ms: waiting for machine to come up
	I0328 01:02:16.449155 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:16.449614 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:16.449652 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:16.449553 1131950 retry.go:31] will retry after 466.238903ms: waiting for machine to come up
	I0328 01:02:16.917246 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:16.917697 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:16.917723 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:16.917633 1131950 retry.go:31] will retry after 772.819544ms: waiting for machine to come up
	I0328 01:02:17.691645 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:17.692121 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:17.692151 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:17.692071 1131950 retry.go:31] will retry after 1.19065976s: waiting for machine to come up
	I0328 01:02:18.704949 1130827 start.go:360] acquireMachinesLock for no-preload-248059: {Name:mk85e225431128bbd27ac7bb3815095957281902 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 01:02:18.884525 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:18.885019 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:18.885044 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:18.884980 1131950 retry.go:31] will retry after 1.434726863s: waiting for machine to come up
	I0328 01:02:20.321473 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:20.322009 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:20.322035 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:20.321951 1131950 retry.go:31] will retry after 1.275277555s: waiting for machine to come up
	I0328 01:02:21.599454 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:21.600049 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:21.600074 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:21.599982 1131950 retry.go:31] will retry after 1.852516502s: waiting for machine to come up
	I0328 01:02:23.455282 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:23.455760 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:23.455830 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:23.455746 1131950 retry.go:31] will retry after 2.056736141s: waiting for machine to come up
	I0328 01:02:25.514112 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:25.514538 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:25.514569 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:25.514492 1131950 retry.go:31] will retry after 2.711520437s: waiting for machine to come up
	I0328 01:02:32.751719 1131323 start.go:364] duration metric: took 3m27.302408957s to acquireMachinesLock for "old-k8s-version-986088"
	I0328 01:02:32.751823 1131323 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:02:32.751833 1131323 fix.go:54] fixHost starting: 
	I0328 01:02:32.752289 1131323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:02:32.752326 1131323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:02:32.770119 1131323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45807
	I0328 01:02:32.770723 1131323 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:02:32.771352 1131323 main.go:141] libmachine: Using API Version  1
	I0328 01:02:32.771380 1131323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:02:32.771790 1131323 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:02:32.772020 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:32.772206 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetState
	I0328 01:02:32.773947 1131323 fix.go:112] recreateIfNeeded on old-k8s-version-986088: state=Stopped err=<nil>
	I0328 01:02:32.773980 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	W0328 01:02:32.774166 1131323 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:02:32.776416 1131323 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-986088" ...
	I0328 01:02:28.229576 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:28.229970 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | unable to find current IP address of domain embed-certs-808809 in network mk-embed-certs-808809
	I0328 01:02:28.230000 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | I0328 01:02:28.229920 1131950 retry.go:31] will retry after 3.231405371s: waiting for machine to come up
	I0328 01:02:31.463477 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.463884 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has current primary IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.463902 1130949 main.go:141] libmachine: (embed-certs-808809) Found IP for machine: 192.168.72.210
	I0328 01:02:31.463915 1130949 main.go:141] libmachine: (embed-certs-808809) Reserving static IP address...
	I0328 01:02:31.464394 1130949 main.go:141] libmachine: (embed-certs-808809) Reserved static IP address: 192.168.72.210
	I0328 01:02:31.464413 1130949 main.go:141] libmachine: (embed-certs-808809) Waiting for SSH to be available...
	I0328 01:02:31.464439 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "embed-certs-808809", mac: "52:54:00:60:d4:d2", ip: "192.168.72.210"} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.464464 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | skip adding static IP to network mk-embed-certs-808809 - found existing host DHCP lease matching {name: "embed-certs-808809", mac: "52:54:00:60:d4:d2", ip: "192.168.72.210"}
	I0328 01:02:31.464480 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Getting to WaitForSSH function...
	I0328 01:02:31.466488 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.466876 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.466916 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.467054 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Using SSH client type: external
	I0328 01:02:31.467085 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa (-rw-------)
	I0328 01:02:31.467124 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:02:31.467138 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | About to run SSH command:
	I0328 01:02:31.467155 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | exit 0
	I0328 01:02:31.590708 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | SSH cmd err, output: <nil>: 
	I0328 01:02:31.591111 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetConfigRaw
	I0328 01:02:31.591959 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:31.594592 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.595075 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.595114 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.595364 1130949 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/config.json ...
	I0328 01:02:31.595634 1130949 machine.go:94] provisionDockerMachine start ...
	I0328 01:02:31.595656 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:31.595901 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.598184 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.598529 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.598556 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.598681 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:31.598851 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.599012 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.599163 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:31.599333 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:31.599604 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:31.599619 1130949 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:02:31.703241 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:02:31.703272 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetMachineName
	I0328 01:02:31.703575 1130949 buildroot.go:166] provisioning hostname "embed-certs-808809"
	I0328 01:02:31.703602 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetMachineName
	I0328 01:02:31.703779 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.706495 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.706777 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.706799 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.706978 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:31.707146 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.707334 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.707580 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:31.707765 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:31.707985 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:31.708004 1130949 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-808809 && echo "embed-certs-808809" | sudo tee /etc/hostname
	I0328 01:02:31.821578 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-808809
	
	I0328 01:02:31.821608 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.824412 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.824791 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.824825 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.825030 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:31.825253 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.825432 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:31.825589 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:31.825758 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:31.825950 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:31.825976 1130949 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-808809' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-808809/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-808809' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:02:31.937655 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:02:31.937701 1130949 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:02:31.937728 1130949 buildroot.go:174] setting up certificates
	I0328 01:02:31.937742 1130949 provision.go:84] configureAuth start
	I0328 01:02:31.937754 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetMachineName
	I0328 01:02:31.938093 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:31.940874 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.941328 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.941360 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.941580 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:31.944250 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.944580 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:31.944610 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:31.944828 1130949 provision.go:143] copyHostCerts
	I0328 01:02:31.944910 1130949 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:02:31.944926 1130949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:02:31.945006 1130949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:02:31.945151 1130949 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:02:31.945162 1130949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:02:31.945205 1130949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:02:31.945285 1130949 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:02:31.945294 1130949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:02:31.945330 1130949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:02:31.945400 1130949 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.embed-certs-808809 san=[127.0.0.1 192.168.72.210 embed-certs-808809 localhost minikube]
	I0328 01:02:32.070925 1130949 provision.go:177] copyRemoteCerts
	I0328 01:02:32.071007 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:02:32.071067 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.073876 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.074295 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.074339 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.074541 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.074754 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.074931 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.075091 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.158945 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:02:32.184903 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0328 01:02:32.210411 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:02:32.235788 1130949 provision.go:87] duration metric: took 298.03126ms to configureAuth
	I0328 01:02:32.235827 1130949 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:02:32.236116 1130949 config.go:182] Loaded profile config "embed-certs-808809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:02:32.236336 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.239186 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.239520 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.239555 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.239782 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.240036 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.240257 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.240431 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.240633 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:32.240836 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:32.240862 1130949 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:02:32.513263 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:02:32.513298 1130949 machine.go:97] duration metric: took 917.647337ms to provisionDockerMachine
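For reference, the SSH command above writes a one-line sysconfig drop-in and restarts CRI-O; the "%!s(MISSING)" is a fmt-verb artifact in the logged command template, while the actual file content is visible in the command output two lines earlier. A minimal local sketch of the file being created (path and contents taken from the log; written to the working directory so the sketch runs without root):

// criodropin_sketch.go - the drop-in the remote command above produces.
package main

import "os"

func main() {
	const dropIn = "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	// Real target is /etc/sysconfig/crio.minikube, streamed via `sudo tee` over SSH.
	if err := os.WriteFile("crio.minikube", []byte(dropIn), 0o644); err != nil {
		panic(err)
	}
}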
	I0328 01:02:32.513314 1130949 start.go:293] postStartSetup for "embed-certs-808809" (driver="kvm2")
	I0328 01:02:32.513326 1130949 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:02:32.513365 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.513727 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:02:32.513770 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.516906 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.517382 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.517425 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.517603 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.517831 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.517989 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.518115 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.600013 1130949 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:02:32.604953 1130949 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:02:32.604983 1130949 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:02:32.605057 1130949 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:02:32.605148 1130949 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:02:32.605265 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:02:32.617685 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:02:32.646415 1130949 start.go:296] duration metric: took 133.084551ms for postStartSetup
	I0328 01:02:32.646462 1130949 fix.go:56] duration metric: took 18.943034019s for fixHost
	I0328 01:02:32.646490 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.649346 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.649686 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.649717 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.649864 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.650191 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.650444 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.650637 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.650844 1130949 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:32.651036 1130949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0328 01:02:32.651069 1130949 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 01:02:32.751522 1130949 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587752.718800758
	
	I0328 01:02:32.751547 1130949 fix.go:216] guest clock: 1711587752.718800758
	I0328 01:02:32.751556 1130949 fix.go:229] Guest: 2024-03-28 01:02:32.718800758 +0000 UTC Remote: 2024-03-28 01:02:32.646466137 +0000 UTC m=+284.780134501 (delta=72.334621ms)
	I0328 01:02:32.751598 1130949 fix.go:200] guest clock delta is within tolerance: 72.334621ms
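The tolerance check above compares the guest clock (read via date over SSH) with the host-side timestamp and accepts the skew. A small sketch reproducing the delta from the two values in the log (the tolerance constant is an assumption; the log only shows that ~72ms was accepted):

// clockdelta_sketch.go - guest/host clock comparison using the values logged above.
package main

import (
	"fmt"
	"math"
	"time"
)

func main() {
	guest := time.Unix(1711587752, 718800758) // parsed from "1711587752.718800758"
	remote := time.Date(2024, 3, 28, 1, 2, 32, 646466137, time.UTC)
	delta := guest.Sub(remote) // ~72.33ms, matching the log
	const tolerance = 2 * time.Second // assumed threshold
	fmt.Printf("delta=%v within=%v\n", delta, math.Abs(float64(delta)) <= float64(tolerance))
}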
	I0328 01:02:32.751610 1130949 start.go:83] releasing machines lock for "embed-certs-808809", held for 19.048217918s
	I0328 01:02:32.751638 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.751953 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:32.754795 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.755205 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.755240 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.755454 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.756111 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.756320 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:02:32.756412 1130949 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:02:32.756475 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.756612 1130949 ssh_runner.go:195] Run: cat /version.json
	I0328 01:02:32.756646 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:02:32.759337 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.759468 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.759788 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.759808 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.759845 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:32.759866 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:32.760009 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.760018 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:02:32.760214 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.760222 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:02:32.760364 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.760532 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:02:32.760639 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.760698 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:02:32.840137 1130949 ssh_runner.go:195] Run: systemctl --version
	I0328 01:02:32.874039 1130949 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:02:33.020534 1130949 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:02:33.027141 1130949 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:02:33.027213 1130949 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:02:33.043738 1130949 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:02:33.043767 1130949 start.go:494] detecting cgroup driver to use...
	I0328 01:02:33.043840 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:02:33.064332 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:02:33.081926 1130949 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:02:33.082016 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:02:33.097179 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:02:33.113157 1130949 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:02:33.233183 1130949 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:02:33.374061 1130949 docker.go:233] disabling docker service ...
	I0328 01:02:33.374145 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:02:33.389813 1130949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:02:33.403439 1130949 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:02:33.546146 1130949 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:02:33.706968 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:02:33.722279 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:02:33.742578 1130949 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 01:02:33.742652 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.754966 1130949 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:02:33.755027 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.767170 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.779960 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.792448 1130949 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:02:33.804912 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.818038 1130949 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:33.838794 1130949 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
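The sed edits above leave /etc/crio/crio.conf.d/02-crio.conf overriding the pause image, the cgroup manager, the conmon cgroup and the unprivileged-port sysctl. A sketch of the resulting fragment, reconstructed from those commands (section placement follows CRI-O's documented config layout and is an assumption; the rest of the drop-in is untouched and not shown):

// crioconf_sketch.go - reconstructed target state of the 02-crio.conf drop-in.
package main

import "fmt"

const crioOverrides = `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() { fmt.Print(crioOverrides) }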
	I0328 01:02:33.852157 1130949 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:02:33.862921 1130949 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:02:33.862981 1130949 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:02:33.880973 1130949 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:02:33.892698 1130949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:02:34.029903 1130949 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 01:02:34.170977 1130949 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:02:34.171074 1130949 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:02:34.176652 1130949 start.go:562] Will wait 60s for crictl version
	I0328 01:02:34.176736 1130949 ssh_runner.go:195] Run: which crictl
	I0328 01:02:34.180993 1130949 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:02:34.224564 1130949 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:02:34.224675 1130949 ssh_runner.go:195] Run: crio --version
	I0328 01:02:34.254457 1130949 ssh_runner.go:195] Run: crio --version
	I0328 01:02:34.287281 1130949 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0328 01:02:32.778280 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .Start
	I0328 01:02:32.778470 1131323 main.go:141] libmachine: (old-k8s-version-986088) Ensuring networks are active...
	I0328 01:02:32.779179 1131323 main.go:141] libmachine: (old-k8s-version-986088) Ensuring network default is active
	I0328 01:02:32.779577 1131323 main.go:141] libmachine: (old-k8s-version-986088) Ensuring network mk-old-k8s-version-986088 is active
	I0328 01:02:32.779982 1131323 main.go:141] libmachine: (old-k8s-version-986088) Getting domain xml...
	I0328 01:02:32.780732 1131323 main.go:141] libmachine: (old-k8s-version-986088) Creating domain...
	I0328 01:02:34.066287 1131323 main.go:141] libmachine: (old-k8s-version-986088) Waiting to get IP...
	I0328 01:02:34.067193 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.067618 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.067684 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.067586 1132067 retry.go:31] will retry after 291.270379ms: waiting for machine to come up
	I0328 01:02:34.360203 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.360690 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.360721 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.360638 1132067 retry.go:31] will retry after 234.968456ms: waiting for machine to come up
	I0328 01:02:34.597291 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.597818 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.597849 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.597750 1132067 retry.go:31] will retry after 382.522593ms: waiting for machine to come up
	I0328 01:02:34.982502 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:34.983176 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:34.983205 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:34.983133 1132067 retry.go:31] will retry after 436.332635ms: waiting for machine to come up
	I0328 01:02:34.288748 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetIP
	I0328 01:02:34.292122 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:34.292516 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:02:34.292556 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:02:34.292869 1130949 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0328 01:02:34.298738 1130949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:02:34.313529 1130949 kubeadm.go:877] updating cluster {Name:embed-certs-808809 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-808809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:02:34.313698 1130949 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 01:02:34.313762 1130949 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:02:34.356518 1130949 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0328 01:02:34.356614 1130949 ssh_runner.go:195] Run: which lz4
	I0328 01:02:34.361492 1130949 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0328 01:02:34.366053 1130949 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 01:02:34.366090 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0328 01:02:36.024197 1130949 crio.go:462] duration metric: took 1.662731937s to copy over tarball
	I0328 01:02:36.024287 1130949 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 01:02:35.421623 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:35.422164 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:35.422198 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:35.422135 1132067 retry.go:31] will retry after 700.861268ms: waiting for machine to come up
	I0328 01:02:36.124589 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:36.125001 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:36.125031 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:36.124948 1132067 retry.go:31] will retry after 932.342478ms: waiting for machine to come up
	I0328 01:02:37.058954 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:37.059390 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:37.059424 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:37.059332 1132067 retry.go:31] will retry after 1.163248691s: waiting for machine to come up
	I0328 01:02:38.224574 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:38.225019 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:38.225053 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:38.224959 1132067 retry.go:31] will retry after 1.13372539s: waiting for machine to come up
	I0328 01:02:39.360393 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:39.360953 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:39.360984 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:39.360906 1132067 retry.go:31] will retry after 1.793272671s: waiting for machine to come up
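The interleaved old-k8s-version-986088 lines show the usual libmachine pattern: poll the libvirt DHCP leases for the domain's MAC address and back off with growing, jittered waits until an IP appears. A generic sketch of that loop (the multiplier, cap and stubbed lookup are assumptions; only the overall shape matches the log timings):

// retrybackoff_sketch.go - "will retry after ..." loop, illustrative only.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for parsing the libvirt DHCP leases for the domain's MAC.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.72.210", nil
}

func main() {
	wait := 200 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("got IP:", ip)
			return
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait))) // jitter
		fmt.Printf("retry %d: will retry after %v: %v\n", attempt, sleep, err)
		time.Sleep(sleep)
		if wait < 3*time.Second {
			wait = wait * 3 / 2 // grow the base wait, roughly like the log
		}
	}
}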
	I0328 01:02:38.420741 1130949 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.396415089s)
	I0328 01:02:38.420788 1130949 crio.go:469] duration metric: took 2.39655808s to extract the tarball
	I0328 01:02:38.420797 1130949 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0328 01:02:38.459869 1130949 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:02:38.505999 1130949 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 01:02:38.506030 1130949 cache_images.go:84] Images are preloaded, skipping loading
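The preload logic above runs `crictl images --output json`, decides the expected images are missing, copies the lz4 preload tarball over SSH, extracts it under /var and re-checks. A sketch of the image-presence check and the decision it drives (JSON field names follow crictl's output format; error handling is trimmed and this is not minikube's cache_images code):

// preloadcheck_sketch.go - only extract the preload tarball when the expected image is absent.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(want string) bool {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false
	}
	var imgs crictlImages
	if json.Unmarshal(out, &imgs) != nil {
		return false
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				return true
			}
		}
	}
	return false
}

func main() {
	if hasImage("registry.k8s.io/kube-apiserver:v1.29.3") {
		fmt.Println("all images are preloaded, skipping loading")
		return
	}
	// scp of preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 omitted;
	// extraction command matches the one in the log.
	fmt.Println("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4")
}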
	I0328 01:02:38.506039 1130949 kubeadm.go:928] updating node { 192.168.72.210 8443 v1.29.3 crio true true} ...
	I0328 01:02:38.506185 1130949 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-808809 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-808809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:02:38.506301 1130949 ssh_runner.go:195] Run: crio config
	I0328 01:02:38.551608 1130949 cni.go:84] Creating CNI manager for ""
	I0328 01:02:38.551633 1130949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:02:38.551646 1130949 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:02:38.551673 1130949 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.210 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-808809 NodeName:embed-certs-808809 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 01:02:38.551813 1130949 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-808809"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 01:02:38.551881 1130949 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 01:02:38.562640 1130949 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:02:38.562732 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:02:38.572870 1130949 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0328 01:02:38.590866 1130949 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:02:38.608302 1130949 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0328 01:02:38.626925 1130949 ssh_runner.go:195] Run: grep 192.168.72.210	control-plane.minikube.internal$ /etc/hosts
	I0328 01:02:38.631111 1130949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.210	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:02:38.644528 1130949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:02:38.785485 1130949 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:02:38.804087 1130949 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809 for IP: 192.168.72.210
	I0328 01:02:38.804113 1130949 certs.go:194] generating shared ca certs ...
	I0328 01:02:38.804132 1130949 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:02:38.804285 1130949 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:02:38.804326 1130949 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:02:38.804363 1130949 certs.go:256] generating profile certs ...
	I0328 01:02:38.804505 1130949 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/client.key
	I0328 01:02:38.804588 1130949 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/apiserver.key.bdc16448
	I0328 01:02:38.804638 1130949 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/proxy-client.key
	I0328 01:02:38.804798 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:02:38.804829 1130949 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:02:38.804836 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:02:38.804860 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:02:38.804882 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:02:38.804902 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:02:38.804943 1130949 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:02:38.805829 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:02:38.864847 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:02:38.899197 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:02:38.926734 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:02:38.958277 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0328 01:02:38.997201 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:02:39.023136 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:02:39.048459 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/embed-certs-808809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:02:39.074052 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:02:39.099326 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:02:39.124775 1130949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:02:39.149638 1130949 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:02:39.169169 1130949 ssh_runner.go:195] Run: openssl version
	I0328 01:02:39.175948 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:02:39.188255 1130949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:02:39.194296 1130949 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:02:39.194374 1130949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:02:39.201138 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:02:39.213554 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:02:39.226474 1130949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:02:39.232074 1130949 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:02:39.232149 1130949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:02:39.238733 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:02:39.250983 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:02:39.263746 1130949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:02:39.268967 1130949 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:02:39.269038 1130949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:02:39.275589 1130949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
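The hash-and-link steps above install each CA under /usr/share/ca-certificates and create /etc/ssl/certs/<subject-hash>.0 symlinks (e.g. b5213941.0 for minikubeCA.pem) so TLS libraries can locate trusted CAs by hashed lookup. A sketch of one such step (paths taken from the log):

// cahashlink_sketch.go - compute the OpenSSL subject hash and create the <hash>.0 symlink.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/etc/ssl/certs/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // mimic `ln -fs`: replace any existing link
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println(link, "->", cert)
}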
	I0328 01:02:39.287731 1130949 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:02:39.292985 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:02:39.300366 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:02:39.307241 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:02:39.314522 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:02:39.321070 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:02:39.327777 1130949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
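Each `openssl x509 -noout -checkend 86400` call above asserts that the certificate stays valid for at least another 24 hours. An equivalent check with crypto/x509 (the path is one of those from the log):

// certexpiry_sketch.go - the NotAfter check behind `-checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	ok := time.Now().Add(24 * time.Hour).Before(cert.NotAfter)
	fmt.Println("valid for the next 24h:", ok)
}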
	I0328 01:02:39.334174 1130949 kubeadm.go:391] StartCluster: {Name:embed-certs-808809 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.3 ClusterName:embed-certs-808809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:02:39.334310 1130949 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:02:39.334367 1130949 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:02:39.376035 1130949 cri.go:89] found id: ""
	I0328 01:02:39.376145 1130949 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:02:39.387349 1130949 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:02:39.387377 1130949 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:02:39.387385 1130949 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:02:39.387469 1130949 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:02:39.397918 1130949 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:02:39.399122 1130949 kubeconfig.go:125] found "embed-certs-808809" server: "https://192.168.72.210:8443"
	I0328 01:02:39.401219 1130949 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:02:39.411475 1130949 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.210
	I0328 01:02:39.411562 1130949 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:02:39.411583 1130949 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:02:39.411650 1130949 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:02:39.449529 1130949 cri.go:89] found id: ""
	I0328 01:02:39.449638 1130949 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:02:39.468553 1130949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:02:39.479489 1130949 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:02:39.479522 1130949 kubeadm.go:156] found existing configuration files:
	
	I0328 01:02:39.479589 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:02:39.489619 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:02:39.489689 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:02:39.499726 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:02:39.509362 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:02:39.509447 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:02:39.519262 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:02:39.528858 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:02:39.528920 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:02:39.538784 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:02:39.548517 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:02:39.548593 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
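The grep/rm sequence above removes any /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443, so the following `kubeadm init phase kubeconfig` run regenerates them; here all four greps fail simply because the files do not exist yet after the restart. A compact sketch of that cleanup loop (plain os calls instead of sudo-over-SSH; not the actual kubeadm.go code):

// staleconf_sketch.go - drop kubeconfig files that don't point at the control-plane endpoint.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%s may not reference %s - removing\n", f, endpoint)
			os.Remove(f) // ignore "not found", same effect as `rm -f`
		}
	}
}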
	I0328 01:02:39.559931 1130949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:02:39.574178 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:39.706243 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.342144 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.559108 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.636713 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:40.743171 1130949 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:02:40.743269 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:02:41.243401 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:02:41.743363 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:02:41.776504 1130949 api_server.go:72] duration metric: took 1.033329844s to wait for apiserver process to appear ...
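The wait above polls `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms until the apiserver process appears (the three Run lines are half a second apart). A sketch of that loop (interval taken from the timestamps; the structure is illustrative, not api_server.go itself):

// apiserverwait_sketch.go - wait for the kube-apiserver process to appear.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	for {
		// -f matches against the full command line, -x requires a whole-line match, -n picks the newest.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Printf("apiserver process appeared after %v\n", time.Since(start))
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}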
	I0328 01:02:41.776547 1130949 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:02:41.776574 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:41.777140 1130949 api_server.go:269] stopped: https://192.168.72.210:8443/healthz: Get "https://192.168.72.210:8443/healthz": dial tcp 192.168.72.210:8443: connect: connection refused
	I0328 01:02:42.276690 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:41.156898 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:41.157309 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:41.157336 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:41.157263 1132067 retry.go:31] will retry after 1.863775673s: waiting for machine to come up
	I0328 01:02:43.023074 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:43.023470 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:43.023507 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:43.023419 1132067 retry.go:31] will retry after 2.73600503s: waiting for machine to come up
	I0328 01:02:44.743286 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:02:44.743383 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:02:44.743412 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:44.822370 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:02:44.822416 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:02:44.822436 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:44.847406 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:02:44.847462 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:02:45.276899 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:45.281884 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:02:45.281919 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:02:45.777495 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:45.783673 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:02:45.783704 1130949 api_server.go:103] status: https://192.168.72.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:02:46.277372 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:02:46.282281 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 200:
	ok
	I0328 01:02:46.291242 1130949 api_server.go:141] control plane version: v1.29.3
	I0328 01:02:46.291287 1130949 api_server.go:131] duration metric: took 4.514730698s to wait for apiserver health ...
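The api_server.go lines above are a retry loop: 403 ("system:anonymous" before RBAC bootstrap) and 500 (post-start hooks still failing) responses from /healthz are treated as "not ready yet", and the wait ends only when a plain 200/ok comes back. A minimal sketch of that polling pattern, assuming a self-signed apiserver certificate (so TLS verification is skipped) and illustrative endpoint/timeout values rather than minikube's actual api_server.go code:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 "ok" or the deadline passes.
// 403/500 responses (as seen in the log above) are treated as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serves a self-signed cert during bring-up; skipping
		// verification here is purely for illustration.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: control plane is up
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	// Endpoint taken from the log above; timeout value is illustrative.
	if err := waitForHealthz("https://192.168.72.210:8443/healthz", 4*time.Minute); err != nil {
		panic(err)
	}
}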
	I0328 01:02:46.291301 1130949 cni.go:84] Creating CNI manager for ""
	I0328 01:02:46.291310 1130949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:02:46.293461 1130949 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:02:46.294971 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:02:46.312955 1130949 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
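The two ssh_runner lines above create /etc/cni/net.d and copy a 457-byte bridge conflist into it as /etc/cni/net.d/1-k8s.conflist. As a rough illustration of what a bridge CNI configuration of that shape looks like (field values below are illustrative, not the exact file minikube writes), a Go snippet that emits one:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Illustrative bridge CNI conflist; minikube's real 1-k8s.conflist may
	// differ in plugin names, options and subnet.
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out)) // content of the kind copied to /etc/cni/net.d/1-k8s.conflist
}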
	I0328 01:02:46.345653 1130949 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:02:46.355470 1130949 system_pods.go:59] 8 kube-system pods found
	I0328 01:02:46.355506 1130949 system_pods.go:61] "coredns-76f75df574-pr5d8" [90a6f3d5-6f33-4c41-804b-4b20c518aa23] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:02:46.355512 1130949 system_pods.go:61] "etcd-embed-certs-808809" [93b6b8ee-f83f-4848-b2c5-912ec07acd52] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 01:02:46.355519 1130949 system_pods.go:61] "kube-apiserver-embed-certs-808809" [22eb788f-4647-4a07-b5bf-ecdd54c28fcd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 01:02:46.355530 1130949 system_pods.go:61] "kube-controller-manager-embed-certs-808809" [83fecd9f-c0de-4afe-b5b5-7c04bd3adc20] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 01:02:46.355545 1130949 system_pods.go:61] "kube-proxy-qwzpg" [57a814c6-54c8-4fa7-b7d7-bcdd4bbc91d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:02:46.355553 1130949 system_pods.go:61] "kube-scheduler-embed-certs-808809" [0b229d84-43fb-45ee-8d49-39204812d490] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 01:02:46.355568 1130949 system_pods.go:61] "metrics-server-57f55c9bc5-swsxp" [4b20e133-3054-4806-9b7f-44d8c8c35a4d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:02:46.355580 1130949 system_pods.go:61] "storage-provisioner" [59303061-19e3-4aed-8753-804988a2a44e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:02:46.355590 1130949 system_pods.go:74] duration metric: took 9.908316ms to wait for pod list to return data ...
	I0328 01:02:46.355603 1130949 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:02:46.358936 1130949 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:02:46.358987 1130949 node_conditions.go:123] node cpu capacity is 2
	I0328 01:02:46.359006 1130949 node_conditions.go:105] duration metric: took 3.394695ms to run NodePressure ...
	I0328 01:02:46.359054 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:02:46.686479 1130949 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 01:02:46.692502 1130949 kubeadm.go:733] kubelet initialised
	I0328 01:02:46.692526 1130949 kubeadm.go:734] duration metric: took 6.022393ms waiting for restarted kubelet to initialise ...
	I0328 01:02:46.692534 1130949 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:02:46.699146 1130949 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-pr5d8" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:45.762440 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:45.762891 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:45.762915 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:45.762845 1132067 retry.go:31] will retry after 2.201941476s: waiting for machine to come up
	I0328 01:02:47.966601 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:47.967196 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | unable to find current IP address of domain old-k8s-version-986088 in network mk-old-k8s-version-986088
	I0328 01:02:47.967237 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | I0328 01:02:47.967144 1132067 retry.go:31] will retry after 4.122216816s: waiting for machine to come up
	I0328 01:02:48.709890 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:51.207697 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:53.391471 1131600 start.go:364] duration metric: took 2m47.603687739s to acquireMachinesLock for "default-k8s-diff-port-283961"
	I0328 01:02:53.391553 1131600 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:02:53.391565 1131600 fix.go:54] fixHost starting: 
	I0328 01:02:53.391980 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:02:53.392031 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:02:53.409035 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34073
	I0328 01:02:53.409556 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:02:53.410105 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:02:53.410136 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:02:53.410492 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:02:53.410734 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:02:53.410903 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:02:53.412710 1131600 fix.go:112] recreateIfNeeded on default-k8s-diff-port-283961: state=Stopped err=<nil>
	I0328 01:02:53.412739 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	W0328 01:02:53.412927 1131600 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:02:53.414773 1131600 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-283961" ...
	I0328 01:02:52.091210 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.091759 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has current primary IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.091794 1131323 main.go:141] libmachine: (old-k8s-version-986088) Found IP for machine: 192.168.50.174
	I0328 01:02:52.091841 1131323 main.go:141] libmachine: (old-k8s-version-986088) Reserving static IP address...
	I0328 01:02:52.092295 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "old-k8s-version-986088", mac: "52:54:00:f6:94:40", ip: "192.168.50.174"} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.092321 1131323 main.go:141] libmachine: (old-k8s-version-986088) Reserved static IP address: 192.168.50.174
	I0328 01:02:52.092343 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | skip adding static IP to network mk-old-k8s-version-986088 - found existing host DHCP lease matching {name: "old-k8s-version-986088", mac: "52:54:00:f6:94:40", ip: "192.168.50.174"}
	I0328 01:02:52.092356 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | Getting to WaitForSSH function...
	I0328 01:02:52.092373 1131323 main.go:141] libmachine: (old-k8s-version-986088) Waiting for SSH to be available...
	I0328 01:02:52.094682 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.095012 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.095033 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.095158 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | Using SSH client type: external
	I0328 01:02:52.095180 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa (-rw-------)
	I0328 01:02:52.095208 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:02:52.095218 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | About to run SSH command:
	I0328 01:02:52.095232 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | exit 0
	I0328 01:02:52.218494 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | SSH cmd err, output: <nil>: 
	I0328 01:02:52.218983 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetConfigRaw
	I0328 01:02:52.219663 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:52.222349 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.222791 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.222823 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.223191 1131323 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/config.json ...
	I0328 01:02:52.223388 1131323 machine.go:94] provisionDockerMachine start ...
	I0328 01:02:52.223409 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:52.223605 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.225686 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.225999 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.226038 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.226131 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.226341 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.226507 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.226633 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.226802 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.227078 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.227095 1131323 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:02:52.327218 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:02:52.327249 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 01:02:52.327515 1131323 buildroot.go:166] provisioning hostname "old-k8s-version-986088"
	I0328 01:02:52.327542 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 01:02:52.327754 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.330253 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.330661 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.330691 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.330827 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.331048 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.331258 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.331406 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.331593 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.331772 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.331783 1131323 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-986088 && echo "old-k8s-version-986088" | sudo tee /etc/hostname
	I0328 01:02:52.445910 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-986088
	
	I0328 01:02:52.445943 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.449023 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.449320 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.449358 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.449595 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.449810 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.449970 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.450116 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.450310 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.450572 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.450640 1131323 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-986088' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-986088/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-986088' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:02:52.567493 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:02:52.567529 1131323 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:02:52.567559 1131323 buildroot.go:174] setting up certificates
	I0328 01:02:52.567573 1131323 provision.go:84] configureAuth start
	I0328 01:02:52.567587 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetMachineName
	I0328 01:02:52.567944 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:52.570860 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.571363 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.571400 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.571547 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.574052 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.574483 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.574517 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.574619 1131323 provision.go:143] copyHostCerts
	I0328 01:02:52.574698 1131323 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:02:52.574710 1131323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:02:52.574778 1131323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:02:52.574894 1131323 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:02:52.574908 1131323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:02:52.574985 1131323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:02:52.575086 1131323 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:02:52.575095 1131323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:02:52.575117 1131323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:02:52.575194 1131323 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-986088 san=[127.0.0.1 192.168.50.174 localhost minikube old-k8s-version-986088]
	I0328 01:02:52.688709 1131323 provision.go:177] copyRemoteCerts
	I0328 01:02:52.688776 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:02:52.688809 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.691529 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.691977 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.692024 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.692188 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.692425 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.692620 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.692774 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:52.777200 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0328 01:02:52.808740 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:02:52.836646 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:02:52.862627 1131323 provision.go:87] duration metric: took 295.032419ms to configureAuth
	I0328 01:02:52.862668 1131323 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:02:52.862908 1131323 config.go:182] Loaded profile config "old-k8s-version-986088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0328 01:02:52.863019 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:52.865838 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.866585 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:52.866630 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:52.867271 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:52.867521 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.867687 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:52.867826 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:52.867961 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:52.868176 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:52.868194 1131323 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:02:53.154903 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:02:53.154936 1131323 machine.go:97] duration metric: took 931.534047ms to provisionDockerMachine
	I0328 01:02:53.154949 1131323 start.go:293] postStartSetup for "old-k8s-version-986088" (driver="kvm2")
	I0328 01:02:53.154961 1131323 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:02:53.154997 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.155353 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:02:53.155386 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.158072 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.158448 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.158482 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.158612 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.158825 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.158974 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.159102 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:53.243411 1131323 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:02:53.247745 1131323 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:02:53.247769 1131323 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:02:53.247830 1131323 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:02:53.247903 1131323 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:02:53.247990 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:02:53.258574 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:02:53.284249 1131323 start.go:296] duration metric: took 129.2844ms for postStartSetup
	I0328 01:02:53.284300 1131323 fix.go:56] duration metric: took 20.532468979s for fixHost
	I0328 01:02:53.284324 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.287097 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.287505 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.287534 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.287642 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.287874 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.288039 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.288225 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.288439 1131323 main.go:141] libmachine: Using SSH client type: native
	I0328 01:02:53.288601 1131323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.174 22 <nil> <nil>}
	I0328 01:02:53.288612 1131323 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 01:02:53.391262 1131323 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587773.373998758
	
	I0328 01:02:53.391292 1131323 fix.go:216] guest clock: 1711587773.373998758
	I0328 01:02:53.391299 1131323 fix.go:229] Guest: 2024-03-28 01:02:53.373998758 +0000 UTC Remote: 2024-03-28 01:02:53.284304642 +0000 UTC m=+227.998260980 (delta=89.694116ms)
	I0328 01:02:53.391341 1131323 fix.go:200] guest clock delta is within tolerance: 89.694116ms
	I0328 01:02:53.391346 1131323 start.go:83] releasing machines lock for "old-k8s-version-986088", held for 20.639550927s
	I0328 01:02:53.391377 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.391728 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:53.394421 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.394780 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.394811 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.394932 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.395449 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.395729 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .DriverName
	I0328 01:02:53.395828 1131323 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:02:53.395883 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.395985 1131323 ssh_runner.go:195] Run: cat /version.json
	I0328 01:02:53.396014 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHHostname
	I0328 01:02:53.398819 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399010 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399281 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.399320 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399451 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.399550 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:53.399620 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:53.399640 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.399880 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.399902 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHPort
	I0328 01:02:53.400065 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHKeyPath
	I0328 01:02:53.400081 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:53.400245 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetSSHUsername
	I0328 01:02:53.400445 1131323 sshutil.go:53] new ssh client: &{IP:192.168.50.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/old-k8s-version-986088/id_rsa Username:docker}
	I0328 01:02:53.514453 1131323 ssh_runner.go:195] Run: systemctl --version
	I0328 01:02:53.521123 1131323 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:02:53.678366 1131323 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:02:53.685402 1131323 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:02:53.685473 1131323 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:02:53.702781 1131323 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:02:53.702816 1131323 start.go:494] detecting cgroup driver to use...
	I0328 01:02:53.702900 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:02:53.720343 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:02:53.736749 1131323 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:02:53.736824 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:02:53.761087 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:02:53.779008 1131323 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:02:53.895064 1131323 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:02:54.060741 1131323 docker.go:233] disabling docker service ...
	I0328 01:02:54.060825 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:02:54.079139 1131323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:02:54.093523 1131323 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:02:54.247544 1131323 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:02:54.396392 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:02:54.422612 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:02:54.443759 1131323 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0328 01:02:54.443817 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.459794 1131323 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:02:54.459875 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.472784 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:02:54.484963 1131323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
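The sed invocations above pin the pause image and switch CRI-O to the cgroupfs cgroup manager (with conmon in the "pod" cgroup) inside the drop-in /etc/crio/crio.conf.d/02-crio.conf. A small sketch of the same edit done in Go; the path and regex patterns mirror the log, but this is an illustration, not minikube's crio.go implementation:

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Pin the pause image, as in: sed 's|^.*pause_image = .*$|pause_image = "..."|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	// Drop any existing conmon_cgroup line first (sed '/conmon_cgroup = .*/d' in the log).
	data = regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*\n`).ReplaceAll(data, nil)
	// Use cgroupfs and run conmon in the pod cgroup, matching the log's edits.
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
}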
	I0328 01:02:54.496654 1131323 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:02:54.508382 1131323 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:02:54.518607 1131323 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:02:54.518687 1131323 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:02:54.532356 1131323 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:02:54.544424 1131323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:02:54.685782 1131323 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 01:02:54.847233 1131323 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:02:54.847314 1131323 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:02:54.853148 1131323 start.go:562] Will wait 60s for crictl version
	I0328 01:02:54.853248 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:02:54.857536 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:02:54.901937 1131323 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:02:54.902082 1131323 ssh_runner.go:195] Run: crio --version
	I0328 01:02:54.935571 1131323 ssh_runner.go:195] Run: crio --version
	I0328 01:02:54.971452 1131323 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0328 01:02:54.972964 1131323 main.go:141] libmachine: (old-k8s-version-986088) Calling .GetIP
	I0328 01:02:54.976523 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:54.976985 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:94:40", ip: ""} in network mk-old-k8s-version-986088: {Iface:virbr2 ExpiryTime:2024-03-28 02:02:44 +0000 UTC Type:0 Mac:52:54:00:f6:94:40 Iaid: IPaddr:192.168.50.174 Prefix:24 Hostname:old-k8s-version-986088 Clientid:01:52:54:00:f6:94:40}
	I0328 01:02:54.977017 1131323 main.go:141] libmachine: (old-k8s-version-986088) DBG | domain old-k8s-version-986088 has defined IP address 192.168.50.174 and MAC address 52:54:00:f6:94:40 in network mk-old-k8s-version-986088
	I0328 01:02:54.977369 1131323 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0328 01:02:54.982326 1131323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:02:54.996239 1131323 kubeadm.go:877] updating cluster {Name:old-k8s-version-986088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:02:54.996371 1131323 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0328 01:02:54.996433 1131323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:02:55.045404 1131323 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0328 01:02:55.045483 1131323 ssh_runner.go:195] Run: which lz4
	I0328 01:02:55.050226 1131323 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0328 01:02:55.055182 1131323 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 01:02:55.055221 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
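The sequence above first lists the images already present on the guest via crictl, concludes that the v1.20.0 preload is missing, checks that lz4 is available and that /preloaded.tar.lz4 does not exist, and then copies the cached ~451 MiB tarball over SSH. A compact sketch of that "use the preload if present, otherwise ship it" decision; the ssh/scp invocations and the local cache path are modelled on the log but are stand-ins for minikube's ssh_runner, and the sketch assumes key-based SSH access to the guest:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	host := "docker@192.168.50.174" // guest address from the log
	// Is the preload tarball already on the guest?
	if err := exec.Command("ssh", host, "stat /preloaded.tar.lz4").Run(); err != nil {
		fmt.Println("no preload on guest, copying cached tarball")
		// Local cache path taken from the scp line above.
		local := "/home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
		if err := exec.Command("scp", local, host+":/preloaded.tar.lz4").Run(); err != nil {
			panic(err)
		}
	}
	fmt.Println("preload tarball in place")
}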
	I0328 01:02:53.416101 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Start
	I0328 01:02:53.416332 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Ensuring networks are active...
	I0328 01:02:53.417021 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Ensuring network default is active
	I0328 01:02:53.417446 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Ensuring network mk-default-k8s-diff-port-283961 is active
	I0328 01:02:53.417857 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Getting domain xml...
	I0328 01:02:53.418555 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Creating domain...
	I0328 01:02:54.777201 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting to get IP...
	I0328 01:02:54.778055 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:54.778563 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:54.778705 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:54.778537 1132240 retry.go:31] will retry after 259.031702ms: waiting for machine to come up
	I0328 01:02:55.039365 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.039926 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.039963 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:55.039860 1132240 retry.go:31] will retry after 254.124553ms: waiting for machine to come up
	I0328 01:02:55.295658 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.296218 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.296265 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:55.296174 1132240 retry.go:31] will retry after 349.637234ms: waiting for machine to come up
	I0328 01:02:55.647590 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.648356 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:55.648392 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:55.648298 1132240 retry.go:31] will retry after 446.471208ms: waiting for machine to come up
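
The retry.go lines above poll libvirt for the new domain's DHCP lease, sleeping for a growing, slightly randomized interval between attempts. A generic sketch of that retry-with-backoff pattern (a hypothetical helper, not minikube's actual retry package) is:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn until it succeeds or attempts are exhausted,
// sleeping for a growing, jittered interval between tries.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Grow the delay each round and add up to 50% jitter.
		delay := base * time.Duration(i+1)
		delay += time.Duration(rand.Int63n(int64(delay)/2 + 1))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 250*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("done:", err)
}
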
	I0328 01:02:53.707811 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:55.708380 1130949 pod_ready.go:102] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"False"
	I0328 01:02:57.213059 1130949 pod_ready.go:92] pod "coredns-76f75df574-pr5d8" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:57.213097 1130949 pod_ready.go:81] duration metric: took 10.513921238s for pod "coredns-76f75df574-pr5d8" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.213113 1130949 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.222308 1130949 pod_ready.go:92] pod "etcd-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:57.222344 1130949 pod_ready.go:81] duration metric: took 9.214056ms for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.222357 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.231530 1130949 pod_ready.go:92] pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:57.231558 1130949 pod_ready.go:81] duration metric: took 9.192864ms for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:57.231568 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:56.994163 1131323 crio.go:462] duration metric: took 1.943992561s to copy over tarball
	I0328 01:02:56.994252 1131323 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 01:03:00.215115 1131323 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.220825311s)
	I0328 01:03:00.215159 1131323 crio.go:469] duration metric: took 3.22095583s to extract the tarball
	I0328 01:03:00.215171 1131323 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0328 01:03:00.259151 1131323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:00.298446 1131323 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0328 01:03:00.298492 1131323 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0328 01:03:00.298585 1131323 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:00.298601 1131323 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.298613 1131323 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.298644 1131323 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.298662 1131323 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.298698 1131323 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0328 01:03:00.298593 1131323 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.298585 1131323 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.300347 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.300424 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.300470 1131323 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.300474 1131323 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.300637 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.300652 1131323 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0328 01:03:00.300723 1131323 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.300793 1131323 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:02:56.095939 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.096463 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.096501 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:56.096412 1132240 retry.go:31] will retry after 490.029649ms: waiting for machine to come up
	I0328 01:02:56.588298 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.588835 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:56.588868 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:56.588796 1132240 retry.go:31] will retry after 831.356628ms: waiting for machine to come up
	I0328 01:02:57.421917 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:57.422410 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:57.422443 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:57.422353 1132240 retry.go:31] will retry after 1.164764985s: waiting for machine to come up
	I0328 01:02:58.588827 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:58.589183 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:58.589225 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:58.589119 1132240 retry.go:31] will retry after 1.307248783s: waiting for machine to come up
	I0328 01:02:59.897607 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:02:59.897976 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:02:59.898008 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:02:59.897926 1132240 retry.go:31] will retry after 1.560958271s: waiting for machine to come up
	I0328 01:02:58.241179 1130949 pod_ready.go:92] pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:58.241216 1130949 pod_ready.go:81] duration metric: took 1.00963904s for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.241245 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qwzpg" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.249787 1130949 pod_ready.go:92] pod "kube-proxy-qwzpg" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:58.249826 1130949 pod_ready.go:81] duration metric: took 8.571225ms for pod "kube-proxy-qwzpg" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.249840 1130949 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.405101 1130949 pod_ready.go:92] pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:02:58.405130 1130949 pod_ready.go:81] duration metric: took 155.281142ms for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:02:58.405141 1130949 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:00.412202 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:02.412688 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
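
The pod_ready.go lines above repeatedly fetch a pod and inspect its Ready condition until it reports True or the 4m0s budget expires. A hedged sketch of the same check with client-go (assuming client-go is available in the module and a reachable kubeconfig exists; the pod name is taken from the log) is:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumes a reachable cluster described by ~/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-57f55c9bc5-swsxp", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
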
	I0328 01:03:00.499788 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0328 01:03:00.539135 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.541462 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.544184 1131323 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0328 01:03:00.544227 1131323 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0328 01:03:00.544261 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.555720 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.560189 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.562639 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.574105 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.681717 1131323 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0328 01:03:00.681742 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0328 01:03:00.681765 1131323 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.681803 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.682033 1131323 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0328 01:03:00.682076 1131323 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.682115 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.732868 1131323 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0328 01:03:00.732922 1131323 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.732988 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.742680 1131323 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0328 01:03:00.742730 1131323 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0328 01:03:00.742762 1131323 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.742777 1131323 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0328 01:03:00.742805 1131323 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.742770 1131323 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.742817 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.742851 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.742865 1131323 ssh_runner.go:195] Run: which crictl
	I0328 01:03:00.770435 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0328 01:03:00.770472 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0328 01:03:00.770567 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0328 01:03:00.770588 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0328 01:03:00.770727 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0328 01:03:00.770760 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0328 01:03:00.770728 1131323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0328 01:03:00.882338 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0328 01:03:00.896602 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0328 01:03:00.918814 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0328 01:03:00.918869 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0328 01:03:00.918919 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0328 01:03:00.918968 1131323 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0328 01:03:01.186124 1131323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:01.334547 1131323 cache_images.go:92] duration metric: took 1.036031169s to LoadCachedImages
	W0328 01:03:01.334676 1131323 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0328 01:03:01.334694 1131323 kubeadm.go:928] updating node { 192.168.50.174 8443 v1.20.0 crio true true} ...
	I0328 01:03:01.334827 1131323 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-986088 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:03:01.334926 1131323 ssh_runner.go:195] Run: crio config
	I0328 01:03:01.391004 1131323 cni.go:84] Creating CNI manager for ""
	I0328 01:03:01.391034 1131323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:01.391054 1131323 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:03:01.391081 1131323 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.174 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-986088 NodeName:old-k8s-version-986088 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0328 01:03:01.391265 1131323 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-986088"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 01:03:01.391347 1131323 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0328 01:03:01.403684 1131323 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:03:01.403779 1131323 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:03:01.415168 1131323 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0328 01:03:01.434329 1131323 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:03:01.456280 1131323 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
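
The kubeadm config echoed a few lines above is rendered from the kubeadm options recorded at kubeadm.go:181 and then written to /var/tmp/minikube/kubeadm.yaml.new on the node. A small sketch of producing such a fragment with text/template (illustrative struct and field names only, not minikube's actual template) is:

package main

import (
	"os"
	"text/template"
)

// nodeParams carries the handful of values substituted into the
// InitConfiguration fragment below (an illustrative subset).
type nodeParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  taints: []
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initConfigTmpl))
	p := nodeParams{
		AdvertiseAddress: "192.168.50.174",
		BindPort:         8443,
		NodeName:         "old-k8s-version-986088",
		CRISocket:        "/var/run/crio/crio.sock",
	}
	// Write the rendered fragment to stdout; the tool in the log copies
	// the full document to /var/tmp/minikube/kubeadm.yaml.new instead.
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
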
	I0328 01:03:01.476625 1131323 ssh_runner.go:195] Run: grep 192.168.50.174	control-plane.minikube.internal$ /etc/hosts
	I0328 01:03:01.480867 1131323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:01.493833 1131323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:01.642273 1131323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:03:01.661857 1131323 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088 for IP: 192.168.50.174
	I0328 01:03:01.661887 1131323 certs.go:194] generating shared ca certs ...
	I0328 01:03:01.661909 1131323 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:01.662115 1131323 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:03:01.662174 1131323 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:03:01.662188 1131323 certs.go:256] generating profile certs ...
	I0328 01:03:01.662324 1131323 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/client.key
	I0328 01:03:01.662399 1131323 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.key.b88fbc7e
	I0328 01:03:01.662447 1131323 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.key
	I0328 01:03:01.662600 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:03:01.662656 1131323 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:03:01.662672 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:03:01.662703 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:03:01.662738 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:03:01.662774 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:03:01.662826 1131323 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:01.663831 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:03:01.697171 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:03:01.742118 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:03:01.783263 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:03:01.831682 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0328 01:03:01.878051 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:03:01.915626 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:03:01.942247 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/old-k8s-version-986088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:03:01.969054 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:03:01.998651 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:03:02.024881 1131323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:03:02.051284 1131323 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:03:02.070414 1131323 ssh_runner.go:195] Run: openssl version
	I0328 01:03:02.076635 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:03:02.089288 1131323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:03:02.094260 1131323 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:03:02.094322 1131323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:03:02.100846 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:03:02.114474 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:03:02.126467 1131323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:02.131240 1131323 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:02.131293 1131323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:02.137496 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 01:03:02.150863 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:03:02.163536 1131323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:03:02.168767 1131323 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:03:02.168850 1131323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:03:02.175218 1131323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:03:02.188272 1131323 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:03:02.193348 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:03:02.199969 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:03:02.206424 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:03:02.213530 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:03:02.220136 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:03:02.226502 1131323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
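
Each of the "openssl x509 -noout -in ... -checkend 86400" runs above asks whether a certificate will expire within the next 24 hours. The equivalent check in Go's crypto/x509, sketched over a local PEM file path (the path used here is illustrative), looks like:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path
// will expire within d (the openssl -checkend semantics).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Illustrative path; the log checks several certs under
	// /var/lib/minikube/certs on the node.
	soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
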
	I0328 01:03:02.232708 1131323 kubeadm.go:391] StartCluster: {Name:old-k8s-version-986088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:03:02.232831 1131323 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:03:02.232890 1131323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:02.280062 1131323 cri.go:89] found id: ""
	I0328 01:03:02.280160 1131323 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:03:02.291968 1131323 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:03:02.292003 1131323 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:03:02.292011 1131323 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:03:02.292072 1131323 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:03:02.304006 1131323 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:03:02.305105 1131323 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-986088" does not appear in /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:03:02.305785 1131323 kubeconfig.go:62] /home/jenkins/minikube-integration/18485-1069254/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-986088" cluster setting kubeconfig missing "old-k8s-version-986088" context setting]
	I0328 01:03:02.306728 1131323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:02.308610 1131323 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:03:02.320212 1131323 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.174
	I0328 01:03:02.320265 1131323 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:03:02.320283 1131323 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:03:02.320356 1131323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:02.366411 1131323 cri.go:89] found id: ""
	I0328 01:03:02.366500 1131323 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:03:02.388351 1131323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:03:02.402621 1131323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:03:02.402652 1131323 kubeadm.go:156] found existing configuration files:
	
	I0328 01:03:02.402718 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:03:02.415559 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:03:02.415633 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:03:02.426666 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:03:02.439497 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:03:02.439558 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:03:02.451040 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:03:02.461780 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:03:02.461876 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:03:02.473295 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:03:02.484762 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:03:02.484841 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:03:02.496304 1131323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:03:02.507634 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:02.641980 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:03.598106 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:03.840026 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:03.970336 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
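
The commands above run individual kubeadm init phases with the versioned binaries directory prepended to PATH for the child process. A sketch of invoking one phase the same way from Go via os/exec (the wrapper function itself is hypothetical; the paths are the ones shown in the log) is:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runKubeadmPhase runs `kubeadm init phase <phase...>` against the given
// config file, with binDir prepended to PATH for the child process only.
func runKubeadmPhase(binDir, config string, phase ...string) error {
	args := append([]string{"env", "PATH=" + binDir + ":" + os.Getenv("PATH"),
		"kubeadm", "init", "phase"}, phase...)
	args = append(args, "--config", config)
	cmd := exec.Command("sudo", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	err := runKubeadmPhase("/var/lib/minikube/binaries/v1.20.0",
		"/var/tmp/minikube/kubeadm.yaml", "certs", "all")
	fmt.Println("kubeadm phase finished:", err)
}
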
	I0328 01:03:04.067774 1131323 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:03:04.067911 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:04.568260 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:05.068794 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:01.460535 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:01.461008 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:01.461039 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:01.460962 1132240 retry.go:31] will retry after 1.839531745s: waiting for machine to come up
	I0328 01:03:03.302965 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:03.303445 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:03.303479 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:03.303387 1132240 retry.go:31] will retry after 2.461748315s: waiting for machine to come up
	I0328 01:03:04.413898 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:06.913608 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:05.568716 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:06.068362 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:06.568235 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:07.068696 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:07.567976 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:08.068032 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:08.568586 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:09.068046 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:09.568699 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:10.067967 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
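
api_server.go above polls roughly every 500ms, via "sudo pgrep -xnf kube-apiserver.*minikube.*", for the apiserver process to appear. A compact sketch of that wait loop (the command line is taken from the log; the timeout value is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process
// matching the minikube command line shows up, or the timeout expires.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when at least one process matches.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
}

func main() {
	fmt.Println(waitForAPIServerProcess(2 * time.Minute))
}
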
	I0328 01:03:05.767795 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:05.768329 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:05.768360 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:05.768279 1132240 retry.go:31] will retry after 2.321291255s: waiting for machine to come up
	I0328 01:03:08.092644 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:08.093094 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | unable to find current IP address of domain default-k8s-diff-port-283961 in network mk-default-k8s-diff-port-283961
	I0328 01:03:08.093131 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | I0328 01:03:08.093046 1132240 retry.go:31] will retry after 4.151205276s: waiting for machine to come up
	I0328 01:03:09.413199 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:11.912234 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:13.671756 1130827 start.go:364] duration metric: took 54.966750689s to acquireMachinesLock for "no-preload-248059"
	I0328 01:03:13.671815 1130827 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:03:13.671823 1130827 fix.go:54] fixHost starting: 
	I0328 01:03:13.672255 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:03:13.672292 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:03:13.689811 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42017
	I0328 01:03:13.690364 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:03:13.690817 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:03:13.690843 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:03:13.691213 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:03:13.691395 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:13.691523 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:03:13.693093 1130827 fix.go:112] recreateIfNeeded on no-preload-248059: state=Stopped err=<nil>
	I0328 01:03:13.693123 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	W0328 01:03:13.693280 1130827 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:03:13.695158 1130827 out.go:177] * Restarting existing kvm2 VM for "no-preload-248059" ...
	I0328 01:03:10.568240 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:11.068028 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:11.568146 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:12.068467 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:12.568820 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:13.068031 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:13.568977 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:14.068050 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:14.567938 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:15.068711 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:12.248769 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.249440 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Found IP for machine: 192.168.39.224
	I0328 01:03:12.249467 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Reserving static IP address...
	I0328 01:03:12.249498 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has current primary IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.249832 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-283961", mac: "52:54:00:c4:df:6f", ip: "192.168.39.224"} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.249872 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | skip adding static IP to network mk-default-k8s-diff-port-283961 - found existing host DHCP lease matching {name: "default-k8s-diff-port-283961", mac: "52:54:00:c4:df:6f", ip: "192.168.39.224"}
	I0328 01:03:12.249888 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Reserved static IP address: 192.168.39.224
	I0328 01:03:12.249908 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Waiting for SSH to be available...
	I0328 01:03:12.249921 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Getting to WaitForSSH function...
	I0328 01:03:12.252053 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.252487 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.252521 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.252646 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Using SSH client type: external
	I0328 01:03:12.252677 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa (-rw-------)
	I0328 01:03:12.252709 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.224 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:03:12.252731 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | About to run SSH command:
	I0328 01:03:12.252750 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | exit 0
	I0328 01:03:12.378419 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | SSH cmd err, output: <nil>: 
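
The WaitForSSH block above shells out to the system ssh binary with a throw-away known-hosts file and runs "exit 0" until the guest answers. A sketch of the same reachability probe via os/exec (the options are copied from the log; the helper name is made up):

package main

import (
	"fmt"
	"os/exec"
)

// sshReachable runs `exit 0` on the guest through the system ssh binary,
// using the same non-interactive options seen in the log.
func sshReachable(user, ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		user+"@"+ip,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	ok := sshReachable("docker", "192.168.39.224",
		"/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa")
	fmt.Println("ssh reachable:", ok)
}
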
	I0328 01:03:12.378866 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetConfigRaw
	I0328 01:03:12.379659 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:12.382631 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.382997 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.383023 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.383276 1131600 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/config.json ...
	I0328 01:03:12.383534 1131600 machine.go:94] provisionDockerMachine start ...
	I0328 01:03:12.383567 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:12.383805 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.386472 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.386839 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.386870 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.387035 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.387240 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.387399 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.387577 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.387729 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:12.387931 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:12.387943 1131600 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:03:12.499608 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:03:12.499644 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetMachineName
	I0328 01:03:12.499930 1131600 buildroot.go:166] provisioning hostname "default-k8s-diff-port-283961"
	I0328 01:03:12.499962 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetMachineName
	I0328 01:03:12.500154 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.502737 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.503089 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.503120 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.503295 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.503516 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.503725 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.503892 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.504093 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:12.504271 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:12.504285 1131600 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-283961 && echo "default-k8s-diff-port-283961" | sudo tee /etc/hostname
	I0328 01:03:12.625590 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-283961
	
	I0328 01:03:12.625624 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.628570 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.628883 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.628968 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.629143 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.629397 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.629627 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.629825 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.630008 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:12.630191 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:12.630210 1131600 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-283961' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-283961/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-283961' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:03:12.744240 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:03:12.744280 1131600 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:03:12.744327 1131600 buildroot.go:174] setting up certificates
	I0328 01:03:12.744342 1131600 provision.go:84] configureAuth start
	I0328 01:03:12.744361 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetMachineName
	I0328 01:03:12.744722 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:12.747139 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.747448 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.747478 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.747582 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.749705 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.749964 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.749995 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.750125 1131600 provision.go:143] copyHostCerts
	I0328 01:03:12.750203 1131600 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:03:12.750217 1131600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:03:12.750323 1131600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:03:12.750435 1131600 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:03:12.750446 1131600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:03:12.750479 1131600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:03:12.750557 1131600 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:03:12.750567 1131600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:03:12.750599 1131600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:03:12.750670 1131600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-283961 san=[127.0.0.1 192.168.39.224 default-k8s-diff-port-283961 localhost minikube]
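(Note: the server certificate above is generated inside minikube itself; for readers reproducing this step by hand, an equivalent certificate with the same SAN list can be produced with plain openssl. This is a hedged sketch under that assumption, not minikube's actual code path, and the file names are illustrative.)

  # issue a server cert signed by the profile CA, with the SANs listed in the log line above
  openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.default-k8s-diff-port-283961" \
    -keyout server-key.pem -out server.csr
  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
    -out server.pem \
    -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.224,DNS:default-k8s-diff-port-283961,DNS:localhost,DNS:minikube')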
	I0328 01:03:12.963182 1131600 provision.go:177] copyRemoteCerts
	I0328 01:03:12.963265 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:03:12.963313 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:12.965946 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.966177 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:12.966207 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:12.966347 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:12.966573 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:12.966773 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:12.966934 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.057477 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:03:13.083706 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0328 01:03:13.109167 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:03:13.136835 1131600 provision.go:87] duration metric: took 392.475069ms to configureAuth
	I0328 01:03:13.136867 1131600 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:03:13.137048 1131600 config.go:182] Loaded profile config "default-k8s-diff-port-283961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:03:13.137131 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.139508 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.139761 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.139792 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.139959 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.140170 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.140343 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.140502 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.140685 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:13.140873 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:13.140897 1131600 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:03:13.422372 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:03:13.422405 1131600 machine.go:97] duration metric: took 1.038857021s to provisionDockerMachine
	I0328 01:03:13.422418 1131600 start.go:293] postStartSetup for "default-k8s-diff-port-283961" (driver="kvm2")
	I0328 01:03:13.422428 1131600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:03:13.422456 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.422788 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:03:13.422819 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.425539 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.425865 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.425894 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.426023 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.426225 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.426407 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.426577 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.511874 1131600 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:03:13.516643 1131600 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:03:13.516673 1131600 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:03:13.516749 1131600 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:03:13.516846 1131600 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:03:13.516969 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:03:13.529004 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:13.557244 1131600 start.go:296] duration metric: took 134.810243ms for postStartSetup
	I0328 01:03:13.557289 1131600 fix.go:56] duration metric: took 20.165726422s for fixHost
	I0328 01:03:13.557313 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.560216 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.560585 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.560623 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.560803 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.561050 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.561188 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.561303 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.561552 1131600 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:13.561742 1131600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0328 01:03:13.561757 1131600 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 01:03:13.671545 1131600 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587793.617322674
	
	I0328 01:03:13.671570 1131600 fix.go:216] guest clock: 1711587793.617322674
	I0328 01:03:13.671578 1131600 fix.go:229] Guest: 2024-03-28 01:03:13.617322674 +0000 UTC Remote: 2024-03-28 01:03:13.55729386 +0000 UTC m=+187.934897846 (delta=60.028814ms)
	I0328 01:03:13.671632 1131600 fix.go:200] guest clock delta is within tolerance: 60.028814ms
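(Note: the `date +%!s(MISSING).%!N(MISSING)` text a few lines up is a printf-verb artifact in minikube's own log formatting; the command actually run on the guest is, to the best of our reading, the usual high-resolution timestamp query that produced the 1711587793.617322674 value compared against the host clock.)

  date +%s.%N   # seconds.nanoseconds since the epoch, e.g. 1711587793.617322674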
	I0328 01:03:13.671642 1131600 start.go:83] releasing machines lock for "default-k8s-diff-port-283961", held for 20.280118311s
	I0328 01:03:13.671673 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.671976 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:13.674978 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.675384 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.675436 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.675562 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.676167 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.676337 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:03:13.676436 1131600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:03:13.676501 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.676557 1131600 ssh_runner.go:195] Run: cat /version.json
	I0328 01:03:13.676578 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:03:13.679418 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679452 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679758 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.679785 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679813 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:13.679832 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:13.679986 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.680089 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:03:13.680190 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.680255 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:03:13.680345 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.680410 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:03:13.680517 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.680608 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:03:13.759826 1131600 ssh_runner.go:195] Run: systemctl --version
	I0328 01:03:13.796647 1131600 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:03:13.947036 1131600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:03:13.954165 1131600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:03:13.954265 1131600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:03:13.973503 1131600 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:03:13.973538 1131600 start.go:494] detecting cgroup driver to use...
	I0328 01:03:13.973629 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:03:13.997675 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:03:14.015349 1131600 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:03:14.015421 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:03:14.031099 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:03:14.046446 1131600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:03:14.186993 1131600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:03:14.351164 1131600 docker.go:233] disabling docker service ...
	I0328 01:03:14.351232 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:03:14.370629 1131600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:03:14.387837 1131600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:03:14.544060 1131600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:03:14.707699 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:03:14.725658 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:03:14.746063 1131600 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 01:03:14.746141 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.759244 1131600 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:03:14.759317 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.773015 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.786810 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.807101 1131600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:03:14.821013 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.834181 1131600 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.861163 1131600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:14.874274 1131600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:03:14.885890 1131600 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:03:14.885968 1131600 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:03:14.903142 1131600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:03:14.916364 1131600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:15.073343 1131600 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 01:03:15.218406 1131600 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:03:15.218500 1131600 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:03:15.226299 1131600 start.go:562] Will wait 60s for crictl version
	I0328 01:03:15.226373 1131600 ssh_runner.go:195] Run: which crictl
	I0328 01:03:15.232051 1131600 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:03:15.278793 1131600 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:03:15.278903 1131600 ssh_runner.go:195] Run: crio --version
	I0328 01:03:15.313408 1131600 ssh_runner.go:195] Run: crio --version
	I0328 01:03:15.351613 1131600 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0328 01:03:15.353013 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetIP
	I0328 01:03:15.355924 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:15.356306 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:03:15.356341 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:03:15.356555 1131600 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0328 01:03:15.361194 1131600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:15.380926 1131600 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:03:15.381043 1131600 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 01:03:15.381099 1131600 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:15.423322 1131600 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0328 01:03:15.423409 1131600 ssh_runner.go:195] Run: which lz4
	I0328 01:03:15.428123 1131600 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0328 01:03:15.433023 1131600 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 01:03:15.433065 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
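(Note: because no kube-system images were found in the guest, the preload tarball is copied in and later unpacked — the `tar --xattrs ... -I lz4` run appears a few lines below. A hedged manual equivalent, reusing the SSH key path from this log and an illustrative /tmp upload path:)

  KEY=/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa
  scp -i "$KEY" preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 docker@192.168.39.224:/tmp/preloaded.tar.lz4
  ssh -i "$KEY" docker@192.168.39.224 \
    "sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && sudo crictl images"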
	I0328 01:03:13.696314 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Start
	I0328 01:03:13.696506 1130827 main.go:141] libmachine: (no-preload-248059) Ensuring networks are active...
	I0328 01:03:13.697344 1130827 main.go:141] libmachine: (no-preload-248059) Ensuring network default is active
	I0328 01:03:13.697668 1130827 main.go:141] libmachine: (no-preload-248059) Ensuring network mk-no-preload-248059 is active
	I0328 01:03:13.698009 1130827 main.go:141] libmachine: (no-preload-248059) Getting domain xml...
	I0328 01:03:13.698805 1130827 main.go:141] libmachine: (no-preload-248059) Creating domain...
	I0328 01:03:14.955922 1130827 main.go:141] libmachine: (no-preload-248059) Waiting to get IP...
	I0328 01:03:14.957088 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:14.957534 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:14.957660 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:14.957533 1132389 retry.go:31] will retry after 222.894093ms: waiting for machine to come up
	I0328 01:03:15.182078 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:15.182541 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:15.182580 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:15.182528 1132389 retry.go:31] will retry after 263.74163ms: waiting for machine to come up
	I0328 01:03:15.448081 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:15.448653 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:15.448684 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:15.448586 1132389 retry.go:31] will retry after 444.066222ms: waiting for machine to come up
	I0328 01:03:15.894141 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:15.894695 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:15.894732 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:15.894650 1132389 retry.go:31] will retry after 469.421771ms: waiting for machine to come up
	I0328 01:03:14.413443 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:16.418789 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:15.568507 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:16.068210 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:16.568761 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:17.067929 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:17.568403 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:18.068454 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:18.568086 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:19.068049 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:19.569020 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:20.068068 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:17.139682 1131600 crio.go:462] duration metric: took 1.71160157s to copy over tarball
	I0328 01:03:17.139764 1131600 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 01:03:19.581198 1131600 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.441406061s)
	I0328 01:03:19.581229 1131600 crio.go:469] duration metric: took 2.441510253s to extract the tarball
	I0328 01:03:19.581241 1131600 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0328 01:03:19.620964 1131600 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:19.666765 1131600 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 01:03:19.666791 1131600 cache_images.go:84] Images are preloaded, skipping loading
	I0328 01:03:19.666802 1131600 kubeadm.go:928] updating node { 192.168.39.224 8444 v1.29.3 crio true true} ...
	I0328 01:03:19.666921 1131600 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-283961 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.224
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
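(Note: the block above is the systemd drop-in minikube writes for the kubelet; per the scp a bit further down, it lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A hedged way to confirm it on the guest:)

  sudo systemctl cat kubelet                                        # unit file plus all drop-ins
  sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf    # the ExecStart override shown above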
	I0328 01:03:19.666987 1131600 ssh_runner.go:195] Run: crio config
	I0328 01:03:19.716082 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:03:19.716106 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:19.716115 1131600 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:03:19.716139 1131600 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.224 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-283961 NodeName:default-k8s-diff-port-283961 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.224"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.224 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 01:03:19.716323 1131600 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.224
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-283961"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.224
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.224"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
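(Note: a config like the one above can be sanity-checked against the target kubeadm without touching the cluster. `kubeadm init --dry-run` is a standard flag, though minikube itself drives kubeadm differently; the binary and config paths below are the ones this log references, so treat this as a hedged sketch.)

  sudo /var/lib/minikube/binaries/v1.29.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run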
	
	I0328 01:03:19.716399 1131600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 01:03:19.727826 1131600 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:03:19.727913 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:03:19.738525 1131600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0328 01:03:19.756732 1131600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:03:19.776665 1131600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0328 01:03:19.795756 1131600 ssh_runner.go:195] Run: grep 192.168.39.224	control-plane.minikube.internal$ /etc/hosts
	I0328 01:03:19.800097 1131600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.224	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:19.813019 1131600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:19.946740 1131600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:03:19.964216 1131600 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961 for IP: 192.168.39.224
	I0328 01:03:19.964244 1131600 certs.go:194] generating shared ca certs ...
	I0328 01:03:19.964262 1131600 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:19.964448 1131600 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:03:19.964524 1131600 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:03:19.964538 1131600 certs.go:256] generating profile certs ...
	I0328 01:03:19.964648 1131600 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/client.key
	I0328 01:03:19.964735 1131600 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/apiserver.key.22bfb146
	I0328 01:03:19.964810 1131600 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/proxy-client.key
	I0328 01:03:19.964956 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:03:19.965008 1131600 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:03:19.965021 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:03:19.965058 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:03:19.965091 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:03:19.965113 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:03:19.965154 1131600 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:19.966026 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:03:19.998578 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:03:20.042666 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:03:20.075405 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:03:20.117888 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0328 01:03:20.145160 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:03:20.178207 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:03:20.208610 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/default-k8s-diff-port-283961/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:03:20.235356 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:03:20.262434 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:03:20.291315 1131600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:03:20.318034 1131600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:03:20.337627 1131600 ssh_runner.go:195] Run: openssl version
	I0328 01:03:20.344242 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:03:20.360732 1131600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:20.365858 1131600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:20.365926 1131600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:20.372120 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 01:03:20.384554 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:03:20.401731 1131600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:03:20.406945 1131600 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:03:20.407024 1131600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:03:20.414661 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:03:20.427573 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:03:20.439807 1131600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:03:20.445064 1131600 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:03:20.445138 1131600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:03:20.451754 1131600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:03:20.464988 1131600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:03:20.470461 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:03:20.477200 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:03:20.484238 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:03:20.491125 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:03:20.497888 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:03:20.504680 1131600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
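(Note: each `-checkend 86400` call above asks openssl whether the certificate will still be valid 24 hours from now; exit status 0 means it will not expire within that window. A hedged one-liner for checking a single cert by hand:)

  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
    && echo "valid for at least 24h" || echo "expires within 24h"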
	I0328 01:03:20.511372 1131600 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-283961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-283961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:03:20.511477 1131600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:03:20.511542 1131600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:20.552247 1131600 cri.go:89] found id: ""
	I0328 01:03:20.552345 1131600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:03:20.564906 1131600 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:03:20.564937 1131600 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:03:20.564944 1131600 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:03:20.565002 1131600 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:03:20.576394 1131600 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:03:20.593699 1131600 kubeconfig.go:125] found "default-k8s-diff-port-283961" server: "https://192.168.39.224:8444"
	I0328 01:03:20.595978 1131600 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:03:20.609519 1131600 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.224
	I0328 01:03:20.609565 1131600 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:03:20.609583 1131600 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:03:20.609651 1131600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:20.651892 1131600 cri.go:89] found id: ""
	I0328 01:03:20.651967 1131600 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:03:20.671895 1131600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
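	[editor's note] The two crictl calls above (cri.go:54 / ssh_runner.go:195) enumerate kube-system containers by pod-namespace label before the restart. A minimal, hypothetical Go sketch of the same listing, using os/exec instead of minikube's ssh_runner (crictl and sudo are assumed to be present on the host; this is not minikube's cri.go):

	// Illustrative only: list container IDs whose pod namespace label is kube-system,
	// mirroring: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func kubeSystemContainerIDs() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		// crictl prints one container ID per line; an empty result matches the
		// `found id: ""` lines in the log.
		return strings.Fields(strings.TrimSpace(string(out))), nil
	}

	func main() {
		ids, err := kubeSystemContainerIDs()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
	}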
	I0328 01:03:16.365505 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:16.366404 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:16.366435 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:16.366360 1132389 retry.go:31] will retry after 488.383898ms: waiting for machine to come up
	I0328 01:03:16.856125 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:16.856727 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:16.856761 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:16.856626 1132389 retry.go:31] will retry after 617.77144ms: waiting for machine to come up
	I0328 01:03:17.476749 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:17.477351 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:17.477386 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:17.477282 1132389 retry.go:31] will retry after 835.951988ms: waiting for machine to come up
	I0328 01:03:18.315387 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:18.315894 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:18.315925 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:18.315848 1132389 retry.go:31] will retry after 1.405695765s: waiting for machine to come up
	I0328 01:03:19.723053 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:19.723559 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:19.723591 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:19.723473 1132389 retry.go:31] will retry after 1.555358462s: waiting for machine to come up
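	[editor's note] The retry.go lines above show the kvm2 driver polling libvirt for the domain's DHCP lease with growing delays. A generic, hypothetical sketch of that wait loop; lookupIP is a stand-in helper, not minikube's API, and the backoff only loosely mirrors the logged intervals:

	// Illustrative retry-with-backoff loop: poll until the machine reports an IP.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	var errNoLease = errors.New("no DHCP lease yet")

	// lookupIP is a hypothetical stand-in for querying libvirt for the domain's lease.
	func lookupIP(domain string) (string, error) {
		return "", errNoLease // placeholder: replace with a real libvirt query
	}

	func waitForIP(domain string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 300 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(domain); err == nil && ip != "" {
				return ip, nil
			}
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
			if delay < 3*time.Second {
				delay = delay * 3 / 2 // grow the delay, as the retry.go lines above do
			}
		}
		return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
	}

	func main() {
		if _, err := waitForIP("no-preload-248059", 5*time.Second); err != nil {
			fmt.Println(err)
		}
	}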
	I0328 01:03:18.913403 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:21.599662 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:20.568464 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:21.068983 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:21.568470 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:22.068772 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:22.568940 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:23.068907 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:23.568272 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.068055 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.568056 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:25.068006 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:20.685320 1131600 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:03:21.187521 1131600 kubeadm.go:156] found existing configuration files:
	
	I0328 01:03:21.187587 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0328 01:03:21.200463 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:03:21.200533 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:03:21.212763 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0328 01:03:21.224344 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:03:21.224419 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:03:21.235869 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0328 01:03:21.245970 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:03:21.246045 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:03:21.258589 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0328 01:03:21.270651 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:03:21.270724 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:03:21.283074 1131600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:03:21.295811 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:21.668224 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.046357 1131600 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.378083996s)
	I0328 01:03:23.046401 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.271959 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.353976 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:23.501611 1131600 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:03:23.501734 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.002619 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.502614 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:24.547383 1131600 api_server.go:72] duration metric: took 1.045771287s to wait for apiserver process to appear ...
	I0328 01:03:24.547419 1131600 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:03:24.547447 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:24.548081 1131600 api_server.go:269] stopped: https://192.168.39.224:8444/healthz: Get "https://192.168.39.224:8444/healthz": dial tcp 192.168.39.224:8444: connect: connection refused
	I0328 01:03:25.047885 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
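	[editor's note] The kubeadm.go:162 lines above grep each kubeconfig under /etc/kubernetes for the expected control-plane URL and remove any file that does not reference it, before re-running the kubeadm init phases. A minimal, hypothetical Go sketch of that stale-config cleanup (paths and URL are taken from the log; the helper name is mine, not minikube's):

	// Illustrative sketch: keep a kubeconfig only if it references the expected
	// control-plane endpoint, otherwise remove it so kubeadm can regenerate it.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func cleanStaleConfigs(endpoint string, paths []string) {
		for _, p := range paths {
			data, err := os.ReadFile(p)
			if err != nil {
				// A missing file matches the "No such file or directory" grep results above.
				fmt.Printf("%s: %v (treating as stale)\n", p, err)
				_ = os.Remove(p)
				continue
			}
			if !strings.Contains(string(data), endpoint) {
				fmt.Printf("%s does not reference %s, removing\n", p, endpoint)
				_ = os.Remove(p)
			}
		}
	}

	func main() {
		cleanStaleConfigs("https://control-plane.minikube.internal:8444", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}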
	I0328 01:03:21.279945 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:21.590947 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:21.590967 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:21.280358 1132389 retry.go:31] will retry after 1.905587467s: waiting for machine to come up
	I0328 01:03:23.187571 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:23.188214 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:23.188248 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:23.188159 1132389 retry.go:31] will retry after 2.68043246s: waiting for machine to come up
	I0328 01:03:25.871414 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:25.871997 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:25.872030 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:25.871956 1132389 retry.go:31] will retry after 2.689404788s: waiting for machine to come up
	I0328 01:03:23.913816 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:26.413616 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:27.352533 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:03:27.352570 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:03:27.352589 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:27.453408 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:27.453448 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:27.547781 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:27.552703 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:27.552738 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:28.048135 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:28.053291 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:28.053322 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:28.548374 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:28.553141 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:03:28.553178 1131600 api_server.go:103] status: https://192.168.39.224:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:03:29.047609 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:03:29.053027 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 200:
	ok
	I0328 01:03:29.060710 1131600 api_server.go:141] control plane version: v1.29.3
	I0328 01:03:29.060747 1131600 api_server.go:131] duration metric: took 4.513320481s to wait for apiserver health ...
	I0328 01:03:29.060757 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:03:29.060764 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:29.062763 1131600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
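	[editor's note] Before the CNI step above, the api_server.go lines poll https://192.168.39.224:8444/healthz until it returns 200, treating connection refusals, 403 (anonymous access before RBAC bootstrap completes) and 500 (post-start hooks still failing) as "not ready yet". A small, hypothetical sketch of such a loop, not minikube's api_server.go; certificate verification is skipped only because the endpoint serves a self-signed test certificate:

	// Illustrative healthz poller, loosely modelled on the checks logged above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   3 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // the plain "ok" response seen at 01:03:29.053027
				}
				// 403 before RBAC bootstrap and 500 while post-start hooks fail both mean "retry".
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			} else {
				fmt.Println("healthz not reachable yet:", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.224:8444/healthz", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}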
	I0328 01:03:25.568927 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:26.068371 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:26.568107 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:27.068037 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:27.567985 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:28.068036 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:28.568843 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:29.068483 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:29.568942 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:30.068849 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:29.064492 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:03:29.089164 1131600 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:03:29.115071 1131600 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:03:29.126819 1131600 system_pods.go:59] 8 kube-system pods found
	I0328 01:03:29.126871 1131600 system_pods.go:61] "coredns-76f75df574-79cdj" [48ffe344-a386-4904-a73e-56e3ce0a8bef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:03:29.126885 1131600 system_pods.go:61] "etcd-default-k8s-diff-port-283961" [1d8fc768-e39c-4c96-bd65-2ae76fc9c6ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 01:03:29.126898 1131600 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-283961" [7c5c9f85-f16f-4248-8d2d-73c1ed2b0128] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 01:03:29.126912 1131600 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-283961" [2e943e7b-5506-4797-9e77-4a33e06056fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 01:03:29.126931 1131600 system_pods.go:61] "kube-proxy-d776v" [c1c86f61-b074-4a51-89e6-17c7b1076748] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:03:29.126944 1131600 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-283961" [8a840579-4145-4b68-ab3f-b1ebd3d63e81] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 01:03:29.126956 1131600 system_pods.go:61] "metrics-server-57f55c9bc5-w4ww4" [6d60f9e6-8ac7-4fad-91dc-61520586666c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:03:29.126968 1131600 system_pods.go:61] "storage-provisioner" [2b5e2e68-7e7c-46ec-bcec-ff9b01cbb8d9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:03:29.126979 1131600 system_pods.go:74] duration metric: took 11.875076ms to wait for pod list to return data ...
	I0328 01:03:29.126992 1131600 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:03:29.130927 1131600 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:03:29.130971 1131600 node_conditions.go:123] node cpu capacity is 2
	I0328 01:03:29.130986 1131600 node_conditions.go:105] duration metric: took 3.984383ms to run NodePressure ...
	I0328 01:03:29.131011 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:29.421513 1131600 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 01:03:29.426043 1131600 kubeadm.go:733] kubelet initialised
	I0328 01:03:29.426104 1131600 kubeadm.go:734] duration metric: took 4.524275ms waiting for restarted kubelet to initialise ...
	I0328 01:03:29.426114 1131600 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:03:29.432378 1131600 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-79cdj" in "kube-system" namespace to be "Ready" ...
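	[editor's note] The system_pods.go and pod_ready.go lines above list the kube-system pods and then wait for the system-critical ones to report Ready. A hypothetical client-go sketch of the same kind of check; the kubeconfig path is an assumption and this is not minikube's implementation:

	// Illustrative client-go sketch: list kube-system pods and report which are Ready.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(p corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			fmt.Printf("%-55s ready=%v phase=%s\n", p.Name, podReady(p), p.Status.Phase)
		}
	}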
	I0328 01:03:28.563249 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:28.563778 1130827 main.go:141] libmachine: (no-preload-248059) DBG | unable to find current IP address of domain no-preload-248059 in network mk-no-preload-248059
	I0328 01:03:28.563808 1130827 main.go:141] libmachine: (no-preload-248059) DBG | I0328 01:03:28.563718 1132389 retry.go:31] will retry after 2.919225956s: waiting for machine to come up
	I0328 01:03:28.913653 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:30.914379 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:31.484584 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.485027 1130827 main.go:141] libmachine: (no-preload-248059) Found IP for machine: 192.168.61.107
	I0328 01:03:31.485048 1130827 main.go:141] libmachine: (no-preload-248059) Reserving static IP address...
	I0328 01:03:31.485065 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has current primary IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.485584 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "no-preload-248059", mac: "52:54:00:58:33:e2", ip: "192.168.61.107"} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.485617 1130827 main.go:141] libmachine: (no-preload-248059) Reserved static IP address: 192.168.61.107
	I0328 01:03:31.485638 1130827 main.go:141] libmachine: (no-preload-248059) DBG | skip adding static IP to network mk-no-preload-248059 - found existing host DHCP lease matching {name: "no-preload-248059", mac: "52:54:00:58:33:e2", ip: "192.168.61.107"}
	I0328 01:03:31.485651 1130827 main.go:141] libmachine: (no-preload-248059) Waiting for SSH to be available...
	I0328 01:03:31.485671 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Getting to WaitForSSH function...
	I0328 01:03:31.487909 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.488293 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.488322 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.488469 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Using SSH client type: external
	I0328 01:03:31.488506 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Using SSH private key: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa (-rw-------)
	I0328 01:03:31.488531 1130827 main.go:141] libmachine: (no-preload-248059) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0328 01:03:31.488541 1130827 main.go:141] libmachine: (no-preload-248059) DBG | About to run SSH command:
	I0328 01:03:31.488555 1130827 main.go:141] libmachine: (no-preload-248059) DBG | exit 0
	I0328 01:03:31.618358 1130827 main.go:141] libmachine: (no-preload-248059) DBG | SSH cmd err, output: <nil>: 
	I0328 01:03:31.618786 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetConfigRaw
	I0328 01:03:31.619494 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:31.622183 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.622555 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.622584 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.622889 1130827 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/config.json ...
	I0328 01:03:31.623120 1130827 machine.go:94] provisionDockerMachine start ...
	I0328 01:03:31.623147 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:31.623400 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:31.626078 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.626432 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.626458 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.626663 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:31.626864 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.627031 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.627179 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:31.627380 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:31.627595 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:31.627611 1130827 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:03:31.739662 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:03:31.739699 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:03:31.740049 1130827 buildroot.go:166] provisioning hostname "no-preload-248059"
	I0328 01:03:31.740086 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:03:31.740421 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:31.743410 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.743776 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.743811 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.744001 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:31.744212 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.744394 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.744515 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:31.744669 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:31.744846 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:31.744860 1130827 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-248059 && echo "no-preload-248059" | sudo tee /etc/hostname
	I0328 01:03:31.869330 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-248059
	
	I0328 01:03:31.869368 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:31.872451 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.872817 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:31.872868 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:31.873159 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:31.873405 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.873632 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:31.873803 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:31.873982 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:31.874220 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:31.874268 1130827 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-248059' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-248059/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-248059' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:03:31.997509 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:03:31.997543 1130827 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18485-1069254/.minikube CaCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18485-1069254/.minikube}
	I0328 01:03:31.997565 1130827 buildroot.go:174] setting up certificates
	I0328 01:03:31.997573 1130827 provision.go:84] configureAuth start
	I0328 01:03:31.997583 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetMachineName
	I0328 01:03:31.997870 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:32.000739 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.001127 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.001162 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.001306 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.003571 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.003958 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.003988 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.004162 1130827 provision.go:143] copyHostCerts
	I0328 01:03:32.004246 1130827 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem, removing ...
	I0328 01:03:32.004261 1130827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem
	I0328 01:03:32.004329 1130827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/key.pem (1679 bytes)
	I0328 01:03:32.004442 1130827 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem, removing ...
	I0328 01:03:32.004454 1130827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem
	I0328 01:03:32.004486 1130827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.pem (1078 bytes)
	I0328 01:03:32.004562 1130827 exec_runner.go:144] found /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem, removing ...
	I0328 01:03:32.004572 1130827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem
	I0328 01:03:32.004602 1130827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18485-1069254/.minikube/cert.pem (1123 bytes)
	I0328 01:03:32.004667 1130827 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem org=jenkins.no-preload-248059 san=[127.0.0.1 192.168.61.107 localhost minikube no-preload-248059]
	I0328 01:03:32.206585 1130827 provision.go:177] copyRemoteCerts
	I0328 01:03:32.206657 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:03:32.206691 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.210170 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.210636 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.210676 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.210979 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.211187 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.211364 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.211564 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:32.305858 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:03:32.337654 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0328 01:03:32.368942 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0328 01:03:32.401639 1130827 provision.go:87] duration metric: took 404.051415ms to configureAuth
	I0328 01:03:32.401669 1130827 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:03:32.401936 1130827 config.go:182] Loaded profile config "no-preload-248059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0328 01:03:32.402025 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.404890 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.405352 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.405387 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.405588 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.405858 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.406091 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.406303 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.406510 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:32.406731 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:32.406759 1130827 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 01:03:32.697738 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 01:03:32.697768 1130827 machine.go:97] duration metric: took 1.074632092s to provisionDockerMachine
	I0328 01:03:32.697781 1130827 start.go:293] postStartSetup for "no-preload-248059" (driver="kvm2")
	I0328 01:03:32.697795 1130827 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:03:32.697812 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.698263 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:03:32.698298 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.701020 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.701421 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.701450 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.701609 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.701837 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.702010 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.702188 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:32.790670 1130827 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:03:32.795098 1130827 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:03:32.795131 1130827 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/addons for local assets ...
	I0328 01:03:32.795222 1130827 filesync.go:126] Scanning /home/jenkins/minikube-integration/18485-1069254/.minikube/files for local assets ...
	I0328 01:03:32.795297 1130827 filesync.go:149] local asset: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem -> 10765222.pem in /etc/ssl/certs
	I0328 01:03:32.795402 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:03:32.806276 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:32.832753 1130827 start.go:296] duration metric: took 134.954685ms for postStartSetup
	I0328 01:03:32.832801 1130827 fix.go:56] duration metric: took 19.16097847s for fixHost
	I0328 01:03:32.832825 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.835830 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.836199 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.836237 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.836472 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.836707 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.836949 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.837104 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.837339 1130827 main.go:141] libmachine: Using SSH client type: native
	I0328 01:03:32.837551 1130827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0328 01:03:32.837563 1130827 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0328 01:03:32.947440 1130827 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587812.922631180
	
	I0328 01:03:32.947477 1130827 fix.go:216] guest clock: 1711587812.922631180
	I0328 01:03:32.947486 1130827 fix.go:229] Guest: 2024-03-28 01:03:32.92263118 +0000 UTC Remote: 2024-03-28 01:03:32.832804811 +0000 UTC m=+356.715929719 (delta=89.826369ms)
	I0328 01:03:32.947507 1130827 fix.go:200] guest clock delta is within tolerance: 89.826369ms
	I0328 01:03:32.947512 1130827 start.go:83] releasing machines lock for "no-preload-248059", held for 19.275724068s
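
The fix.go lines above read the guest clock over SSH with "date +%s.%N" and accept the host when the delta against the local clock is within tolerance (89.826369ms here). Below is a minimal local Go sketch of that comparison; the function name and the one-second tolerance are assumptions for illustration, not minikube's actual values.

    // clockDeltaWithinTolerance sketches the guest/host clock comparison
    // logged by fix.go above. The tolerance passed in main is an assumption.
    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    func clockDeltaWithinTolerance(guestDate string, tolerance time.Duration) (time.Duration, bool, error) {
        // guestDate is the output of `date +%s.%N` on the guest,
        // e.g. "1711587812.922631180".
        secs, err := strconv.ParseFloat(guestDate, 64)
        if err != nil {
            return 0, false, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance, nil
    }

    func main() {
        delta, ok, err := clockDeltaWithinTolerance("1711587812.922631180", time.Second)
        fmt.Println(delta, ok, err)
    }
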
	I0328 01:03:32.947531 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.947805 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:32.950439 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.950814 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.950844 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.950992 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.951517 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.951707 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:03:32.951809 1130827 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:03:32.951852 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.951938 1130827 ssh_runner.go:195] Run: cat /version.json
	I0328 01:03:32.951964 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:03:32.954721 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955058 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955135 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.955165 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955314 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.955473 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.955512 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:32.955538 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:32.955622 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.955698 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:03:32.955809 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:32.955859 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:03:32.956001 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:03:32.956134 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:03:33.079381 1130827 ssh_runner.go:195] Run: systemctl --version
	I0328 01:03:33.086184 1130827 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 01:03:33.241799 1130827 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 01:03:33.248779 1130827 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:03:33.248893 1130827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:03:33.267944 1130827 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:03:33.267977 1130827 start.go:494] detecting cgroup driver to use...
	I0328 01:03:33.268082 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:03:33.286132 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:03:33.301676 1130827 docker.go:217] disabling cri-docker service (if available) ...
	I0328 01:03:33.301762 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 01:03:33.317202 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 01:03:33.333162 1130827 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 01:03:33.458738 1130827 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 01:03:33.608509 1130827 docker.go:233] disabling docker service ...
	I0328 01:03:33.608623 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 01:03:33.626616 1130827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 01:03:33.641798 1130827 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 01:03:33.808865 1130827 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 01:03:33.962636 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 01:03:33.978138 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:03:34.002323 1130827 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 01:03:34.002404 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.014483 1130827 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 01:03:34.014589 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.028647 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.041601 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.054993 1130827 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:03:34.066671 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.079389 1130827 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.099660 1130827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 01:03:34.112379 1130827 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:03:34.123050 1130827 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0328 01:03:34.123109 1130827 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0328 01:03:34.137132 1130827 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:03:34.147092 1130827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:34.282367 1130827 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 01:03:34.436510 1130827 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 01:03:34.436599 1130827 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 01:03:34.443019 1130827 start.go:562] Will wait 60s for crictl version
	I0328 01:03:34.443092 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:34.447740 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:03:34.488366 1130827 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0328 01:03:34.488469 1130827 ssh_runner.go:195] Run: crio --version
	I0328 01:03:34.520940 1130827 ssh_runner.go:195] Run: crio --version
	I0328 01:03:34.557953 1130827 out.go:177] * Preparing Kubernetes v1.30.0-beta.0 on CRI-O 1.29.1 ...
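
The crio.go steps above (01:03:34.002 through 01:03:34.099) rewrite /etc/crio/crio.conf.d/02-crio.conf in place with sed: the pause image, the cgroup manager, conmon_cgroup, and the default_sysctls block, followed by a daemon-reload and a crio restart. A minimal Go sketch of that kind of edit is shown below; the helper name and the use of os/exec on the local machine (instead of minikube's ssh_runner over SSH) are assumptions for illustration.

    // setCrioOption sketches how a single cri-o option is rewritten with sed,
    // mirroring the pause_image and cgroup_manager edits logged above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func setCrioOption(key, value string) error {
        expr := fmt.Sprintf(`s|^.*%s = .*$|%s = "%s"|`, key, key, value)
        cmd := exec.Command("sudo", "sed", "-i", expr, "/etc/crio/crio.conf.d/02-crio.conf")
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("sed failed: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        // The two edits visible in the log: pause image and cgroup driver.
        fmt.Println(setCrioOption("pause_image", "registry.k8s.io/pause:3.9"))
        fmt.Println(setCrioOption("cgroup_manager", "cgroupfs"))
    }
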
	I0328 01:03:30.568918 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:31.068097 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:31.568306 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:32.068345 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:32.568773 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:33.068072 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:33.568377 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:34.068141 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:34.568574 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:35.067986 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
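
The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" lines from process 1131323 are a fixed-interval poll waiting for the apiserver process to appear on the guest. A minimal local Go sketch of such a poll follows; the 500ms interval and one-minute timeout are assumptions, and minikube runs the same pgrep over SSH rather than locally.

    // pollForProcess sketches the polling loop visible in the log above:
    // keep running pgrep until it matches or the deadline passes.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func pollForProcess(pattern string, interval, timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 when at least one process matches the pattern.
            if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
                return true
            }
            time.Sleep(interval)
        }
        return false
    }

    func main() {
        fmt.Println(pollForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, time.Minute))
    }
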
	I0328 01:03:31.439199 1131600 pod_ready.go:102] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:33.439575 1131600 pod_ready.go:102] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:34.559624 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetIP
	I0328 01:03:34.563089 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:34.563549 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:03:34.563583 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:03:34.563943 1130827 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0328 01:03:34.570153 1130827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
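
The bash one-liner above refreshes the host.minikube.internal entry in /etc/hosts: it filters out any existing line for that name, appends a fresh "IP<TAB>name" entry, and copies the result back. A minimal Go sketch of the same rewrite is below; writing the local file directly (rather than staging to /tmp and using sudo cp over SSH) is an assumption for illustration.

    // upsertHostsEntry sketches the /etc/hosts rewrite performed by the
    // grep/echo/cp one-liner logged above.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func upsertHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            // Drop the old entry for this name (and stray blank lines).
            if line != "" && !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        // Mirrors the entry added in the log above.
        fmt.Println(upsertHostsEntry("/etc/hosts", "192.168.61.1", "host.minikube.internal"))
    }
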
	I0328 01:03:34.584566 1130827 kubeadm.go:877] updating cluster {Name:no-preload-248059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0-beta.0 ClusterName:no-preload-248059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:03:34.584723 1130827 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0328 01:03:34.584786 1130827 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 01:03:34.620182 1130827 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-beta.0". assuming images are not preloaded.
	I0328 01:03:34.620215 1130827 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-beta.0 registry.k8s.io/kube-controller-manager:v1.30.0-beta.0 registry.k8s.io/kube-scheduler:v1.30.0-beta.0 registry.k8s.io/kube-proxy:v1.30.0-beta.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0328 01:03:34.620297 1130827 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:34.620312 1130827 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:34.620333 1130827 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:34.620301 1130827 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.620374 1130827 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:34.620401 1130827 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0328 01:03:34.620481 1130827 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:34.620319 1130827 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:34.621997 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.622009 1130827 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:34.622009 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:34.622052 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:34.621997 1130827 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:34.622115 1130827 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:34.621996 1130827 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:34.622438 1130827 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0328 01:03:34.832761 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.849045 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0328 01:03:34.868049 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:34.883941 1130827 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-beta.0" does not exist at hash "3f2e573c9528d6ccab780899b4b39b75a17f00250f31ec462fccb116d45befa8" in container runtime
	I0328 01:03:34.883988 1130827 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:34.884047 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:34.884972 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:34.887551 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:34.899677 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:34.904772 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:35.045850 1130827 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0328 01:03:35.045906 1130827 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:35.045944 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.045959 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0328 01:03:35.064862 1130827 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" does not exist at hash "c2da1dd389fe0ec92e0cd9e98f8def82c47e8e08ab27041cd23683c60aa1dcaa" in container runtime
	I0328 01:03:35.064908 1130827 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:35.064959 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.066700 1130827 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" does not exist at hash "f68bf73ee2fbd4ea8b58c2038ce18d4e06cb59833c44dda7435b586add021841" in container runtime
	I0328 01:03:35.066753 1130827 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:35.066820 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.097425 1130827 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" does not exist at hash "746ac2129b574b8743dca05f512b7e097235ca7229b75100a38ec9cdb23454ac" in container runtime
	I0328 01:03:35.097479 1130827 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:35.097546 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.097619 1130827 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0328 01:03:35.097667 1130827 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:35.097715 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:35.126977 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:03:35.126980 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.127020 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0328 01:03:35.127084 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0328 01:03:35.127090 1130827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.127082 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0328 01:03:35.127161 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0328 01:03:35.264395 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:35.264499 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0 (exists)
	I0328 01:03:35.264534 1130827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:35.264543 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.264506 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0328 01:03:35.264590 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0
	I0328 01:03:35.264631 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:35.264652 1130827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0328 01:03:35.264516 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:35.264584 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0328 01:03:35.264717 1130827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:35.264728 1130827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:35.264768 1130827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0328 01:03:35.269734 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0328 01:03:35.277344 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0 (exists)
	I0328 01:03:35.277580 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0328 01:03:35.279792 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0 (exists)
	I0328 01:03:35.280423 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0 (exists)
	I0328 01:03:35.535980 1130827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
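
The cache_images sequence above follows one pattern per image: inspect the image in the container runtime, remove any stale copy with crictl, skip the tarball copy when it already exists under /var/lib/minikube/images, and load it with podman. Below is a minimal Go sketch of the "inspect, then load if missing" step; running podman locally via os/exec (instead of through minikube's ssh_runner) is an assumption for illustration.

    // ensureCachedImage sketches the cache_images flow in the log: check
    // whether the runtime already has the image, otherwise load the tarball.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func ensureCachedImage(image, tarball string) error {
        // `podman image inspect` exits non-zero when the image is absent.
        if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
            return nil // already present, nothing to load
        }
        out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
        if err != nil {
            return fmt.Errorf("podman load failed: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        fmt.Println(ensureCachedImage(
            "registry.k8s.io/etcd:3.5.12-0",
            "/var/lib/minikube/images/etcd_3.5.12-0"))
    }
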
	I0328 01:03:33.413451 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:35.414017 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:37.913609 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:35.568345 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:36.068227 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:36.568528 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:37.068834 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:37.568407 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:38.068142 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:38.568732 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:39.068094 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:39.568799 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:40.068973 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:35.940767 1131600 pod_ready.go:102] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:37.440919 1131600 pod_ready.go:92] pod "coredns-76f75df574-79cdj" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:37.440949 1131600 pod_ready.go:81] duration metric: took 8.008542386s for pod "coredns-76f75df574-79cdj" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:37.440963 1131600 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:39.452822 1131600 pod_ready.go:102] pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:40.467937 1131600 pod_ready.go:92] pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.467973 1131600 pod_ready.go:81] duration metric: took 3.027001179s for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.467987 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.491342 1131600 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.491373 1131600 pod_ready.go:81] duration metric: took 23.375914ms for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.491387 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.511379 1131600 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.511414 1131600 pod_ready.go:81] duration metric: took 20.018124ms for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.511430 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d776v" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.526689 1131600 pod_ready.go:92] pod "kube-proxy-d776v" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:40.526724 1131600 pod_ready.go:81] duration metric: took 15.28424ms for pod "kube-proxy-d776v" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:40.526738 1131600 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
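
The pod_ready.go lines from process 1131600 fetch each control-plane pod in kube-system and report whether its Ready condition is True, retrying until the per-pod timeout. A minimal client-go sketch of that check follows; the kubeconfig path and the surrounding polling caller are assumptions for illustration.

    // isPodReady sketches the pod_ready checks in the log above: fetch the
    // pod and inspect its Ready condition.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func isPodReady(clientset *kubernetes.Clientset, namespace, name string) (bool, error) {
        pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ready, err := isPodReady(clientset, "kube-system", "etcd-default-k8s-diff-port-283961")
        fmt.Println(ready, err)
    }
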
	I0328 01:03:37.431690 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-beta.0: (2.167073369s)
	I0328 01:03:37.431729 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 from cache
	I0328 01:03:37.431755 1130827 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0328 01:03:37.431764 1130827 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.895749302s)
	I0328 01:03:37.431805 1130827 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0328 01:03:37.431811 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0328 01:03:37.431837 1130827 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:37.431870 1130827 ssh_runner.go:195] Run: which crictl
	I0328 01:03:39.913936 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:42.412656 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:40.568441 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:41.068790 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:41.568919 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:42.068166 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:42.568012 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:43.068027 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:43.568916 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:44.067940 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:44.568074 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:45.068786 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:42.535179 1131600 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:44.034128 1131600 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:44.034164 1131600 pod_ready.go:81] duration metric: took 3.507415677s for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:44.034175 1131600 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:41.523268 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.091420228s)
	I0328 01:03:41.523305 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0328 01:03:41.523330 1130827 ssh_runner.go:235] Completed: which crictl: (4.091431875s)
	I0328 01:03:41.523345 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:41.523412 1130827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:03:41.523445 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0
	I0328 01:03:41.567312 1130827 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0328 01:03:41.567455 1130827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0328 01:03:44.336954 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-beta.0: (2.813479223s)
	I0328 01:03:44.336991 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 from cache
	I0328 01:03:44.336994 1130827 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.769509386s)
	I0328 01:03:44.337020 1130827 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0328 01:03:44.337035 1130827 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0328 01:03:44.337080 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0328 01:03:44.414767 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:46.415110 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:45.568662 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:46.068299 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:46.568793 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:47.068929 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:47.568250 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:48.068910 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:48.568138 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:49.068128 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:49.568153 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:50.068075 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:46.042489 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:48.541049 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:50.547355 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:46.297705 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.960592772s)
	I0328 01:03:46.297744 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0328 01:03:46.297776 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:46.297828 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0
	I0328 01:03:47.769522 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-beta.0: (1.471661236s)
	I0328 01:03:47.769569 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 from cache
	I0328 01:03:47.769602 1130827 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:47.769656 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0
	I0328 01:03:50.231843 1130827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-beta.0: (2.462162757s)
	I0328 01:03:50.231876 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 from cache
	I0328 01:03:50.231902 1130827 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0328 01:03:50.231956 1130827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0328 01:03:48.913184 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:51.412474 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:50.568929 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:51.068812 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:51.568899 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:52.068890 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:52.568751 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:53.068406 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:53.568466 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.068039 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.568745 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:55.068690 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:53.041197 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:55.041977 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:51.188382 1130827 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0328 01:03:51.188441 1130827 cache_images.go:123] Successfully loaded all cached images
	I0328 01:03:51.188448 1130827 cache_images.go:92] duration metric: took 16.568214969s to LoadCachedImages
	I0328 01:03:51.188464 1130827 kubeadm.go:928] updating node { 192.168.61.107 8443 v1.30.0-beta.0 crio true true} ...
	I0328 01:03:51.188628 1130827 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-248059 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-248059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:03:51.188710 1130827 ssh_runner.go:195] Run: crio config
	I0328 01:03:51.237071 1130827 cni.go:84] Creating CNI manager for ""
	I0328 01:03:51.237099 1130827 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:51.237109 1130827 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:03:51.237131 1130827 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.107 APIServerPort:8443 KubernetesVersion:v1.30.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-248059 NodeName:no-preload-248059 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 01:03:51.237263 1130827 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-248059"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 01:03:51.237330 1130827 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-beta.0
	I0328 01:03:51.248044 1130827 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:03:51.248113 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:03:51.257854 1130827 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0328 01:03:51.276307 1130827 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0328 01:03:51.294698 1130827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0328 01:03:51.313297 1130827 ssh_runner.go:195] Run: grep 192.168.61.107	control-plane.minikube.internal$ /etc/hosts
	I0328 01:03:51.317668 1130827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:03:51.330478 1130827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:03:51.457500 1130827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:03:51.484463 1130827 certs.go:68] Setting up /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059 for IP: 192.168.61.107
	I0328 01:03:51.484493 1130827 certs.go:194] generating shared ca certs ...
	I0328 01:03:51.484518 1130827 certs.go:226] acquiring lock for ca certs: {Name:mkf4dec8f33bbf51de6ed3aabdf175c7fd744ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:03:51.484718 1130827 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key
	I0328 01:03:51.484768 1130827 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key
	I0328 01:03:51.484781 1130827 certs.go:256] generating profile certs ...
	I0328 01:03:51.484910 1130827 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/client.key
	I0328 01:03:51.484989 1130827 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/apiserver.key.85d037b2
	I0328 01:03:51.485040 1130827 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/proxy-client.key
	I0328 01:03:51.485196 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem (1338 bytes)
	W0328 01:03:51.485243 1130827 certs.go:480] ignoring /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522_empty.pem, impossibly tiny 0 bytes
	I0328 01:03:51.485257 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca-key.pem (1675 bytes)
	I0328 01:03:51.485292 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/ca.pem (1078 bytes)
	I0328 01:03:51.485327 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/cert.pem (1123 bytes)
	I0328 01:03:51.485357 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/key.pem (1679 bytes)
	I0328 01:03:51.485416 1130827 certs.go:484] found cert: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem (1708 bytes)
	I0328 01:03:51.486614 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:03:51.537554 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0328 01:03:51.587256 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:03:51.620264 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 01:03:51.652100 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0328 01:03:51.694388 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:03:51.720913 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:03:51.747141 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/no-preload-248059/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0328 01:03:51.776370 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/certs/1076522.pem --> /usr/share/ca-certificates/1076522.pem (1338 bytes)
	I0328 01:03:51.803168 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/ssl/certs/10765222.pem --> /usr/share/ca-certificates/10765222.pem (1708 bytes)
	I0328 01:03:51.831138 1130827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:03:51.857272 1130827 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:03:51.876070 1130827 ssh_runner.go:195] Run: openssl version
	I0328 01:03:51.882197 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1076522.pem && ln -fs /usr/share/ca-certificates/1076522.pem /etc/ssl/certs/1076522.pem"
	I0328 01:03:51.893560 1130827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1076522.pem
	I0328 01:03:51.898293 1130827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:42 /usr/share/ca-certificates/1076522.pem
	I0328 01:03:51.898361 1130827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1076522.pem
	I0328 01:03:51.904549 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1076522.pem /etc/ssl/certs/51391683.0"
	I0328 01:03:51.918175 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10765222.pem && ln -fs /usr/share/ca-certificates/10765222.pem /etc/ssl/certs/10765222.pem"
	I0328 01:03:51.930387 1130827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10765222.pem
	I0328 01:03:51.935610 1130827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:42 /usr/share/ca-certificates/10765222.pem
	I0328 01:03:51.935691 1130827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10765222.pem
	I0328 01:03:51.942127 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10765222.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:03:51.954252 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:03:51.966727 1130827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:51.971742 1130827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:51.971810 1130827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:03:51.978082 1130827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 01:03:51.992233 1130827 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:03:51.997556 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:03:52.004178 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:03:52.010666 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:03:52.017076 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:03:52.023334 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:03:52.029980 1130827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
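
The certs.go steps above verify each existing certificate with "openssl x509 -noout -checkend 86400", i.e. they confirm the cert stays valid for at least another 24 hours before reusing it. A minimal Go sketch of the same check with crypto/x509 is below; the certificate path used in main is taken from the log, but the function name is an assumption for illustration.

    // certValidFor sketches the `openssl x509 -checkend 86400` checks above:
    // parse a PEM certificate and confirm it remains valid for the duration d.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func certValidFor(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(ok, err)
    }
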
	I0328 01:03:52.036395 1130827 kubeadm.go:391] StartCluster: {Name:no-preload-248059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-248059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:03:52.036483 1130827 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 01:03:52.036539 1130827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:52.080486 1130827 cri.go:89] found id: ""
	I0328 01:03:52.080580 1130827 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 01:03:52.094552 1130827 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:03:52.094583 1130827 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:03:52.094599 1130827 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:03:52.094650 1130827 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:03:52.107008 1130827 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:03:52.108200 1130827 kubeconfig.go:125] found "no-preload-248059" server: "https://192.168.61.107:8443"
	I0328 01:03:52.110536 1130827 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:03:52.122998 1130827 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.107
	I0328 01:03:52.123044 1130827 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:03:52.123090 1130827 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0328 01:03:52.123170 1130827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 01:03:52.165568 1130827 cri.go:89] found id: ""
	I0328 01:03:52.165666 1130827 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:03:52.183930 1130827 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:03:52.195188 1130827 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:03:52.195215 1130827 kubeadm.go:156] found existing configuration files:
	
	I0328 01:03:52.195271 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:03:52.205872 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:03:52.205932 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:03:52.216481 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:03:52.226719 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:03:52.226787 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:03:52.238852 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:03:52.250272 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:03:52.250341 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:03:52.262474 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:03:52.273981 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:03:52.274059 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:03:52.286028 1130827 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:03:52.297016 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:52.406981 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.521529 1130827 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.114505514s)
	I0328 01:03:53.521569 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.735728 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.808590 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:53.931165 1130827 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:03:53.931281 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.432358 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.931653 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:54.948811 1130827 api_server.go:72] duration metric: took 1.017647613s to wait for apiserver process to appear ...
	I0328 01:03:54.948843 1130827 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:03:54.948871 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:54.949490 1130827 api_server.go:269] stopped: https://192.168.61.107:8443/healthz: Get "https://192.168.61.107:8443/healthz": dial tcp 192.168.61.107:8443: connect: connection refused
	I0328 01:03:55.449050 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:53.413775 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:55.914095 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:57.515811 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:03:57.515852 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:03:57.515872 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:57.564527 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:03:57.564560 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:03:57.949780 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:57.955515 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0328 01:03:57.955565 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0328 01:03:58.449103 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:58.456345 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0328 01:03:58.456384 1130827 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0328 01:03:58.949575 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:03:58.954466 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0328 01:03:58.961213 1130827 api_server.go:141] control plane version: v1.30.0-beta.0
	I0328 01:03:58.961244 1130827 api_server.go:131] duration metric: took 4.012391589s to wait for apiserver health ...
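	The healthz wait above polls https://192.168.61.107:8443/healthz roughly every 500ms, treating the early 403 (anonymous RBAC not yet bootstrapped) and 500 (post-start hooks still failing) responses as transient, until the endpoint returns 200. A rough Go sketch of such a probe, assuming a self-signed apiserver certificate and therefore skipping TLS verification; the endpoint and timeout are placeholders, not minikube's actual implementation:

	// healthzpoll.go: poll an apiserver /healthz endpoint until it returns 200.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The bootstrapping apiserver serves a self-signed certificate, so this
			// illustrative probe skips verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200
				}
			}
			// Retry cadence comparable to the ~500ms interval seen in the log.
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy within %s", timeout)
	}

	func main() {
		if err := waitHealthy("https://192.168.61.107:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}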
	I0328 01:03:58.961256 1130827 cni.go:84] Creating CNI manager for ""
	I0328 01:03:58.961265 1130827 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:03:58.963147 1130827 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:03:55.568378 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:56.068253 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:56.568989 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:57.068709 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:57.569038 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:58.068236 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:58.568386 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:59.068971 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:59.568858 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:00.067964 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:03:57.043266 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:59.541626 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:03:58.964446 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:03:58.979425 1130827 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:03:59.042826 1130827 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:03:59.060388 1130827 system_pods.go:59] 8 kube-system pods found
	I0328 01:03:59.060429 1130827 system_pods.go:61] "coredns-7db6d8ff4d-86n4s" [71402ca8-dfa7-4caf-a422-6de9f24bf9dc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:03:59.060439 1130827 system_pods.go:61] "etcd-no-preload-248059" [954b6886-b84f-4d94-bbce-7e520142eb4b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 01:03:59.060451 1130827 system_pods.go:61] "kube-apiserver-no-preload-248059" [2d3caabe-27c2-44e7-8f52-76e03f262e2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 01:03:59.060462 1130827 system_pods.go:61] "kube-controller-manager-no-preload-248059" [30b9f4aa-c9a7-4d91-8e4d-35ad32f40425] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 01:03:59.060472 1130827 system_pods.go:61] "kube-proxy-b9qpb" [7ab4cca8-0ba2-4177-84cd-c6ac045930fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:03:59.060481 1130827 system_pods.go:61] "kube-scheduler-no-preload-248059" [4d9e45e3-d990-40d4-a4be-8384c39eb9b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 01:03:59.060493 1130827 system_pods.go:61] "metrics-server-569cc877fc-cvnrj" [063a47ac-9ceb-4521-9dde-aca02ec5e0d8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:03:59.060508 1130827 system_pods.go:61] "storage-provisioner" [0a0eb2d3-a426-4b76-8009-1a0a0e0312bb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:03:59.060518 1130827 system_pods.go:74] duration metric: took 17.666067ms to wait for pod list to return data ...
	I0328 01:03:59.060533 1130827 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:03:59.065018 1130827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:03:59.065054 1130827 node_conditions.go:123] node cpu capacity is 2
	I0328 01:03:59.065071 1130827 node_conditions.go:105] duration metric: took 4.531253ms to run NodePressure ...
	I0328 01:03:59.065097 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:03:59.454609 1130827 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 01:03:59.459707 1130827 kubeadm.go:733] kubelet initialised
	I0328 01:03:59.459730 1130827 kubeadm.go:734] duration metric: took 5.09757ms waiting for restarted kubelet to initialise ...
	I0328 01:03:59.459739 1130827 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:03:59.465352 1130827 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.471020 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.471054 1130827 pod_ready.go:81] duration metric: took 5.676291ms for pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.471067 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "coredns-7db6d8ff4d-86n4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.471075 1130827 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.476393 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "etcd-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.476421 1130827 pod_ready.go:81] duration metric: took 5.333391ms for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.476430 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "etcd-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.476436 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.485889 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "kube-apiserver-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.485924 1130827 pod_ready.go:81] duration metric: took 9.481204ms for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.485937 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "kube-apiserver-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.485957 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.491064 1130827 pod_ready.go:97] node "no-preload-248059" hosting pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.491095 1130827 pod_ready.go:81] duration metric: took 5.125981ms for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	E0328 01:03:59.491107 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-248059" hosting pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-248059" has status "Ready":"False"
	I0328 01:03:59.491116 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-b9qpb" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.858724 1130827 pod_ready.go:92] pod "kube-proxy-b9qpb" in "kube-system" namespace has status "Ready":"True"
	I0328 01:03:59.858753 1130827 pod_ready.go:81] duration metric: took 367.628034ms for pod "kube-proxy-b9qpb" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:59.858764 1130827 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:03:58.413911 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:00.913297 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:02.913414 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
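	The pod_ready.go waits above poll each system-critical pod until its Ready condition reports True, skipping (with the warnings shown) pods hosted on a node that is itself not yet Ready. A small client-go sketch of the same condition check; the kubeconfig path, namespace, and pod name are placeholders rather than values taken from this run:

	// podready.go: check whether a pod's Ready condition is True.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func isPodReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-no-preload-248059", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
	}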
	I0328 01:04:00.568622 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:01.067943 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:01.567964 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:02.068537 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:02.568772 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:03.068458 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:03.568943 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:04.068085 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:04.068176 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:04.112601 1131323 cri.go:89] found id: ""
	I0328 01:04:04.112631 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.112642 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:04.112650 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:04.112726 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:04.151837 1131323 cri.go:89] found id: ""
	I0328 01:04:04.151873 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.151885 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:04.151894 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:04.151965 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:04.193411 1131323 cri.go:89] found id: ""
	I0328 01:04:04.193451 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.193463 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:04.193473 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:04.193545 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:04.239623 1131323 cri.go:89] found id: ""
	I0328 01:04:04.239652 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.239662 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:04.239673 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:04.239732 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:04.279561 1131323 cri.go:89] found id: ""
	I0328 01:04:04.279600 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.279615 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:04.279627 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:04.279708 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:04.318680 1131323 cri.go:89] found id: ""
	I0328 01:04:04.318710 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.318722 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:04.318731 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:04.318797 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:04.356486 1131323 cri.go:89] found id: ""
	I0328 01:04:04.356514 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.356523 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:04.356530 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:04.356586 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:04.394281 1131323 cri.go:89] found id: ""
	I0328 01:04:04.394319 1131323 logs.go:276] 0 containers: []
	W0328 01:04:04.394334 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:04.394348 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:04.394364 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:04.458688 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:04.458729 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:04.501399 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:04.501440 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:04.556183 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:04.556225 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:04.571392 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:04.571427 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:04.709967 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:02.041555 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:04.541464 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:01.866183 1130827 pod_ready.go:102] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:03.868706 1130827 pod_ready.go:102] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:04.915667 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:07.412548 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:07.210550 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:07.224274 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:07.224345 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:07.262604 1131323 cri.go:89] found id: ""
	I0328 01:04:07.262640 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.262665 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:07.262674 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:07.262763 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:07.296868 1131323 cri.go:89] found id: ""
	I0328 01:04:07.296907 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.296918 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:07.296926 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:07.296992 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:07.333110 1131323 cri.go:89] found id: ""
	I0328 01:04:07.333149 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.333162 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:07.333171 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:07.333240 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:07.371138 1131323 cri.go:89] found id: ""
	I0328 01:04:07.371168 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.371186 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:07.371195 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:07.371259 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:07.412197 1131323 cri.go:89] found id: ""
	I0328 01:04:07.412230 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.412242 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:07.412251 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:07.412331 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:07.457021 1131323 cri.go:89] found id: ""
	I0328 01:04:07.457052 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.457070 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:07.457080 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:07.457153 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:07.517996 1131323 cri.go:89] found id: ""
	I0328 01:04:07.518026 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.518034 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:07.518040 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:07.518111 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:07.556829 1131323 cri.go:89] found id: ""
	I0328 01:04:07.556856 1131323 logs.go:276] 0 containers: []
	W0328 01:04:07.556865 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:07.556875 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:07.556890 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:07.572234 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:07.572270 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:07.648615 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:07.648641 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:07.648658 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:07.719617 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:07.719665 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:07.764053 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:07.764097 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:10.319480 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:06.542160 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:08.550725 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:06.366150 1130827 pod_ready.go:102] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:07.365200 1130827 pod_ready.go:92] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:04:07.365233 1130827 pod_ready.go:81] duration metric: took 7.506461201s for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:04:07.365256 1130827 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace to be "Ready" ...
	I0328 01:04:09.373694 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:09.413378 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:11.913400 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:10.334347 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:10.335893 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:10.375231 1131323 cri.go:89] found id: ""
	I0328 01:04:10.375263 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.375274 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:10.375281 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:10.375353 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:10.413652 1131323 cri.go:89] found id: ""
	I0328 01:04:10.413706 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.413726 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:10.413736 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:10.413805 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:10.449546 1131323 cri.go:89] found id: ""
	I0328 01:04:10.449588 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.449597 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:10.449604 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:10.449658 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:10.487518 1131323 cri.go:89] found id: ""
	I0328 01:04:10.487556 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.487570 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:10.487579 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:10.487663 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:10.525088 1131323 cri.go:89] found id: ""
	I0328 01:04:10.525124 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.525137 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:10.525146 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:10.525213 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:10.567177 1131323 cri.go:89] found id: ""
	I0328 01:04:10.567209 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.567221 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:10.567231 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:10.567302 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:10.609440 1131323 cri.go:89] found id: ""
	I0328 01:04:10.609474 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.609485 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:10.609492 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:10.609549 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:10.652466 1131323 cri.go:89] found id: ""
	I0328 01:04:10.652502 1131323 logs.go:276] 0 containers: []
	W0328 01:04:10.652516 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:10.652529 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:10.652546 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:10.737406 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:10.737451 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:10.786955 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:10.786991 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:10.843072 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:10.843114 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:10.857209 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:10.857244 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:10.950885 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:13.451542 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:13.465833 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:13.465924 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:13.503353 1131323 cri.go:89] found id: ""
	I0328 01:04:13.503386 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.503398 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:13.503407 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:13.503474 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:13.543175 1131323 cri.go:89] found id: ""
	I0328 01:04:13.543208 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.543220 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:13.543229 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:13.543287 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:13.580796 1131323 cri.go:89] found id: ""
	I0328 01:04:13.580829 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.580840 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:13.580848 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:13.580900 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:13.619483 1131323 cri.go:89] found id: ""
	I0328 01:04:13.619516 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.619529 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:13.619539 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:13.619596 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:13.654651 1131323 cri.go:89] found id: ""
	I0328 01:04:13.654683 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.654697 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:13.654705 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:13.654774 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:13.691763 1131323 cri.go:89] found id: ""
	I0328 01:04:13.691794 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.691805 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:13.691813 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:13.691881 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:13.730580 1131323 cri.go:89] found id: ""
	I0328 01:04:13.730614 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.730627 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:13.730635 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:13.730694 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:13.767802 1131323 cri.go:89] found id: ""
	I0328 01:04:13.767834 1131323 logs.go:276] 0 containers: []
	W0328 01:04:13.767848 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:13.767860 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:13.767876 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:13.815612 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:13.815653 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:13.870945 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:13.870991 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:13.891456 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:13.891506 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:14.022124 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:14.022163 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:14.022187 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:11.041196 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:13.044490 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:15.541942 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:11.873574 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:13.875251 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:14.412081 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:16.412837 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:16.604087 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:16.618872 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:16.618971 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:16.665628 1131323 cri.go:89] found id: ""
	I0328 01:04:16.665661 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.665675 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:16.665683 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:16.665780 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:16.703727 1131323 cri.go:89] found id: ""
	I0328 01:04:16.703758 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.703768 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:16.703775 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:16.703835 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:16.741425 1131323 cri.go:89] found id: ""
	I0328 01:04:16.741455 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.741464 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:16.741470 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:16.741524 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:16.782333 1131323 cri.go:89] found id: ""
	I0328 01:04:16.782373 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.782387 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:16.782398 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:16.782469 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:16.820321 1131323 cri.go:89] found id: ""
	I0328 01:04:16.820355 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.820364 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:16.820372 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:16.820429 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:16.861091 1131323 cri.go:89] found id: ""
	I0328 01:04:16.861130 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.861144 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:16.861154 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:16.861226 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:16.901347 1131323 cri.go:89] found id: ""
	I0328 01:04:16.901394 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.901408 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:16.901418 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:16.901491 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:16.944027 1131323 cri.go:89] found id: ""
	I0328 01:04:16.944067 1131323 logs.go:276] 0 containers: []
	W0328 01:04:16.944080 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:16.944093 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:16.944110 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:16.959104 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:16.959151 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:17.035432 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:17.035464 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:17.035480 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:17.116236 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:17.116276 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:17.159321 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:17.159370 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:19.711326 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:19.726016 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:19.726094 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:19.776639 1131323 cri.go:89] found id: ""
	I0328 01:04:19.776676 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.776690 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:19.776700 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:19.776782 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:19.817849 1131323 cri.go:89] found id: ""
	I0328 01:04:19.817887 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.817897 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:19.817904 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:19.817981 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:19.855055 1131323 cri.go:89] found id: ""
	I0328 01:04:19.855089 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.855102 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:19.855110 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:19.855177 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:19.895296 1131323 cri.go:89] found id: ""
	I0328 01:04:19.895332 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.895346 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:19.895354 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:19.895414 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:19.930936 1131323 cri.go:89] found id: ""
	I0328 01:04:19.930968 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.930980 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:19.930989 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:19.931067 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:19.968573 1131323 cri.go:89] found id: ""
	I0328 01:04:19.968610 1131323 logs.go:276] 0 containers: []
	W0328 01:04:19.968623 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:19.968632 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:19.968693 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:20.006130 1131323 cri.go:89] found id: ""
	I0328 01:04:20.006180 1131323 logs.go:276] 0 containers: []
	W0328 01:04:20.006195 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:20.006203 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:20.006304 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:20.043646 1131323 cri.go:89] found id: ""
	I0328 01:04:20.043678 1131323 logs.go:276] 0 containers: []
	W0328 01:04:20.043689 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:20.043701 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:20.043717 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:20.058728 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:20.058761 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:20.136392 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:20.136417 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:20.136431 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:20.214971 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:20.215015 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:20.255002 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:20.255047 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:18.041868 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:20.542175 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:16.372600 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:18.373203 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:20.374228 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:18.913596 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:20.913978 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:22.914777 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:22.810078 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:22.824083 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:22.824169 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:22.862037 1131323 cri.go:89] found id: ""
	I0328 01:04:22.862066 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.862074 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:22.862081 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:22.862141 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:22.901625 1131323 cri.go:89] found id: ""
	I0328 01:04:22.901658 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.901670 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:22.901679 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:22.901752 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:22.938858 1131323 cri.go:89] found id: ""
	I0328 01:04:22.938891 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.938903 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:22.938912 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:22.938983 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:22.978781 1131323 cri.go:89] found id: ""
	I0328 01:04:22.978818 1131323 logs.go:276] 0 containers: []
	W0328 01:04:22.978829 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:22.978837 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:22.978910 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:23.016844 1131323 cri.go:89] found id: ""
	I0328 01:04:23.016882 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.016895 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:23.016904 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:23.016975 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:23.058456 1131323 cri.go:89] found id: ""
	I0328 01:04:23.058508 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.058522 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:23.058531 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:23.058604 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:23.099368 1131323 cri.go:89] found id: ""
	I0328 01:04:23.099399 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.099408 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:23.099420 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:23.099492 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:23.135593 1131323 cri.go:89] found id: ""
	I0328 01:04:23.135634 1131323 logs.go:276] 0 containers: []
	W0328 01:04:23.135653 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:23.135665 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:23.135679 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:23.191215 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:23.191260 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:23.206849 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:23.206884 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:23.289566 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:23.289596 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:23.289618 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:23.365429 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:23.365480 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:23.042312 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.541788 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:22.872233 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.373908 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.413591 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:27.912983 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:25.914883 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:25.929336 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:25.929415 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:25.969452 1131323 cri.go:89] found id: ""
	I0328 01:04:25.969485 1131323 logs.go:276] 0 containers: []
	W0328 01:04:25.969497 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:25.969506 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:25.969573 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:26.008978 1131323 cri.go:89] found id: ""
	I0328 01:04:26.009006 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.009015 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:26.009022 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:26.009075 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:26.051110 1131323 cri.go:89] found id: ""
	I0328 01:04:26.051138 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.051146 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:26.051153 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:26.051213 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:26.088231 1131323 cri.go:89] found id: ""
	I0328 01:04:26.088262 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.088271 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:26.088277 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:26.088342 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:26.125741 1131323 cri.go:89] found id: ""
	I0328 01:04:26.125782 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.125794 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:26.125800 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:26.125867 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:26.163367 1131323 cri.go:89] found id: ""
	I0328 01:04:26.163406 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.163417 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:26.163426 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:26.163503 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:26.202302 1131323 cri.go:89] found id: ""
	I0328 01:04:26.202340 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.202355 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:26.202364 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:26.202422 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:26.240880 1131323 cri.go:89] found id: ""
	I0328 01:04:26.240911 1131323 logs.go:276] 0 containers: []
	W0328 01:04:26.240921 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:26.240931 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:26.240943 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:26.283151 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:26.283180 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:26.341313 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:26.341350 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:26.356762 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:26.356791 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:26.428033 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:26.428054 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:26.428066 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:29.006332 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:29.020634 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:29.020745 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:29.060812 1131323 cri.go:89] found id: ""
	I0328 01:04:29.060843 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.060852 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:29.060859 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:29.060916 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:29.100110 1131323 cri.go:89] found id: ""
	I0328 01:04:29.100139 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.100149 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:29.100155 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:29.100212 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:29.140345 1131323 cri.go:89] found id: ""
	I0328 01:04:29.140384 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.140396 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:29.140404 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:29.140479 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:29.182415 1131323 cri.go:89] found id: ""
	I0328 01:04:29.182449 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.182459 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:29.182465 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:29.182533 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:29.225177 1131323 cri.go:89] found id: ""
	I0328 01:04:29.225214 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.225225 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:29.225233 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:29.225310 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:29.265437 1131323 cri.go:89] found id: ""
	I0328 01:04:29.265471 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.265485 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:29.265493 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:29.265556 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:29.301578 1131323 cri.go:89] found id: ""
	I0328 01:04:29.301617 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.301630 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:29.301639 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:29.301719 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:29.340816 1131323 cri.go:89] found id: ""
	I0328 01:04:29.340847 1131323 logs.go:276] 0 containers: []
	W0328 01:04:29.340856 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:29.340867 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:29.340880 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:29.384658 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:29.384687 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:29.439243 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:29.439285 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:29.456179 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:29.456211 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:29.534878 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:29.534906 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:29.534927 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:28.041463 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:30.042506 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:27.872489 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:30.371109 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:29.913856 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:32.415699 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:32.115798 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:32.130464 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:32.130560 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:32.168846 1131323 cri.go:89] found id: ""
	I0328 01:04:32.168877 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.168887 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:32.168894 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:32.168952 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:32.208590 1131323 cri.go:89] found id: ""
	I0328 01:04:32.208622 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.208632 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:32.208638 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:32.208694 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:32.247323 1131323 cri.go:89] found id: ""
	I0328 01:04:32.247362 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.247375 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:32.247384 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:32.247507 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:32.285260 1131323 cri.go:89] found id: ""
	I0328 01:04:32.285293 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.285312 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:32.285319 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:32.285395 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:32.326678 1131323 cri.go:89] found id: ""
	I0328 01:04:32.326712 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.326725 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:32.326740 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:32.326823 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:32.363375 1131323 cri.go:89] found id: ""
	I0328 01:04:32.363403 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.363412 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:32.363419 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:32.363473 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:32.401410 1131323 cri.go:89] found id: ""
	I0328 01:04:32.401449 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.401462 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:32.401470 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:32.401558 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:32.438645 1131323 cri.go:89] found id: ""
	I0328 01:04:32.438680 1131323 logs.go:276] 0 containers: []
	W0328 01:04:32.438691 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:32.438703 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:32.438718 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:32.488743 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:32.488786 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:32.503908 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:32.503944 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:32.577307 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:32.577333 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:32.577350 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:32.657787 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:32.657832 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:35.201151 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:35.215313 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:35.215383 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:35.253467 1131323 cri.go:89] found id: ""
	I0328 01:04:35.253504 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.253515 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:35.253522 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:35.253593 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:35.290218 1131323 cri.go:89] found id: ""
	I0328 01:04:35.290280 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.290292 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:35.290300 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:35.290378 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:35.330714 1131323 cri.go:89] found id: ""
	I0328 01:04:35.330749 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.330757 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:35.330764 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:35.330831 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:32.542071 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:34.544163 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:32.372100 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:34.872293 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:34.913212 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:37.411734 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:35.371524 1131323 cri.go:89] found id: ""
	I0328 01:04:35.371553 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.371570 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:35.371577 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:35.371630 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:35.411610 1131323 cri.go:89] found id: ""
	I0328 01:04:35.411638 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.411646 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:35.411652 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:35.411711 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:35.456709 1131323 cri.go:89] found id: ""
	I0328 01:04:35.456745 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.456758 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:35.456766 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:35.456836 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:35.492688 1131323 cri.go:89] found id: ""
	I0328 01:04:35.492719 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.492729 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:35.492755 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:35.492811 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:35.531205 1131323 cri.go:89] found id: ""
	I0328 01:04:35.531234 1131323 logs.go:276] 0 containers: []
	W0328 01:04:35.531243 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:35.531254 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:35.531266 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:35.611803 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:35.611845 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:35.653513 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:35.653551 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:35.708030 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:35.708075 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:35.724542 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:35.724576 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:35.798624 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:38.299312 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:38.314128 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:38.314213 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:38.357728 1131323 cri.go:89] found id: ""
	I0328 01:04:38.357761 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.357779 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:38.357786 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:38.357848 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:38.394512 1131323 cri.go:89] found id: ""
	I0328 01:04:38.394541 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.394549 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:38.394558 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:38.394618 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:38.434353 1131323 cri.go:89] found id: ""
	I0328 01:04:38.434380 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.434391 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:38.434399 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:38.434466 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:38.477662 1131323 cri.go:89] found id: ""
	I0328 01:04:38.477693 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.477703 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:38.477710 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:38.477763 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:38.515014 1131323 cri.go:89] found id: ""
	I0328 01:04:38.515044 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.515053 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:38.515060 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:38.515117 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:38.558865 1131323 cri.go:89] found id: ""
	I0328 01:04:38.558899 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.558911 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:38.558920 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:38.558982 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:38.600261 1131323 cri.go:89] found id: ""
	I0328 01:04:38.600290 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.600299 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:38.600306 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:38.600366 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:38.637131 1131323 cri.go:89] found id: ""
	I0328 01:04:38.637167 1131323 logs.go:276] 0 containers: []
	W0328 01:04:38.637179 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:38.637194 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:38.637218 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:38.716032 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:38.716058 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:38.716079 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:38.804534 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:38.804578 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:38.851781 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:38.851820 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:38.910091 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:38.910125 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:37.041273 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:39.541843 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:37.372262 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:39.372555 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:39.912953 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:42.412667 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:41.425801 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:41.441072 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:41.441168 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:41.482934 1131323 cri.go:89] found id: ""
	I0328 01:04:41.482962 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.482974 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:41.482983 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:41.483063 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:41.521762 1131323 cri.go:89] found id: ""
	I0328 01:04:41.521796 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.521810 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:41.521819 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:41.521931 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:41.560814 1131323 cri.go:89] found id: ""
	I0328 01:04:41.560847 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.560857 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:41.560864 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:41.560928 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:41.601158 1131323 cri.go:89] found id: ""
	I0328 01:04:41.601189 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.601199 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:41.601206 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:41.601271 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:41.638760 1131323 cri.go:89] found id: ""
	I0328 01:04:41.638789 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.638799 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:41.638806 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:41.638861 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:41.675235 1131323 cri.go:89] found id: ""
	I0328 01:04:41.675268 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.675278 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:41.675285 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:41.675341 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:41.712918 1131323 cri.go:89] found id: ""
	I0328 01:04:41.712957 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.712972 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:41.712983 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:41.713078 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:41.750552 1131323 cri.go:89] found id: ""
	I0328 01:04:41.750582 1131323 logs.go:276] 0 containers: []
	W0328 01:04:41.750591 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:41.750601 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:41.750617 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:41.811163 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:41.811204 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:41.826502 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:41.826547 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:41.900727 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:41.900759 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:41.900777 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:41.981731 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:41.981783 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:44.525845 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:44.542301 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:44.542389 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:44.584907 1131323 cri.go:89] found id: ""
	I0328 01:04:44.584936 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.584945 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:44.584952 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:44.585007 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:44.630465 1131323 cri.go:89] found id: ""
	I0328 01:04:44.630499 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.630511 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:44.630520 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:44.630588 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:44.669095 1131323 cri.go:89] found id: ""
	I0328 01:04:44.669131 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.669143 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:44.669152 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:44.669235 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:44.708445 1131323 cri.go:89] found id: ""
	I0328 01:04:44.708484 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.708495 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:44.708502 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:44.708570 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:44.747706 1131323 cri.go:89] found id: ""
	I0328 01:04:44.747744 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.747755 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:44.747762 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:44.747822 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:44.787768 1131323 cri.go:89] found id: ""
	I0328 01:04:44.787807 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.787821 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:44.787830 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:44.787899 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:44.829018 1131323 cri.go:89] found id: ""
	I0328 01:04:44.829049 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.829059 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:44.829066 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:44.829123 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:44.874334 1131323 cri.go:89] found id: ""
	I0328 01:04:44.874374 1131323 logs.go:276] 0 containers: []
	W0328 01:04:44.874383 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:44.874393 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:44.874405 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:44.921577 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:44.921619 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:44.976660 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:44.976713 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:44.991365 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:44.991400 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:45.067595 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:45.067630 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:45.067651 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:42.042736 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:44.543288 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:41.372902 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:43.872925 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:45.873163 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:44.913827 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:47.412342 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:47.647634 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:47.663581 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:47.663687 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:47.702889 1131323 cri.go:89] found id: ""
	I0328 01:04:47.702940 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.702954 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:47.702966 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:47.703043 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:47.744995 1131323 cri.go:89] found id: ""
	I0328 01:04:47.745027 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.745037 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:47.745044 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:47.745103 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:47.785518 1131323 cri.go:89] found id: ""
	I0328 01:04:47.785550 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.785562 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:47.785572 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:47.785645 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:47.831739 1131323 cri.go:89] found id: ""
	I0328 01:04:47.831771 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.831786 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:47.831794 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:47.831867 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:47.871864 1131323 cri.go:89] found id: ""
	I0328 01:04:47.871906 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.871918 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:47.871929 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:47.872008 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:47.907899 1131323 cri.go:89] found id: ""
	I0328 01:04:47.907934 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.907946 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:47.907955 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:47.908022 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:47.946073 1131323 cri.go:89] found id: ""
	I0328 01:04:47.946107 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.946118 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:47.946127 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:47.946223 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:47.986122 1131323 cri.go:89] found id: ""
	I0328 01:04:47.986154 1131323 logs.go:276] 0 containers: []
	W0328 01:04:47.986168 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:47.986182 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:47.986198 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:48.057234 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:48.057271 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:48.109881 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:48.109926 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:48.125154 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:48.125189 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:48.208295 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:48.208327 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:48.208345 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:47.041447 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:49.542203 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:48.371275 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:50.372057 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:49.413451 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:51.414465 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
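The interleaved pod_ready lines come from other test clusters running in parallel (processes 1131600, 1130827, 1130949), each polling a metrics-server pod that never reaches Ready. A sketch of an equivalent manual check with kubectl (pod name copied from the log; the kubectl context is assumed to point at the cluster under test):

    # one-shot readiness check
    kubectl -n kube-system get pod metrics-server-57f55c9bc5-swsxp \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # or block until Ready with a timeout, roughly what the test's wait loop does
    kubectl -n kube-system wait pod/metrics-server-57f55c9bc5-swsxp --for=condition=Ready --timeout=60s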
	I0328 01:04:50.785126 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:50.800000 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:50.800078 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:50.839883 1131323 cri.go:89] found id: ""
	I0328 01:04:50.839911 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.839920 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:50.839927 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:50.839983 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:50.879627 1131323 cri.go:89] found id: ""
	I0328 01:04:50.879654 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.879661 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:50.879668 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:50.879734 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:50.918392 1131323 cri.go:89] found id: ""
	I0328 01:04:50.918434 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.918446 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:50.918454 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:50.918517 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:50.957198 1131323 cri.go:89] found id: ""
	I0328 01:04:50.957234 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.957248 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:50.957257 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:50.957328 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:50.997389 1131323 cri.go:89] found id: ""
	I0328 01:04:50.997424 1131323 logs.go:276] 0 containers: []
	W0328 01:04:50.997438 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:50.997446 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:50.997513 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:51.040259 1131323 cri.go:89] found id: ""
	I0328 01:04:51.040296 1131323 logs.go:276] 0 containers: []
	W0328 01:04:51.040309 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:51.040318 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:51.040389 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:51.081824 1131323 cri.go:89] found id: ""
	I0328 01:04:51.081858 1131323 logs.go:276] 0 containers: []
	W0328 01:04:51.081868 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:51.081875 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:51.081942 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:51.119742 1131323 cri.go:89] found id: ""
	I0328 01:04:51.119783 1131323 logs.go:276] 0 containers: []
	W0328 01:04:51.119796 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:51.119810 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:51.119836 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:51.173486 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:51.173529 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:51.188532 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:51.188568 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:51.269181 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:51.269207 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:51.269226 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:51.349882 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:51.349936 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:53.893562 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:53.910104 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:53.910186 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:53.951333 1131323 cri.go:89] found id: ""
	I0328 01:04:53.951375 1131323 logs.go:276] 0 containers: []
	W0328 01:04:53.951388 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:53.951397 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:53.951472 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:53.992438 1131323 cri.go:89] found id: ""
	I0328 01:04:53.992474 1131323 logs.go:276] 0 containers: []
	W0328 01:04:53.992486 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:53.992493 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:53.992561 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:54.032934 1131323 cri.go:89] found id: ""
	I0328 01:04:54.032969 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.032982 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:54.032992 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:54.033061 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:54.074670 1131323 cri.go:89] found id: ""
	I0328 01:04:54.074707 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.074777 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:54.074801 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:54.074875 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:54.111527 1131323 cri.go:89] found id: ""
	I0328 01:04:54.111555 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.111566 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:54.111573 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:54.111658 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:54.151401 1131323 cri.go:89] found id: ""
	I0328 01:04:54.151428 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.151437 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:54.151443 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:54.151494 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:54.197997 1131323 cri.go:89] found id: ""
	I0328 01:04:54.198036 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.198048 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:54.198058 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:54.198135 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:54.234016 1131323 cri.go:89] found id: ""
	I0328 01:04:54.234049 1131323 logs.go:276] 0 containers: []
	W0328 01:04:54.234058 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:54.234068 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:54.234081 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:54.286118 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:54.286161 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:54.300489 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:54.300541 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:54.376949 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:54.376972 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:54.376988 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:54.463857 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:54.463901 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:04:52.041517 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:54.042088 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:52.875923 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:55.371823 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:53.912140 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:55.912329 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:57.026395 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:04:57.041270 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:04:57.041358 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:04:57.082380 1131323 cri.go:89] found id: ""
	I0328 01:04:57.082416 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.082428 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:04:57.082436 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:04:57.082503 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:04:57.121835 1131323 cri.go:89] found id: ""
	I0328 01:04:57.121870 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.121885 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:04:57.121894 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:04:57.121969 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:04:57.163688 1131323 cri.go:89] found id: ""
	I0328 01:04:57.163725 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.163737 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:04:57.163745 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:04:57.163819 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:57.212628 1131323 cri.go:89] found id: ""
	I0328 01:04:57.212666 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.212693 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:04:57.212703 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:04:57.212788 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:04:57.249196 1131323 cri.go:89] found id: ""
	I0328 01:04:57.249231 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.249244 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:04:57.249253 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:04:57.249318 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:04:57.286996 1131323 cri.go:89] found id: ""
	I0328 01:04:57.287031 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.287040 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:04:57.287047 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:04:57.287101 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:04:57.324523 1131323 cri.go:89] found id: ""
	I0328 01:04:57.324551 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.324560 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:04:57.324566 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:04:57.324627 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:04:57.363946 1131323 cri.go:89] found id: ""
	I0328 01:04:57.363984 1131323 logs.go:276] 0 containers: []
	W0328 01:04:57.363998 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:04:57.364012 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:04:57.364034 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:04:57.418300 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:04:57.418337 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:04:57.433214 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:04:57.433242 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:04:57.508623 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:04:57.508651 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:04:57.508665 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:04:57.586336 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:04:57.586377 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:00.129903 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:00.146829 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:00.146920 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:00.197823 1131323 cri.go:89] found id: ""
	I0328 01:05:00.197856 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.197865 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:00.197872 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:00.197930 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:00.257523 1131323 cri.go:89] found id: ""
	I0328 01:05:00.257561 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.257575 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:00.257584 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:00.257657 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:00.314511 1131323 cri.go:89] found id: ""
	I0328 01:05:00.314539 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.314549 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:00.314558 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:00.314610 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:04:56.042295 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:58.541684 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:00.543232 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:57.372451 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:59.372577 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:04:58.412203 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:00.412880 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:02.913222 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:00.351043 1131323 cri.go:89] found id: ""
	I0328 01:05:00.351076 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.351090 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:00.351098 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:00.351167 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:00.391477 1131323 cri.go:89] found id: ""
	I0328 01:05:00.391507 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.391519 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:00.391525 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:00.391595 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:00.436196 1131323 cri.go:89] found id: ""
	I0328 01:05:00.436230 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.436242 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:00.436249 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:00.436316 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:00.473389 1131323 cri.go:89] found id: ""
	I0328 01:05:00.473428 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.473441 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:00.473450 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:00.473523 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:00.508829 1131323 cri.go:89] found id: ""
	I0328 01:05:00.508866 1131323 logs.go:276] 0 containers: []
	W0328 01:05:00.508879 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:00.508901 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:00.508931 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:00.553709 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:00.553741 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:00.612679 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:00.612732 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:00.630908 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:00.630948 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:00.706984 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:00.707016 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:00.707034 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:03.287887 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:03.304679 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:03.304779 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:03.343579 1131323 cri.go:89] found id: ""
	I0328 01:05:03.343608 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.343618 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:03.343625 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:03.343677 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:03.387158 1131323 cri.go:89] found id: ""
	I0328 01:05:03.387192 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.387206 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:03.387224 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:03.387308 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:03.426622 1131323 cri.go:89] found id: ""
	I0328 01:05:03.426653 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.426663 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:03.426670 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:03.426724 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:03.468743 1131323 cri.go:89] found id: ""
	I0328 01:05:03.468780 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.468793 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:03.468801 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:03.468870 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:03.508518 1131323 cri.go:89] found id: ""
	I0328 01:05:03.508554 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.508566 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:03.508575 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:03.508653 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:03.548295 1131323 cri.go:89] found id: ""
	I0328 01:05:03.548331 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.548343 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:03.548353 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:03.548444 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:03.591561 1131323 cri.go:89] found id: ""
	I0328 01:05:03.591594 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.591607 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:03.591615 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:03.591670 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:03.635055 1131323 cri.go:89] found id: ""
	I0328 01:05:03.635086 1131323 logs.go:276] 0 containers: []
	W0328 01:05:03.635097 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:03.635109 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:03.635127 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:03.715639 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:03.715683 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:03.755888 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:03.755931 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:03.810128 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:03.810170 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:03.825197 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:03.825227 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:03.908589 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:03.043330 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:05.541544 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:01.372692 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:03.373747 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:05.871945 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:05.413583 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:07.912379 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:06.409060 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:06.424034 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:06.424119 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:06.461827 1131323 cri.go:89] found id: ""
	I0328 01:05:06.461888 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.461902 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:06.461911 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:06.461985 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:06.505006 1131323 cri.go:89] found id: ""
	I0328 01:05:06.505061 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.505078 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:06.505085 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:06.505145 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:06.542000 1131323 cri.go:89] found id: ""
	I0328 01:05:06.542033 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.542044 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:06.542052 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:06.542121 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:06.583725 1131323 cri.go:89] found id: ""
	I0328 01:05:06.583778 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.583800 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:06.583810 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:06.583887 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:06.620457 1131323 cri.go:89] found id: ""
	I0328 01:05:06.620501 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.620516 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:06.620524 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:06.620595 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:06.664380 1131323 cri.go:89] found id: ""
	I0328 01:05:06.664412 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.664425 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:06.664432 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:06.664502 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:06.701799 1131323 cri.go:89] found id: ""
	I0328 01:05:06.701850 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.701862 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:06.701870 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:06.701935 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:06.739899 1131323 cri.go:89] found id: ""
	I0328 01:05:06.739936 1131323 logs.go:276] 0 containers: []
	W0328 01:05:06.739948 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:06.739958 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:06.739973 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:06.814373 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:06.814404 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:06.814421 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:06.894331 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:06.894371 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:06.952912 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:06.952979 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:07.011851 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:07.011900 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:09.528068 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:09.545082 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:09.545167 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:09.586944 1131323 cri.go:89] found id: ""
	I0328 01:05:09.586983 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.586996 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:09.587004 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:09.587077 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:09.624153 1131323 cri.go:89] found id: ""
	I0328 01:05:09.624184 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.624192 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:09.624198 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:09.624256 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:09.661125 1131323 cri.go:89] found id: ""
	I0328 01:05:09.661160 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.661172 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:09.661182 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:09.661256 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:09.699865 1131323 cri.go:89] found id: ""
	I0328 01:05:09.699903 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.699916 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:09.699925 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:09.699992 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:09.737925 1131323 cri.go:89] found id: ""
	I0328 01:05:09.737958 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.737967 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:09.737973 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:09.738029 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:09.776906 1131323 cri.go:89] found id: ""
	I0328 01:05:09.776941 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.776950 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:09.776957 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:09.777021 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:09.815767 1131323 cri.go:89] found id: ""
	I0328 01:05:09.815794 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.815803 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:09.815809 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:09.815876 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:09.855880 1131323 cri.go:89] found id: ""
	I0328 01:05:09.855915 1131323 logs.go:276] 0 containers: []
	W0328 01:05:09.855928 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:09.855941 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:09.855958 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:09.918339 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:09.918376 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:09.932775 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:09.932810 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:10.011566 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:10.011594 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:10.011610 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:10.096057 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:10.096100 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:08.041230 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:10.041991 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:07.873367 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:10.372311 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:09.913349 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:12.412259 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:12.641999 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:12.655761 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:12.655843 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:12.697335 1131323 cri.go:89] found id: ""
	I0328 01:05:12.697369 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.697381 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:12.697390 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:12.697453 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:12.736482 1131323 cri.go:89] found id: ""
	I0328 01:05:12.736520 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.736534 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:12.736544 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:12.736617 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:12.771992 1131323 cri.go:89] found id: ""
	I0328 01:05:12.772034 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.772046 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:12.772055 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:12.772125 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:12.810738 1131323 cri.go:89] found id: ""
	I0328 01:05:12.810770 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.810779 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:12.810786 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:12.810837 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:12.848172 1131323 cri.go:89] found id: ""
	I0328 01:05:12.848209 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.848222 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:12.848230 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:12.848310 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:12.884660 1131323 cri.go:89] found id: ""
	I0328 01:05:12.884698 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.884710 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:12.884719 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:12.884794 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:12.926180 1131323 cri.go:89] found id: ""
	I0328 01:05:12.926209 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.926218 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:12.926244 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:12.926303 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:12.966938 1131323 cri.go:89] found id: ""
	I0328 01:05:12.966969 1131323 logs.go:276] 0 containers: []
	W0328 01:05:12.966983 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:12.966996 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:12.967014 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:13.018501 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:13.018541 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:13.033140 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:13.033175 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:13.108806 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:13.108832 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:13.108858 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:13.189198 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:13.189241 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:12.541088 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:15.041830 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:12.372413 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:14.372804 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:14.414059 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:16.912343 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:15.737415 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:15.752534 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:15.752614 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:15.789941 1131323 cri.go:89] found id: ""
	I0328 01:05:15.789974 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.789986 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:15.789994 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:15.790107 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:15.827688 1131323 cri.go:89] found id: ""
	I0328 01:05:15.827731 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.827745 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:15.827766 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:15.827831 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:15.867005 1131323 cri.go:89] found id: ""
	I0328 01:05:15.867041 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.867054 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:15.867064 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:15.867149 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:15.909924 1131323 cri.go:89] found id: ""
	I0328 01:05:15.910035 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.910055 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:15.910066 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:15.910139 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:15.950571 1131323 cri.go:89] found id: ""
	I0328 01:05:15.950606 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.950619 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:15.950632 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:15.950707 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:15.992557 1131323 cri.go:89] found id: ""
	I0328 01:05:15.992594 1131323 logs.go:276] 0 containers: []
	W0328 01:05:15.992605 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:15.992615 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:15.992687 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:16.032417 1131323 cri.go:89] found id: ""
	I0328 01:05:16.032458 1131323 logs.go:276] 0 containers: []
	W0328 01:05:16.032473 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:16.032482 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:16.032559 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:16.071399 1131323 cri.go:89] found id: ""
	I0328 01:05:16.071433 1131323 logs.go:276] 0 containers: []
	W0328 01:05:16.071445 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:16.071459 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:16.071481 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:16.147078 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:16.147113 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:16.147131 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:16.223828 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:16.223870 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:16.269377 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:16.269409 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:16.318545 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:16.318584 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:18.836044 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:18.851138 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:18.851231 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:18.887223 1131323 cri.go:89] found id: ""
	I0328 01:05:18.887260 1131323 logs.go:276] 0 containers: []
	W0328 01:05:18.887273 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:18.887283 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:18.887354 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:18.928652 1131323 cri.go:89] found id: ""
	I0328 01:05:18.928682 1131323 logs.go:276] 0 containers: []
	W0328 01:05:18.928692 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:18.928698 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:18.928756 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:18.968519 1131323 cri.go:89] found id: ""
	I0328 01:05:18.968555 1131323 logs.go:276] 0 containers: []
	W0328 01:05:18.968567 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:18.968575 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:18.968646 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:19.010939 1131323 cri.go:89] found id: ""
	I0328 01:05:19.010977 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.010991 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:19.010999 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:19.011070 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:19.048723 1131323 cri.go:89] found id: ""
	I0328 01:05:19.048748 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.048758 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:19.048769 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:19.048820 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:19.091761 1131323 cri.go:89] found id: ""
	I0328 01:05:19.091794 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.091803 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:19.091810 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:19.091863 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:19.134017 1131323 cri.go:89] found id: ""
	I0328 01:05:19.134049 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.134059 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:19.134065 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:19.134119 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:19.176070 1131323 cri.go:89] found id: ""
	I0328 01:05:19.176106 1131323 logs.go:276] 0 containers: []
	W0328 01:05:19.176118 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:19.176131 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:19.176155 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:19.261546 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:19.261584 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:19.261605 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:19.340271 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:19.340314 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:19.383625 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:19.383676 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:19.441635 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:19.441679 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:17.541876 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:20.040841 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:16.872723 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:19.372916 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:19.414384 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:21.912881 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:21.958362 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:21.974427 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:21.974528 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:22.013099 1131323 cri.go:89] found id: ""
	I0328 01:05:22.013139 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.013152 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:22.013160 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:22.013229 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:22.055558 1131323 cri.go:89] found id: ""
	I0328 01:05:22.055594 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.055604 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:22.055611 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:22.055668 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:22.106836 1131323 cri.go:89] found id: ""
	I0328 01:05:22.106870 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.106879 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:22.106886 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:22.106961 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:22.145135 1131323 cri.go:89] found id: ""
	I0328 01:05:22.145175 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.145189 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:22.145197 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:22.145266 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:22.183879 1131323 cri.go:89] found id: ""
	I0328 01:05:22.183909 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.183919 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:22.183926 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:22.183981 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:22.223087 1131323 cri.go:89] found id: ""
	I0328 01:05:22.223115 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.223124 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:22.223131 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:22.223209 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:22.263232 1131323 cri.go:89] found id: ""
	I0328 01:05:22.263262 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.263272 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:22.263279 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:22.263331 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:22.302919 1131323 cri.go:89] found id: ""
	I0328 01:05:22.302954 1131323 logs.go:276] 0 containers: []
	W0328 01:05:22.302967 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:22.302980 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:22.302998 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:22.358550 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:22.358596 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:22.374688 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:22.374722 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:22.453584 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:22.453609 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:22.453624 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:22.540983 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:22.541048 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:25.091773 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:25.107412 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:25.107484 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:25.143917 1131323 cri.go:89] found id: ""
	I0328 01:05:25.143944 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.143953 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:25.143960 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:25.144013 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:25.183615 1131323 cri.go:89] found id: ""
	I0328 01:05:25.183650 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.183659 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:25.183666 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:25.183729 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:25.221125 1131323 cri.go:89] found id: ""
	I0328 01:05:25.221158 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.221167 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:25.221174 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:25.221229 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:25.262023 1131323 cri.go:89] found id: ""
	I0328 01:05:25.262056 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.262065 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:25.262072 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:25.262134 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:25.297919 1131323 cri.go:89] found id: ""
	I0328 01:05:25.297948 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.297957 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:25.297964 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:25.298035 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:22.041977 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:24.542416 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:21.872312 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:23.872885 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:23.914459 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:25.916730 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:25.336582 1131323 cri.go:89] found id: ""
	I0328 01:05:25.336610 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.336620 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:25.336627 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:25.336690 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:25.375554 1131323 cri.go:89] found id: ""
	I0328 01:05:25.375589 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.375600 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:25.375609 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:25.375683 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:25.415941 1131323 cri.go:89] found id: ""
	I0328 01:05:25.415973 1131323 logs.go:276] 0 containers: []
	W0328 01:05:25.415984 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:25.415996 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:25.416013 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:25.430168 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:25.430196 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:25.507782 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:25.507805 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:25.507862 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:25.588780 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:25.588824 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:25.634958 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:25.634997 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:28.190651 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:28.205714 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:28.205794 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:28.242015 1131323 cri.go:89] found id: ""
	I0328 01:05:28.242056 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.242067 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:28.242077 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:28.242169 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:28.289132 1131323 cri.go:89] found id: ""
	I0328 01:05:28.289169 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.289182 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:28.289189 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:28.289256 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:28.327001 1131323 cri.go:89] found id: ""
	I0328 01:05:28.327031 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.327040 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:28.327052 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:28.327105 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:28.365474 1131323 cri.go:89] found id: ""
	I0328 01:05:28.365507 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.365516 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:28.365523 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:28.365587 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:28.405494 1131323 cri.go:89] found id: ""
	I0328 01:05:28.405553 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.405567 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:28.405576 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:28.405652 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:28.448464 1131323 cri.go:89] found id: ""
	I0328 01:05:28.448502 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.448512 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:28.448521 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:28.448594 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:28.488143 1131323 cri.go:89] found id: ""
	I0328 01:05:28.488172 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.488182 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:28.488189 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:28.488258 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:28.545977 1131323 cri.go:89] found id: ""
	I0328 01:05:28.546012 1131323 logs.go:276] 0 containers: []
	W0328 01:05:28.546024 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:28.546036 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:28.546050 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:28.629955 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:28.630001 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:28.670504 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:28.670536 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:28.722021 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:28.722069 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:28.737274 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:28.737310 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:28.824025 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:27.041205 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:29.041342 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:26.372037 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:28.373545 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:30.872569 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:28.414921 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:30.912980 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:31.324497 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:31.339715 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:31.339811 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:31.379017 1131323 cri.go:89] found id: ""
	I0328 01:05:31.379050 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.379062 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:31.379072 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:31.379138 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:31.420024 1131323 cri.go:89] found id: ""
	I0328 01:05:31.420055 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.420065 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:31.420071 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:31.420136 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:31.458732 1131323 cri.go:89] found id: ""
	I0328 01:05:31.458764 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.458773 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:31.458779 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:31.458835 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:31.504249 1131323 cri.go:89] found id: ""
	I0328 01:05:31.504280 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.504292 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:31.504300 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:31.504366 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:31.545284 1131323 cri.go:89] found id: ""
	I0328 01:05:31.545316 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.545324 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:31.545331 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:31.545385 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:31.583402 1131323 cri.go:89] found id: ""
	I0328 01:05:31.583434 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.583444 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:31.583453 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:31.583587 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:31.624411 1131323 cri.go:89] found id: ""
	I0328 01:05:31.624449 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.624462 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:31.624471 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:31.624528 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:31.666103 1131323 cri.go:89] found id: ""
	I0328 01:05:31.666144 1131323 logs.go:276] 0 containers: []
	W0328 01:05:31.666158 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:31.666173 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:31.666192 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:31.717595 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:31.717636 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:31.731606 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:31.731637 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:31.803302 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:31.803325 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:31.803339 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:31.885552 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:31.885590 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:34.432446 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:34.448002 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:34.448085 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:34.493207 1131323 cri.go:89] found id: ""
	I0328 01:05:34.493246 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.493259 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:34.493268 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:34.493337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:34.541838 1131323 cri.go:89] found id: ""
	I0328 01:05:34.541871 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.541883 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:34.541891 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:34.541956 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:34.582319 1131323 cri.go:89] found id: ""
	I0328 01:05:34.582357 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.582371 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:34.582380 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:34.582458 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:34.618753 1131323 cri.go:89] found id: ""
	I0328 01:05:34.618788 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.618801 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:34.618810 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:34.618882 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:34.656994 1131323 cri.go:89] found id: ""
	I0328 01:05:34.657027 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.657037 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:34.657043 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:34.657114 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:34.695214 1131323 cri.go:89] found id: ""
	I0328 01:05:34.695252 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.695264 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:34.695271 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:34.695337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:34.733688 1131323 cri.go:89] found id: ""
	I0328 01:05:34.733718 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.733731 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:34.733739 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:34.733808 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:34.771697 1131323 cri.go:89] found id: ""
	I0328 01:05:34.771729 1131323 logs.go:276] 0 containers: []
	W0328 01:05:34.771744 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:34.771758 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:34.771776 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:34.828190 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:34.828236 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:34.842741 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:34.842776 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:34.918494 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:34.918525 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:34.918541 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:35.012689 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:35.012747 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:31.042633 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:33.541295 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:35.541588 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:33.371991 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:35.872753 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:33.412886 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:35.914065 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:37.574759 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:37.590014 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:37.590128 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:37.626883 1131323 cri.go:89] found id: ""
	I0328 01:05:37.626914 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.626926 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:37.626935 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:37.627005 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:37.665171 1131323 cri.go:89] found id: ""
	I0328 01:05:37.665202 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.665215 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:37.665225 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:37.665294 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:37.702923 1131323 cri.go:89] found id: ""
	I0328 01:05:37.702963 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.702976 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:37.702984 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:37.703064 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:37.741148 1131323 cri.go:89] found id: ""
	I0328 01:05:37.741182 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.741191 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:37.741199 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:37.741269 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:37.782298 1131323 cri.go:89] found id: ""
	I0328 01:05:37.782331 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.782341 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:37.782348 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:37.782407 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:37.819056 1131323 cri.go:89] found id: ""
	I0328 01:05:37.819110 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.819124 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:37.819134 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:37.819215 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:37.862372 1131323 cri.go:89] found id: ""
	I0328 01:05:37.862414 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.862427 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:37.862436 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:37.862507 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:37.899639 1131323 cri.go:89] found id: ""
	I0328 01:05:37.899675 1131323 logs.go:276] 0 containers: []
	W0328 01:05:37.899689 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:37.899703 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:37.899721 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:37.978962 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:37.978990 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:37.979007 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:38.058972 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:38.059015 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:38.102975 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:38.103016 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:38.157994 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:38.158035 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:38.041091 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.041892 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:38.371787 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.373131 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:38.412214 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.415412 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:42.912341 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:40.673425 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:40.690969 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:40.691041 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:40.735552 1131323 cri.go:89] found id: ""
	I0328 01:05:40.735585 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.735594 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:40.735602 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:40.735669 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:40.816611 1131323 cri.go:89] found id: ""
	I0328 01:05:40.816648 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.816661 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:40.816669 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:40.816725 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:40.864093 1131323 cri.go:89] found id: ""
	I0328 01:05:40.864125 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.864138 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:40.864147 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:40.864218 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:40.908781 1131323 cri.go:89] found id: ""
	I0328 01:05:40.908817 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.908829 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:40.908846 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:40.908914 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:40.950330 1131323 cri.go:89] found id: ""
	I0328 01:05:40.950369 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.950382 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:40.950390 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:40.950481 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:40.989983 1131323 cri.go:89] found id: ""
	I0328 01:05:40.990041 1131323 logs.go:276] 0 containers: []
	W0328 01:05:40.990054 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:40.990063 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:40.990136 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:41.042428 1131323 cri.go:89] found id: ""
	I0328 01:05:41.042470 1131323 logs.go:276] 0 containers: []
	W0328 01:05:41.042481 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:41.042489 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:41.042560 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:41.089309 1131323 cri.go:89] found id: ""
	I0328 01:05:41.089342 1131323 logs.go:276] 0 containers: []
	W0328 01:05:41.089353 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:41.089363 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:41.089377 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:41.148502 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:41.148547 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:41.163889 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:41.163918 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:41.242825 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:41.242848 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:41.242861 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:41.322658 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:41.322702 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:43.865117 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:43.880642 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:43.880729 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:43.919519 1131323 cri.go:89] found id: ""
	I0328 01:05:43.919550 1131323 logs.go:276] 0 containers: []
	W0328 01:05:43.919559 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:43.919565 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:43.919622 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:43.957906 1131323 cri.go:89] found id: ""
	I0328 01:05:43.957936 1131323 logs.go:276] 0 containers: []
	W0328 01:05:43.957945 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:43.957951 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:43.958008 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:44.001448 1131323 cri.go:89] found id: ""
	I0328 01:05:44.001486 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.001497 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:44.001505 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:44.001573 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:44.039767 1131323 cri.go:89] found id: ""
	I0328 01:05:44.039801 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.039812 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:44.039818 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:44.039871 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:44.079441 1131323 cri.go:89] found id: ""
	I0328 01:05:44.079470 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.079480 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:44.079486 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:44.079541 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:44.116534 1131323 cri.go:89] found id: ""
	I0328 01:05:44.116584 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.116596 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:44.116604 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:44.116670 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:44.163335 1131323 cri.go:89] found id: ""
	I0328 01:05:44.163369 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.163381 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:44.163389 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:44.163457 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:44.201367 1131323 cri.go:89] found id: ""
	I0328 01:05:44.201403 1131323 logs.go:276] 0 containers: []
	W0328 01:05:44.201413 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:44.201424 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:44.201442 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:44.257485 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:44.257529 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:44.272489 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:44.272534 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:44.354442 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:44.354477 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:44.354498 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:44.436219 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:44.436262 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:42.044443 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:44.541648 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:42.872072 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:44.873552 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:44.913292 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:47.412489 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:46.982131 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:46.998022 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:46.998100 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:47.037167 1131323 cri.go:89] found id: ""
	I0328 01:05:47.037205 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.037217 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:47.037226 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:47.037295 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:47.076175 1131323 cri.go:89] found id: ""
	I0328 01:05:47.076213 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.076226 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:47.076235 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:47.076306 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:47.115193 1131323 cri.go:89] found id: ""
	I0328 01:05:47.115227 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.115237 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:47.115244 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:47.115297 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:47.154942 1131323 cri.go:89] found id: ""
	I0328 01:05:47.154976 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.154989 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:47.154998 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:47.155069 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:47.196571 1131323 cri.go:89] found id: ""
	I0328 01:05:47.196609 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.196622 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:47.196631 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:47.196707 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:47.237572 1131323 cri.go:89] found id: ""
	I0328 01:05:47.237616 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.237625 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:47.237633 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:47.237691 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:47.275208 1131323 cri.go:89] found id: ""
	I0328 01:05:47.275254 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.275265 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:47.275272 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:47.275329 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:47.313515 1131323 cri.go:89] found id: ""
	I0328 01:05:47.313555 1131323 logs.go:276] 0 containers: []
	W0328 01:05:47.313568 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:47.313582 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:47.313598 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:47.368993 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:47.369033 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:47.383063 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:47.383097 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:47.460239 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:47.460278 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:47.460298 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:47.538552 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:47.538594 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:50.084960 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:50.101764 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:50.101859 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:50.141457 1131323 cri.go:89] found id: ""
	I0328 01:05:50.141488 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.141497 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:50.141504 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:50.141557 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:50.178184 1131323 cri.go:89] found id: ""
	I0328 01:05:50.178220 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.178254 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:50.178263 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:50.178358 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:50.217908 1131323 cri.go:89] found id: ""
	I0328 01:05:50.217946 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.217959 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:50.217966 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:50.218027 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:50.256029 1131323 cri.go:89] found id: ""
	I0328 01:05:50.256058 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.256067 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:50.256074 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:50.256130 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:50.295054 1131323 cri.go:89] found id: ""
	I0328 01:05:50.295087 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.295100 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:50.295106 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:50.295165 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:47.042338 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:49.542501 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:47.372867 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:49.872948 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:49.913873 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:52.412600 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:50.334695 1131323 cri.go:89] found id: ""
	I0328 01:05:50.336588 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.336605 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:50.336614 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:50.336697 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:50.375968 1131323 cri.go:89] found id: ""
	I0328 01:05:50.376003 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.376013 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:50.376021 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:50.376091 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:50.417146 1131323 cri.go:89] found id: ""
	I0328 01:05:50.417175 1131323 logs.go:276] 0 containers: []
	W0328 01:05:50.417184 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:50.417194 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:50.417207 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:50.474090 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:50.474131 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:50.489006 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:50.489040 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:50.566220 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:50.566268 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:50.566286 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:50.645593 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:50.645653 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:53.190872 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:53.205223 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:53.205320 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:53.242396 1131323 cri.go:89] found id: ""
	I0328 01:05:53.242433 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.242445 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:53.242455 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:53.242524 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:53.281237 1131323 cri.go:89] found id: ""
	I0328 01:05:53.281275 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.281288 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:53.281297 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:53.281357 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:53.321239 1131323 cri.go:89] found id: ""
	I0328 01:05:53.321268 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.321287 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:53.321296 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:53.321358 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:53.359240 1131323 cri.go:89] found id: ""
	I0328 01:05:53.359269 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.359278 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:53.359284 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:53.359337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:53.396973 1131323 cri.go:89] found id: ""
	I0328 01:05:53.397008 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.397021 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:53.397030 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:53.397100 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:53.438368 1131323 cri.go:89] found id: ""
	I0328 01:05:53.438400 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.438408 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:53.438415 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:53.438477 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:53.474679 1131323 cri.go:89] found id: ""
	I0328 01:05:53.474708 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.474732 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:53.474742 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:53.474799 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:53.512509 1131323 cri.go:89] found id: ""
	I0328 01:05:53.512547 1131323 logs.go:276] 0 containers: []
	W0328 01:05:53.512560 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:53.512579 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:53.512599 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:53.569536 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:53.569580 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:53.584977 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:53.585016 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:53.657865 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:53.657895 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:53.657908 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:53.733158 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:53.733203 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:52.041508 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:54.541663 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:52.373317 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:54.872090 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:54.913464 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:57.413256 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:56.278693 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:56.291870 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:56.291949 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:56.332909 1131323 cri.go:89] found id: ""
	I0328 01:05:56.332943 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.332957 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:56.332965 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:56.333038 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:56.370608 1131323 cri.go:89] found id: ""
	I0328 01:05:56.370638 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.370649 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:56.370657 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:56.370721 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:56.408031 1131323 cri.go:89] found id: ""
	I0328 01:05:56.408068 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.408081 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:56.408100 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:56.408170 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:56.445057 1131323 cri.go:89] found id: ""
	I0328 01:05:56.445092 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.445105 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:56.445113 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:56.445177 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:56.486868 1131323 cri.go:89] found id: ""
	I0328 01:05:56.486898 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.486908 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:56.486914 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:56.486969 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:56.533594 1131323 cri.go:89] found id: ""
	I0328 01:05:56.533622 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.533632 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:56.533638 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:56.533702 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:56.569200 1131323 cri.go:89] found id: ""
	I0328 01:05:56.569237 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.569250 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:56.569258 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:56.569335 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:56.604919 1131323 cri.go:89] found id: ""
	I0328 01:05:56.604955 1131323 logs.go:276] 0 containers: []
	W0328 01:05:56.604968 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:56.604982 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:56.605011 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:56.654473 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:56.654513 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:56.671309 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:56.671339 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:56.739516 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:56.739543 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:56.739559 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:56.817445 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:56.817495 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:59.361711 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:05:59.375672 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:05:59.375750 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:05:59.414329 1131323 cri.go:89] found id: ""
	I0328 01:05:59.414360 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.414371 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:05:59.414379 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:05:59.414443 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:05:59.454813 1131323 cri.go:89] found id: ""
	I0328 01:05:59.454846 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.454855 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:05:59.454862 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:05:59.454917 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:05:59.492890 1131323 cri.go:89] found id: ""
	I0328 01:05:59.492924 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.492936 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:05:59.492946 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:05:59.493043 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:05:59.529412 1131323 cri.go:89] found id: ""
	I0328 01:05:59.529443 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.529454 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:05:59.529464 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:05:59.529521 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:05:59.568620 1131323 cri.go:89] found id: ""
	I0328 01:05:59.568655 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.568664 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:05:59.568671 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:05:59.568731 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:05:59.605826 1131323 cri.go:89] found id: ""
	I0328 01:05:59.605861 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.605874 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:05:59.605883 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:05:59.605955 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:05:59.645799 1131323 cri.go:89] found id: ""
	I0328 01:05:59.645833 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.645847 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:05:59.645856 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:05:59.645931 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:05:59.683866 1131323 cri.go:89] found id: ""
	I0328 01:05:59.683903 1131323 logs.go:276] 0 containers: []
	W0328 01:05:59.683916 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:05:59.683929 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:05:59.683953 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:05:59.726678 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:05:59.726711 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:05:59.779910 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:05:59.779954 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:05:59.795743 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:05:59.795774 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:05:59.875137 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:05:59.875162 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:05:59.875174 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:05:56.542345 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:58.542599 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:00.543094 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:57.372258 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:59.872483 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:05:59.912150 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:01.913694 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:02.455212 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:02.468850 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:02.468945 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:02.506347 1131323 cri.go:89] found id: ""
	I0328 01:06:02.506385 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.506397 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:02.506406 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:02.506484 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:02.546056 1131323 cri.go:89] found id: ""
	I0328 01:06:02.546085 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.546096 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:02.546103 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:02.546173 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:02.585343 1131323 cri.go:89] found id: ""
	I0328 01:06:02.585385 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.585398 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:02.585407 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:02.585563 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:02.625380 1131323 cri.go:89] found id: ""
	I0328 01:06:02.625414 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.625423 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:02.625429 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:02.625486 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:02.664653 1131323 cri.go:89] found id: ""
	I0328 01:06:02.664687 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.664701 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:02.664708 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:02.664764 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:02.704468 1131323 cri.go:89] found id: ""
	I0328 01:06:02.704498 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.704511 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:02.704519 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:02.704595 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:02.740969 1131323 cri.go:89] found id: ""
	I0328 01:06:02.740997 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.741007 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:02.741014 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:02.741102 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:02.782113 1131323 cri.go:89] found id: ""
	I0328 01:06:02.782150 1131323 logs.go:276] 0 containers: []
	W0328 01:06:02.782163 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:02.782185 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:02.782203 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:02.836804 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:02.836848 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:02.852266 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:02.852299 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:02.929441 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:02.929467 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:02.929484 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:03.008114 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:03.008156 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:03.041919 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:05.542209 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:02.372332 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:04.871689 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:04.413251 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:06.912348 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:05.554291 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:05.570208 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:05.570304 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:05.610887 1131323 cri.go:89] found id: ""
	I0328 01:06:05.610916 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.610926 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:05.610932 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:05.610991 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:05.651561 1131323 cri.go:89] found id: ""
	I0328 01:06:05.651600 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.651610 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:05.651616 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:05.651681 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:05.690801 1131323 cri.go:89] found id: ""
	I0328 01:06:05.690830 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.690843 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:05.690851 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:05.690920 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:05.729098 1131323 cri.go:89] found id: ""
	I0328 01:06:05.729136 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.729146 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:05.729153 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:05.729225 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:05.774461 1131323 cri.go:89] found id: ""
	I0328 01:06:05.774499 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.774520 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:05.774530 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:05.774602 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:05.812135 1131323 cri.go:89] found id: ""
	I0328 01:06:05.812166 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.812180 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:05.812188 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:05.812255 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:05.847744 1131323 cri.go:89] found id: ""
	I0328 01:06:05.847775 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.847786 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:05.847796 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:05.847863 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:05.885600 1131323 cri.go:89] found id: ""
	I0328 01:06:05.885641 1131323 logs.go:276] 0 containers: []
	W0328 01:06:05.885656 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:05.885669 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:05.885684 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:05.963837 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:05.963879 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:06.007342 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:06.007381 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:06.062798 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:06.062843 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:06.077547 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:06.077599 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:06.148373 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:08.648791 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:08.664082 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:08.664154 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:08.701746 1131323 cri.go:89] found id: ""
	I0328 01:06:08.701776 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.701789 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:08.701797 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:08.701855 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:08.739035 1131323 cri.go:89] found id: ""
	I0328 01:06:08.739066 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.739076 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:08.739083 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:08.739136 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:08.776128 1131323 cri.go:89] found id: ""
	I0328 01:06:08.776166 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.776180 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:08.776189 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:08.776255 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:08.816136 1131323 cri.go:89] found id: ""
	I0328 01:06:08.816172 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.816187 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:08.816196 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:08.816271 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:08.855675 1131323 cri.go:89] found id: ""
	I0328 01:06:08.855709 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.855722 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:08.855730 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:08.855802 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:08.893161 1131323 cri.go:89] found id: ""
	I0328 01:06:08.893198 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.893212 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:08.893221 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:08.893297 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:08.935498 1131323 cri.go:89] found id: ""
	I0328 01:06:08.935527 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.935540 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:08.935548 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:08.935622 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:08.971622 1131323 cri.go:89] found id: ""
	I0328 01:06:08.971657 1131323 logs.go:276] 0 containers: []
	W0328 01:06:08.971668 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:08.971679 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:08.971696 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:09.039975 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:09.040036 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:09.057877 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:09.057920 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:09.130093 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:09.130119 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:09.130135 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:09.217177 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:09.217228 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:08.040921 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:10.042895 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:06.872367 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:08.873187 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:08.914313 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:11.412330 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:11.762393 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:11.776356 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:11.776424 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:11.811982 1131323 cri.go:89] found id: ""
	I0328 01:06:11.812017 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.812030 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:11.812038 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:11.812103 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:11.849789 1131323 cri.go:89] found id: ""
	I0328 01:06:11.849817 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.849826 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:11.849833 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:11.849884 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:11.890455 1131323 cri.go:89] found id: ""
	I0328 01:06:11.890488 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.890497 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:11.890503 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:11.890559 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:11.929047 1131323 cri.go:89] found id: ""
	I0328 01:06:11.929093 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.929102 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:11.929108 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:11.929164 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:11.969536 1131323 cri.go:89] found id: ""
	I0328 01:06:11.969566 1131323 logs.go:276] 0 containers: []
	W0328 01:06:11.969576 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:11.969583 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:11.969641 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:12.008779 1131323 cri.go:89] found id: ""
	I0328 01:06:12.008811 1131323 logs.go:276] 0 containers: []
	W0328 01:06:12.008821 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:12.008828 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:12.008890 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:12.044061 1131323 cri.go:89] found id: ""
	I0328 01:06:12.044091 1131323 logs.go:276] 0 containers: []
	W0328 01:06:12.044104 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:12.044112 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:12.044176 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:12.082307 1131323 cri.go:89] found id: ""
	I0328 01:06:12.082336 1131323 logs.go:276] 0 containers: []
	W0328 01:06:12.082346 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:12.082357 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:12.082369 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:12.133044 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:12.133091 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:12.148584 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:12.148624 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:12.218799 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:12.218834 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:12.218852 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:12.295580 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:12.295623 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:14.842815 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:14.856385 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:14.856456 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:14.895351 1131323 cri.go:89] found id: ""
	I0328 01:06:14.895409 1131323 logs.go:276] 0 containers: []
	W0328 01:06:14.895418 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:14.895424 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:14.895476 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:14.930333 1131323 cri.go:89] found id: ""
	I0328 01:06:14.930366 1131323 logs.go:276] 0 containers: []
	W0328 01:06:14.930380 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:14.930389 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:14.930461 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:14.968701 1131323 cri.go:89] found id: ""
	I0328 01:06:14.968742 1131323 logs.go:276] 0 containers: []
	W0328 01:06:14.968754 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:14.968767 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:14.968867 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:15.004580 1131323 cri.go:89] found id: ""
	I0328 01:06:15.004613 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.004626 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:15.004634 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:15.004700 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:15.046702 1131323 cri.go:89] found id: ""
	I0328 01:06:15.046726 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.046736 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:15.046742 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:15.046795 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:15.088693 1131323 cri.go:89] found id: ""
	I0328 01:06:15.088725 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.088734 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:15.088741 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:15.088797 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:15.130293 1131323 cri.go:89] found id: ""
	I0328 01:06:15.130324 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.130333 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:15.130339 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:15.130394 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:15.172381 1131323 cri.go:89] found id: ""
	I0328 01:06:15.172408 1131323 logs.go:276] 0 containers: []
	W0328 01:06:15.172417 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:15.172427 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:15.172440 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:15.225631 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:15.225674 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:15.241251 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:15.241294 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:15.319701 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:15.319731 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:15.319747 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:12.540755 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:14.541618 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:11.371580 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:13.371640 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:15.373147 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:13.911792 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:15.912479 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:17.913926 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:15.406813 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:15.406853 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:17.993893 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:18.007755 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:18.007843 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:18.047750 1131323 cri.go:89] found id: ""
	I0328 01:06:18.047777 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.047786 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:18.047797 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:18.047855 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:18.088264 1131323 cri.go:89] found id: ""
	I0328 01:06:18.088291 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.088303 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:18.088311 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:18.088369 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:18.127485 1131323 cri.go:89] found id: ""
	I0328 01:06:18.127514 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.127523 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:18.127530 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:18.127581 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:18.167462 1131323 cri.go:89] found id: ""
	I0328 01:06:18.167496 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.167510 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:18.167516 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:18.167571 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:18.209536 1131323 cri.go:89] found id: ""
	I0328 01:06:18.209571 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.209583 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:18.209591 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:18.209662 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:18.247565 1131323 cri.go:89] found id: ""
	I0328 01:06:18.247601 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.247614 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:18.247623 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:18.247701 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:18.288123 1131323 cri.go:89] found id: ""
	I0328 01:06:18.288162 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.288172 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:18.288179 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:18.288242 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:18.328132 1131323 cri.go:89] found id: ""
	I0328 01:06:18.328161 1131323 logs.go:276] 0 containers: []
	W0328 01:06:18.328170 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:18.328181 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:18.328193 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:18.403245 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:18.403287 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:18.403305 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:18.483446 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:18.483500 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:18.527357 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:18.527392 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:18.588402 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:18.588463 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:16.542137 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:18.542554 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:20.546396 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:17.872147 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:20.373000 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:20.412369 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:22.412661 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:21.103566 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:21.117538 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:21.117616 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:21.174215 1131323 cri.go:89] found id: ""
	I0328 01:06:21.174270 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.174284 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:21.174293 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:21.174364 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:21.238666 1131323 cri.go:89] found id: ""
	I0328 01:06:21.238707 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.238722 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:21.238730 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:21.238803 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:21.303510 1131323 cri.go:89] found id: ""
	I0328 01:06:21.303543 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.303553 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:21.303559 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:21.303614 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:21.345823 1131323 cri.go:89] found id: ""
	I0328 01:06:21.345853 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.345862 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:21.345870 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:21.345940 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:21.386205 1131323 cri.go:89] found id: ""
	I0328 01:06:21.386248 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.386261 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:21.386269 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:21.386335 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:21.427424 1131323 cri.go:89] found id: ""
	I0328 01:06:21.427457 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.427470 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:21.427478 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:21.427546 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:21.465054 1131323 cri.go:89] found id: ""
	I0328 01:06:21.465087 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.465099 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:21.465107 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:21.465177 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:21.507197 1131323 cri.go:89] found id: ""
	I0328 01:06:21.507229 1131323 logs.go:276] 0 containers: []
	W0328 01:06:21.507238 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:21.507248 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:21.507263 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:21.586657 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:21.586709 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:21.633702 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:21.633739 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:21.688960 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:21.688999 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:21.704675 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:21.704714 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:21.781612 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:24.282521 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:24.297096 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:24.297185 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:24.338745 1131323 cri.go:89] found id: ""
	I0328 01:06:24.338780 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.338793 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:24.338802 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:24.338872 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:24.375499 1131323 cri.go:89] found id: ""
	I0328 01:06:24.375528 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.375540 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:24.375548 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:24.375616 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:24.410939 1131323 cri.go:89] found id: ""
	I0328 01:06:24.410966 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.410978 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:24.410986 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:24.411042 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:24.455316 1131323 cri.go:89] found id: ""
	I0328 01:06:24.455345 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.455354 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:24.455360 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:24.455427 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:24.493177 1131323 cri.go:89] found id: ""
	I0328 01:06:24.493206 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.493219 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:24.493228 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:24.493300 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:24.533612 1131323 cri.go:89] found id: ""
	I0328 01:06:24.533648 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.533659 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:24.533668 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:24.533743 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:24.573960 1131323 cri.go:89] found id: ""
	I0328 01:06:24.573998 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.574014 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:24.574020 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:24.574074 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:24.617282 1131323 cri.go:89] found id: ""
	I0328 01:06:24.617319 1131323 logs.go:276] 0 containers: []
	W0328 01:06:24.617333 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:24.617346 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:24.617364 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:24.691660 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:24.691688 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:24.691707 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:24.773138 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:24.773180 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:24.820408 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:24.820440 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:24.875901 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:24.875940 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:23.041030 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:25.041064 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:22.874513 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:25.378939 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:24.413732 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:26.912433 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:27.392663 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:27.407958 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:27.408046 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:27.446750 1131323 cri.go:89] found id: ""
	I0328 01:06:27.446782 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.446792 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:27.446799 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:27.446872 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:27.489199 1131323 cri.go:89] found id: ""
	I0328 01:06:27.489236 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.489249 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:27.489258 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:27.489316 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:27.525754 1131323 cri.go:89] found id: ""
	I0328 01:06:27.525787 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.525796 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:27.525803 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:27.525861 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:27.560817 1131323 cri.go:89] found id: ""
	I0328 01:06:27.560849 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.560858 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:27.560866 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:27.560930 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:27.597706 1131323 cri.go:89] found id: ""
	I0328 01:06:27.597736 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.597744 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:27.597750 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:27.597821 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:27.635170 1131323 cri.go:89] found id: ""
	I0328 01:06:27.635211 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.635223 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:27.635232 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:27.635299 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:27.672043 1131323 cri.go:89] found id: ""
	I0328 01:06:27.672079 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.672091 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:27.672099 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:27.672166 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:27.711401 1131323 cri.go:89] found id: ""
	I0328 01:06:27.711435 1131323 logs.go:276] 0 containers: []
	W0328 01:06:27.711448 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:27.711468 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:27.711488 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:27.755172 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:27.755211 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:27.807588 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:27.807632 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:27.823557 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:27.823589 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:27.905292 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:27.905316 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:27.905329 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:27.041105 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:29.041205 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:27.873797 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:30.374214 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:29.412378 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:31.413211 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:30.491565 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:30.505601 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:30.505667 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:30.541894 1131323 cri.go:89] found id: ""
	I0328 01:06:30.541929 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.541940 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:30.541949 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:30.542029 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:30.581484 1131323 cri.go:89] found id: ""
	I0328 01:06:30.581514 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.581532 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:30.581538 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:30.581613 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:30.624788 1131323 cri.go:89] found id: ""
	I0328 01:06:30.624830 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.624842 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:30.624850 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:30.624922 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:30.664373 1131323 cri.go:89] found id: ""
	I0328 01:06:30.664403 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.664413 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:30.664420 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:30.664489 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:30.702885 1131323 cri.go:89] found id: ""
	I0328 01:06:30.702917 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.702928 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:30.702934 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:30.703006 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:30.748170 1131323 cri.go:89] found id: ""
	I0328 01:06:30.748205 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.748217 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:30.748226 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:30.748316 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:30.785218 1131323 cri.go:89] found id: ""
	I0328 01:06:30.785255 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.785268 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:30.785276 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:30.785343 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:30.825529 1131323 cri.go:89] found id: ""
	I0328 01:06:30.825555 1131323 logs.go:276] 0 containers: []
	W0328 01:06:30.825565 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:30.825575 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:30.825589 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:30.881353 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:30.881391 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:30.896682 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:30.896718 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:30.973356 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:30.973386 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:30.973402 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:31.049014 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:31.049047 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:33.594365 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:33.609372 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:33.609460 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:33.648699 1131323 cri.go:89] found id: ""
	I0328 01:06:33.648728 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.648749 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:33.648757 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:33.648829 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:33.686707 1131323 cri.go:89] found id: ""
	I0328 01:06:33.686744 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.686758 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:33.686767 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:33.686832 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:33.723091 1131323 cri.go:89] found id: ""
	I0328 01:06:33.723121 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.723130 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:33.723136 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:33.723187 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:33.763439 1131323 cri.go:89] found id: ""
	I0328 01:06:33.763471 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.763481 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:33.763488 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:33.763544 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:33.812236 1131323 cri.go:89] found id: ""
	I0328 01:06:33.812271 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.812285 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:33.812294 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:33.812365 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:33.849421 1131323 cri.go:89] found id: ""
	I0328 01:06:33.849454 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.849465 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:33.849473 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:33.849528 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:33.888020 1131323 cri.go:89] found id: ""
	I0328 01:06:33.888051 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.888065 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:33.888078 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:33.888145 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:33.925952 1131323 cri.go:89] found id: ""
	I0328 01:06:33.925990 1131323 logs.go:276] 0 containers: []
	W0328 01:06:33.926003 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:33.926016 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:33.926034 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:33.976695 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:33.976734 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:33.991708 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:33.991752 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:34.068244 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:34.068276 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:34.068293 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:34.155843 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:34.155885 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:31.041375 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:33.041526 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:35.541169 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:32.872009 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:34.873043 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:33.913191 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:36.413213 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:36.697480 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:36.712322 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:36.712420 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:36.749541 1131323 cri.go:89] found id: ""
	I0328 01:06:36.749570 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.749579 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:36.749587 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:36.749655 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:36.788226 1131323 cri.go:89] found id: ""
	I0328 01:06:36.788255 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.788264 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:36.788270 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:36.788323 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:36.823824 1131323 cri.go:89] found id: ""
	I0328 01:06:36.823856 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.823866 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:36.823872 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:36.823927 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:36.869331 1131323 cri.go:89] found id: ""
	I0328 01:06:36.869362 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.869371 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:36.869378 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:36.869473 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:36.907918 1131323 cri.go:89] found id: ""
	I0328 01:06:36.907950 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.907960 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:36.907966 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:36.908028 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:36.947708 1131323 cri.go:89] found id: ""
	I0328 01:06:36.947738 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.947749 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:36.947757 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:36.947824 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:36.986200 1131323 cri.go:89] found id: ""
	I0328 01:06:36.986251 1131323 logs.go:276] 0 containers: []
	W0328 01:06:36.986266 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:36.986275 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:36.986350 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:37.026670 1131323 cri.go:89] found id: ""
	I0328 01:06:37.026698 1131323 logs.go:276] 0 containers: []
	W0328 01:06:37.026708 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:37.026718 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:37.026732 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:37.079891 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:37.079933 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:37.094347 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:37.094378 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:37.168653 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:37.168681 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:37.168695 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:37.247909 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:37.247949 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:39.791285 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:39.807921 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:39.808000 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:39.851460 1131323 cri.go:89] found id: ""
	I0328 01:06:39.851499 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.851512 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:39.851520 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:39.851593 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:39.889506 1131323 cri.go:89] found id: ""
	I0328 01:06:39.889541 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.889554 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:39.889564 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:39.889632 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:39.930291 1131323 cri.go:89] found id: ""
	I0328 01:06:39.930321 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.930331 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:39.930337 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:39.930400 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:39.965121 1131323 cri.go:89] found id: ""
	I0328 01:06:39.965160 1131323 logs.go:276] 0 containers: []
	W0328 01:06:39.965174 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:39.965183 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:39.965252 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:40.003217 1131323 cri.go:89] found id: ""
	I0328 01:06:40.003248 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.003258 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:40.003264 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:40.003319 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:40.042702 1131323 cri.go:89] found id: ""
	I0328 01:06:40.042737 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.042749 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:40.042759 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:40.042826 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:40.079733 1131323 cri.go:89] found id: ""
	I0328 01:06:40.079769 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.079780 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:40.079788 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:40.079852 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:40.117066 1131323 cri.go:89] found id: ""
	I0328 01:06:40.117098 1131323 logs.go:276] 0 containers: []
	W0328 01:06:40.117107 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:40.117117 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:40.117130 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:40.158589 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:40.158623 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:40.210997 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:40.211049 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:40.225419 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:40.225453 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:40.305356 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:40.305385 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:40.305401 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:37.541534 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:39.541905 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:36.874220 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:39.373763 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:38.413719 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:40.912939 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:42.913528 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:42.896394 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:42.912285 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:42.912355 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:42.949381 1131323 cri.go:89] found id: ""
	I0328 01:06:42.949411 1131323 logs.go:276] 0 containers: []
	W0328 01:06:42.949420 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:42.949427 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:42.949496 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:42.985325 1131323 cri.go:89] found id: ""
	I0328 01:06:42.985358 1131323 logs.go:276] 0 containers: []
	W0328 01:06:42.985371 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:42.985388 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:42.985456 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:43.023570 1131323 cri.go:89] found id: ""
	I0328 01:06:43.023616 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.023630 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:43.023638 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:43.023714 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:43.062995 1131323 cri.go:89] found id: ""
	I0328 01:06:43.063025 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.063036 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:43.063042 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:43.063111 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:43.101666 1131323 cri.go:89] found id: ""
	I0328 01:06:43.101704 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.101713 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:43.101720 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:43.101789 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:43.150713 1131323 cri.go:89] found id: ""
	I0328 01:06:43.150745 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.150757 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:43.150765 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:43.150830 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:43.193449 1131323 cri.go:89] found id: ""
	I0328 01:06:43.193479 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.193487 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:43.193495 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:43.193559 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:43.237641 1131323 cri.go:89] found id: ""
	I0328 01:06:43.237673 1131323 logs.go:276] 0 containers: []
	W0328 01:06:43.237682 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:43.237698 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:43.237714 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:43.287282 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:43.287320 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:43.303307 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:43.303343 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:43.383597 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:43.383619 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:43.383632 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:43.467874 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:43.467914 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:42.041406 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:44.540550 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:41.874286 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:44.372393 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:45.410973 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:47.412852 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:46.011081 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:46.025731 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:46.025824 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:46.064336 1131323 cri.go:89] found id: ""
	I0328 01:06:46.064371 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.064385 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:46.064394 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:46.064451 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:46.104493 1131323 cri.go:89] found id: ""
	I0328 01:06:46.104530 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.104550 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:46.104559 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:46.104636 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:46.147546 1131323 cri.go:89] found id: ""
	I0328 01:06:46.147582 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.147594 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:46.147602 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:46.147656 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:46.186162 1131323 cri.go:89] found id: ""
	I0328 01:06:46.186197 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.186207 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:46.186213 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:46.186296 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:46.230412 1131323 cri.go:89] found id: ""
	I0328 01:06:46.230450 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.230464 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:46.230473 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:46.230552 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:46.266000 1131323 cri.go:89] found id: ""
	I0328 01:06:46.266037 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.266050 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:46.266059 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:46.266126 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:46.301031 1131323 cri.go:89] found id: ""
	I0328 01:06:46.301065 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.301077 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:46.301084 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:46.301155 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:46.339222 1131323 cri.go:89] found id: ""
	I0328 01:06:46.339248 1131323 logs.go:276] 0 containers: []
	W0328 01:06:46.339258 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:46.339271 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:46.339290 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:46.352558 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:46.352595 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:46.427283 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:46.427308 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:46.427325 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:46.512134 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:46.512178 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:46.558276 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:46.558307 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:49.113455 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:49.127554 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:49.127645 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:49.169380 1131323 cri.go:89] found id: ""
	I0328 01:06:49.169421 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.169435 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:49.169444 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:49.169511 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:49.204540 1131323 cri.go:89] found id: ""
	I0328 01:06:49.204568 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.204579 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:49.204596 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:49.204664 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:49.243074 1131323 cri.go:89] found id: ""
	I0328 01:06:49.243102 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.243112 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:49.243119 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:49.243170 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:49.281264 1131323 cri.go:89] found id: ""
	I0328 01:06:49.281301 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.281314 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:49.281322 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:49.281391 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:49.320473 1131323 cri.go:89] found id: ""
	I0328 01:06:49.320505 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.320514 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:49.320521 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:49.320592 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:49.357715 1131323 cri.go:89] found id: ""
	I0328 01:06:49.357749 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.357759 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:49.357766 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:49.357823 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:49.398427 1131323 cri.go:89] found id: ""
	I0328 01:06:49.398464 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.398477 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:49.398498 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:49.398576 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:49.439921 1131323 cri.go:89] found id: ""
	I0328 01:06:49.439956 1131323 logs.go:276] 0 containers: []
	W0328 01:06:49.439969 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:49.439982 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:49.440003 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:49.557260 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:49.557289 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:49.557312 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:49.640105 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:49.640169 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:49.683153 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:49.683185 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:49.737420 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:49.737463 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:46.541377 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:49.041761 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:46.374869 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:48.875897 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:49.912535 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:51.912893 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:52.253208 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:52.268572 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:52.268649 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:52.305136 1131323 cri.go:89] found id: ""
	I0328 01:06:52.305180 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.305193 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:52.305202 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:52.305273 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:52.344774 1131323 cri.go:89] found id: ""
	I0328 01:06:52.344806 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.344816 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:52.344823 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:52.344885 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:52.382127 1131323 cri.go:89] found id: ""
	I0328 01:06:52.382174 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.382185 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:52.382200 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:52.382280 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:52.421340 1131323 cri.go:89] found id: ""
	I0328 01:06:52.421368 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.421377 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:52.421383 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:52.421433 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:52.460046 1131323 cri.go:89] found id: ""
	I0328 01:06:52.460084 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.460100 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:52.460107 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:52.460164 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:52.500067 1131323 cri.go:89] found id: ""
	I0328 01:06:52.500094 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.500102 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:52.500109 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:52.500171 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:52.537614 1131323 cri.go:89] found id: ""
	I0328 01:06:52.537646 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.537671 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:52.537680 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:52.537745 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:52.577362 1131323 cri.go:89] found id: ""
	I0328 01:06:52.577392 1131323 logs.go:276] 0 containers: []
	W0328 01:06:52.577402 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:52.577417 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:52.577434 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:52.633638 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:52.633689 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:52.650762 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:52.650796 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:52.729436 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:52.729470 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:52.729484 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:52.818193 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:52.818248 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
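	Note: the probe-and-gather loop above can be reproduced by hand on the node. A minimal sketch, using only commands that already appear verbatim in this log (container names, journalctl units, and dmesg arguments are unchanged):

	    # ask the CRI runtime whether a control-plane container exists for a given name
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # when nothing is found, minikube falls back to these diagnostics
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo crictl ps -a || sudo docker ps -a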
	I0328 01:06:51.540541 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:53.541340 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:55.542165 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:51.376916 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:53.872313 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:55.873335 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:54.411986 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:56.412892 1130949 pod_ready.go:102] pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:55.362950 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:55.378461 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:55.378577 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:55.419968 1131323 cri.go:89] found id: ""
	I0328 01:06:55.419995 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.420005 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:55.420010 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:55.420072 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:55.464308 1131323 cri.go:89] found id: ""
	I0328 01:06:55.464341 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.464350 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:55.464357 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:55.464421 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:55.523059 1131323 cri.go:89] found id: ""
	I0328 01:06:55.523092 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.523106 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:55.523114 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:55.523186 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:55.570957 1131323 cri.go:89] found id: ""
	I0328 01:06:55.570990 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.571004 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:55.571013 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:55.571077 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:55.606712 1131323 cri.go:89] found id: ""
	I0328 01:06:55.606739 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.606749 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:55.606755 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:55.606817 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:55.646445 1131323 cri.go:89] found id: ""
	I0328 01:06:55.646477 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.646486 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:55.646493 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:55.646548 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:55.685176 1131323 cri.go:89] found id: ""
	I0328 01:06:55.685208 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.685217 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:55.685225 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:55.685289 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:55.722948 1131323 cri.go:89] found id: ""
	I0328 01:06:55.722984 1131323 logs.go:276] 0 containers: []
	W0328 01:06:55.722995 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:55.723006 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:55.723022 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:55.797332 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:55.797368 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:55.797385 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:55.877648 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:55.877688 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:55.918966 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:55.918997 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:55.971226 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:55.971272 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:58.488464 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:06:58.504999 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:06:58.505088 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:06:58.549290 1131323 cri.go:89] found id: ""
	I0328 01:06:58.549325 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.549338 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:06:58.549347 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:06:58.549414 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:06:58.589222 1131323 cri.go:89] found id: ""
	I0328 01:06:58.589252 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.589261 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:06:58.589271 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:06:58.589337 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:06:58.626470 1131323 cri.go:89] found id: ""
	I0328 01:06:58.626499 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.626508 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:06:58.626514 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:06:58.626578 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:06:58.671634 1131323 cri.go:89] found id: ""
	I0328 01:06:58.671663 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.671674 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:06:58.671683 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:06:58.671744 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:06:58.707335 1131323 cri.go:89] found id: ""
	I0328 01:06:58.707370 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.707381 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:06:58.707390 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:06:58.707459 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:06:58.745635 1131323 cri.go:89] found id: ""
	I0328 01:06:58.745666 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.745679 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:06:58.745687 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:06:58.745752 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:06:58.792172 1131323 cri.go:89] found id: ""
	I0328 01:06:58.792205 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.792216 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:06:58.792225 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:06:58.792287 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:06:58.840027 1131323 cri.go:89] found id: ""
	I0328 01:06:58.840063 1131323 logs.go:276] 0 containers: []
	W0328 01:06:58.840075 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:06:58.840089 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:06:58.840108 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:06:58.921964 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:06:58.921988 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:06:58.922003 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:06:59.016935 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:06:59.016980 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:06:59.065747 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:06:59.065788 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:06:59.119189 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:06:59.119231 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:06:58.042362 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:00.544351 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:57.875649 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:00.371953 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:06:58.406154 1130949 pod_ready.go:81] duration metric: took 4m0.000981669s for pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace to be "Ready" ...
	E0328 01:06:58.406192 1130949 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-swsxp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0328 01:06:58.406218 1130949 pod_ready.go:38] duration metric: took 4m11.713667334s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
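	Note: the 4m0s wait that timed out above polls the pod's Ready condition. A rough kubectl equivalent, offered only as an illustration (the jsonpath expression is an assumption, not the code minikube runs; the pod name is taken from the line above):

	    kubectl -n kube-system get pod metrics-server-57f55c9bc5-swsxp \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'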
	I0328 01:06:58.406275 1130949 kubeadm.go:591] duration metric: took 4m19.018883002s to restartPrimaryControlPlane
	W0328 01:06:58.406372 1130949 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:06:58.406432 1130949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:07:01.637081 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:07:01.652557 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:07:01.652634 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:07:01.691795 1131323 cri.go:89] found id: ""
	I0328 01:07:01.691832 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.691846 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:07:01.691854 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:07:01.691927 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:07:01.732815 1131323 cri.go:89] found id: ""
	I0328 01:07:01.732850 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.732861 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:07:01.732868 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:07:01.732938 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:07:01.776370 1131323 cri.go:89] found id: ""
	I0328 01:07:01.776408 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.776422 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:07:01.776431 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:07:01.776501 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:07:01.821260 1131323 cri.go:89] found id: ""
	I0328 01:07:01.821290 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.821301 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:07:01.821308 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:07:01.821377 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:07:01.860666 1131323 cri.go:89] found id: ""
	I0328 01:07:01.860696 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.860708 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:07:01.860719 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:07:01.860787 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:07:01.898255 1131323 cri.go:89] found id: ""
	I0328 01:07:01.898291 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.898304 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:07:01.898314 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:07:01.898383 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:07:01.937770 1131323 cri.go:89] found id: ""
	I0328 01:07:01.937809 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.937822 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:07:01.937830 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:07:01.937901 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:07:01.976946 1131323 cri.go:89] found id: ""
	I0328 01:07:01.976981 1131323 logs.go:276] 0 containers: []
	W0328 01:07:01.976994 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:07:01.977008 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:07:01.977027 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:07:02.062804 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:07:02.062845 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:07:02.110750 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:07:02.110783 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:07:02.179633 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:07:02.179677 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:07:02.203131 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:07:02.203181 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:07:02.303281 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:07:04.804238 1131323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:07:04.819654 1131323 kubeadm.go:591] duration metric: took 4m2.527630194s to restartPrimaryControlPlane
	W0328 01:07:04.819747 1131323 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:07:04.819787 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:07:03.041692 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:05.540478 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:02.372472 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:04.376413 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:07.322821 1131323 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.50300166s)
	I0328 01:07:07.322918 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:07:07.338692 1131323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:07:07.349812 1131323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:07:07.361566 1131323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:07:07.361597 1131323 kubeadm.go:156] found existing configuration files:
	
	I0328 01:07:07.361667 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:07:07.372926 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:07:07.373008 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:07:07.383770 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:07:07.394260 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:07:07.394332 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:07:07.405874 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:07:07.417177 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:07:07.417254 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:07:07.428589 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:07:07.438788 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:07:07.438845 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
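	Note: the config check and cleanup sequence above reduces to the following shell logic; a minimal sketch reconstructed from the grep and rm commands in this log (file list and endpoint are taken from the lines above):

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      if ! sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f; then
	        sudo rm -f /etc/kubernetes/$f   # missing or stale; kubeadm init regenerates it
	      fi
	    done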
	I0328 01:07:07.449649 1131323 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:07:07.533886 1131323 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0328 01:07:07.533989 1131323 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:07:07.693599 1131323 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:07:07.693736 1131323 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:07:07.693852 1131323 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:07:07.910557 1131323 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:07:07.912634 1131323 out.go:204]   - Generating certificates and keys ...
	I0328 01:07:07.912743 1131323 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:07:07.912855 1131323 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:07:07.912984 1131323 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:07:07.913098 1131323 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:07:07.913212 1131323 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:07:07.913298 1131323 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:07:07.913384 1131323 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:07:07.913569 1131323 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:07:07.913947 1131323 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:07:07.914429 1131323 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:07:07.914649 1131323 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:07:07.914728 1131323 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:07:08.225778 1131323 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:07:08.353927 1131323 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:07:08.631240 1131323 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:07:08.824445 1131323 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:07:08.840240 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:07:08.841200 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:07:08.841315 1131323 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:07:08.997129 1131323 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:07:08.999073 1131323 out.go:204]   - Booting up control plane ...
	I0328 01:07:08.999224 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:07:09.014811 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:07:09.015898 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:07:09.016727 1131323 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:07:09.019426 1131323 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:07:07.541363 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:10.041094 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:06.874606 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:09.372537 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:12.540137 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:14.541608 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:11.372643 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:13.873029 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:16.541814 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:19.047225 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:16.372556 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:18.871954 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:20.872047 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:21.542880 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:24.041786 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:22.872845 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:24.873747 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:26.042186 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:28.541303 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:30.540610 1130949 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.134147754s)
	I0328 01:07:30.540688 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:07:30.558971 1130949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:07:30.570331 1130949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:07:30.581192 1130949 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:07:30.581246 1130949 kubeadm.go:156] found existing configuration files:
	
	I0328 01:07:30.581306 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:07:30.592337 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:07:30.592410 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:07:30.603288 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:07:30.613714 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:07:30.613776 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:07:30.624281 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:07:30.634569 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:07:30.634644 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:07:30.647279 1130949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:07:30.658554 1130949 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:07:30.658646 1130949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:07:30.670364 1130949 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:07:30.730349 1130949 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0328 01:07:30.730414 1130949 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:07:30.887056 1130949 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:07:30.887234 1130949 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:07:30.887385 1130949 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:07:31.104288 1130949 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:07:27.373135 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:29.373436 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:31.106496 1130949 out.go:204]   - Generating certificates and keys ...
	I0328 01:07:31.106628 1130949 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:07:31.106697 1130949 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:07:31.106765 1130949 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:07:31.106826 1130949 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:07:31.106892 1130949 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:07:31.107528 1130949 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:07:31.108302 1130949 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:07:31.112246 1130949 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:07:31.112762 1130949 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:07:31.113711 1130949 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:07:31.115230 1130949 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:07:31.115284 1130949 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:07:31.297632 1130949 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:07:32.446275 1130949 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:07:32.565869 1130949 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:07:32.641288 1130949 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:07:32.817229 1130949 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:07:32.817814 1130949 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:07:32.820366 1130949 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:07:32.822328 1130949 out.go:204]   - Booting up control plane ...
	I0328 01:07:32.822467 1130949 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:07:32.822550 1130949 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:07:32.822990 1130949 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:07:32.846800 1130949 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:07:32.847829 1130949 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:07:32.847902 1130949 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:07:31.044103 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:33.542106 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:35.542875 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:31.873591 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:33.875737 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:35.881819 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:32.992001 1130949 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:07:38.997010 1130949 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003888 seconds
	I0328 01:07:39.012971 1130949 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 01:07:39.036328 1130949 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 01:07:39.569806 1130949 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 01:07:39.570135 1130949 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-808809 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 01:07:40.085165 1130949 kubeadm.go:309] [bootstrap-token] Using token: 4zk5zi.uttj4zihedk5oj6k
	I0328 01:07:40.086719 1130949 out.go:204]   - Configuring RBAC rules ...
	I0328 01:07:40.086873 1130949 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 01:07:40.096373 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 01:07:40.106484 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 01:07:40.110525 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 01:07:40.120015 1130949 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 01:07:40.129060 1130949 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 01:07:40.141167 1130949 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 01:07:40.415429 1130949 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 01:07:40.507275 1130949 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 01:07:40.507333 1130949 kubeadm.go:309] 
	I0328 01:07:40.507551 1130949 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 01:07:40.507617 1130949 kubeadm.go:309] 
	I0328 01:07:40.507860 1130949 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 01:07:40.507891 1130949 kubeadm.go:309] 
	I0328 01:07:40.507947 1130949 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 01:07:40.508057 1130949 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 01:07:40.508140 1130949 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 01:07:40.508157 1130949 kubeadm.go:309] 
	I0328 01:07:40.508250 1130949 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 01:07:40.508264 1130949 kubeadm.go:309] 
	I0328 01:07:40.508329 1130949 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 01:07:40.508344 1130949 kubeadm.go:309] 
	I0328 01:07:40.508421 1130949 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 01:07:40.508539 1130949 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 01:07:40.508626 1130949 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 01:07:40.508632 1130949 kubeadm.go:309] 
	I0328 01:07:40.508804 1130949 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 01:07:40.508970 1130949 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 01:07:40.508990 1130949 kubeadm.go:309] 
	I0328 01:07:40.509155 1130949 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4zk5zi.uttj4zihedk5oj6k \
	I0328 01:07:40.509474 1130949 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 \
	I0328 01:07:40.509514 1130949 kubeadm.go:309] 	--control-plane 
	I0328 01:07:40.509524 1130949 kubeadm.go:309] 
	I0328 01:07:40.509641 1130949 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 01:07:40.509655 1130949 kubeadm.go:309] 
	I0328 01:07:40.509767 1130949 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4zk5zi.uttj4zihedk5oj6k \
	I0328 01:07:40.509932 1130949 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 
	I0328 01:07:40.510139 1130949 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:07:40.510157 1130949 cni.go:84] Creating CNI manager for ""
	I0328 01:07:40.510166 1130949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:07:40.512099 1130949 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:07:38.041290 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:40.041569 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:38.373789 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:40.374369 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:40.513314 1130949 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:07:40.563257 1130949 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
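	Note: the 457-byte conflist copied above is not reproduced in the log. A hypothetical bridge CNI configuration of the same general shape (the plugin set, bridge name, and subnet here are assumptions, not values from this run):

	    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF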
	I0328 01:07:40.627024 1130949 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 01:07:40.627097 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:40.627137 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-808809 minikube.k8s.io/updated_at=2024_03_28T01_07_40_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=embed-certs-808809 minikube.k8s.io/primary=true
	I0328 01:07:40.928916 1130949 ops.go:34] apiserver oom_adj: -16
	I0328 01:07:40.929138 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:41.429797 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:41.930103 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:42.429366 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:42.540932 1131600 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:44.035055 1131600 pod_ready.go:81] duration metric: took 4m0.000860608s for pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace to be "Ready" ...
	E0328 01:07:44.035094 1131600 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-w4ww4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0328 01:07:44.035124 1131600 pod_ready.go:38] duration metric: took 4m14.608998431s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:07:44.035180 1131600 kubeadm.go:591] duration metric: took 4m23.470228903s to restartPrimaryControlPlane
	W0328 01:07:44.035292 1131600 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:07:44.035344 1131600 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:07:42.375179 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:44.876120 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:42.929464 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:43.429369 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:43.929241 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:44.429904 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:44.930251 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:45.429816 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:45.930177 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:46.429416 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:46.929152 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:47.429708 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:49.021732 1131323 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0328 01:07:49.021890 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:07:49.022195 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
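	Note: the failing [kubelet-check] above can be inspected directly on the node; a minimal sketch using the same endpoint and unit the log references:

	    # the health endpoint kubeadm polls during [kubelet-check]
	    curl -sSL http://localhost:10248/healthz
	    # if the connection is refused, inspect the kubelet unit itself
	    sudo systemctl status kubelet
	    sudo journalctl -u kubelet -n 400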
	I0328 01:07:47.373358 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:49.872482 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:47.929139 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:48.429732 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:48.930207 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:49.429230 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:49.929298 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:50.429919 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:50.929364 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:51.429403 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:51.929356 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:52.429410 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:52.929894 1130949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:53.043365 1130949 kubeadm.go:1107] duration metric: took 12.416334145s to wait for elevateKubeSystemPrivileges
	W0328 01:07:53.043410 1130949 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 01:07:53.043419 1130949 kubeadm.go:393] duration metric: took 5m13.709259014s to StartCluster
	I0328 01:07:53.043445 1130949 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:07:53.043560 1130949 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:07:53.045798 1130949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:07:53.046158 1130949 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 01:07:53.047867 1130949 out.go:177] * Verifying Kubernetes components...
	I0328 01:07:53.046201 1130949 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 01:07:53.046412 1130949 config.go:182] Loaded profile config "embed-certs-808809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:07:53.049163 1130949 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-808809"
	I0328 01:07:53.049175 1130949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:07:53.049195 1130949 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-808809"
	W0328 01:07:53.049204 1130949 addons.go:243] addon storage-provisioner should already be in state true
	I0328 01:07:53.049230 1130949 host.go:66] Checking if "embed-certs-808809" exists ...
	I0328 01:07:53.049205 1130949 addons.go:69] Setting default-storageclass=true in profile "embed-certs-808809"
	I0328 01:07:53.049250 1130949 addons.go:69] Setting metrics-server=true in profile "embed-certs-808809"
	I0328 01:07:53.049271 1130949 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-808809"
	I0328 01:07:53.049309 1130949 addons.go:234] Setting addon metrics-server=true in "embed-certs-808809"
	W0328 01:07:53.049327 1130949 addons.go:243] addon metrics-server should already be in state true
	I0328 01:07:53.049371 1130949 host.go:66] Checking if "embed-certs-808809" exists ...
	I0328 01:07:53.049530 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.049569 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.049696 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.049729 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.049795 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.049838 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.067042 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36143
	I0328 01:07:53.067078 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33653
	I0328 01:07:53.067536 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.067599 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.068156 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.068184 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.068289 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.068315 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.068583 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.068669 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.069095 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.069121 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.069245 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.069276 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.069991 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43737
	I0328 01:07:53.070509 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.071078 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.071103 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.071480 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.071705 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.075617 1130949 addons.go:234] Setting addon default-storageclass=true in "embed-certs-808809"
	W0328 01:07:53.075659 1130949 addons.go:243] addon default-storageclass should already be in state true
	I0328 01:07:53.075703 1130949 host.go:66] Checking if "embed-certs-808809" exists ...
	I0328 01:07:53.075982 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.076011 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.085991 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I0328 01:07:53.086508 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.086724 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36033
	I0328 01:07:53.087105 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.087122 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.087158 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.087646 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.087667 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.087706 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.087922 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.088031 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.088225 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.089941 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:07:53.090168 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:07:53.091945 1130949 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:07:53.093023 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41613
	I0328 01:07:53.093537 1130949 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:07:53.093553 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 01:07:53.093563 1130949 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 01:07:53.095147 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 01:07:53.095165 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 01:07:53.093574 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:07:53.095185 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:07:53.093939 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.096301 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.096322 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.096662 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.097251 1130949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:07:53.097306 1130949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:07:53.098907 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.099014 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.099513 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:07:53.099546 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.099996 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:07:53.100126 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:07:53.100177 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:07:53.100187 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.100287 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:07:53.100392 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:07:53.100470 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:07:53.100576 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:07:53.100709 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:07:53.100796 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:07:53.114056 1130949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41121
	I0328 01:07:53.114680 1130949 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:07:53.115279 1130949 main.go:141] libmachine: Using API Version  1
	I0328 01:07:53.115313 1130949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:07:53.115721 1130949 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:07:53.116061 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetState
	I0328 01:07:53.118022 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .DriverName
	I0328 01:07:53.118348 1130949 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 01:07:53.118370 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 01:07:53.118391 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHHostname
	I0328 01:07:53.121337 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.121699 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:d2", ip: ""} in network mk-embed-certs-808809: {Iface:virbr4 ExpiryTime:2024-03-28 02:02:25 +0000 UTC Type:0 Mac:52:54:00:60:d4:d2 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:embed-certs-808809 Clientid:01:52:54:00:60:d4:d2}
	I0328 01:07:53.121728 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | domain embed-certs-808809 has defined IP address 192.168.72.210 and MAC address 52:54:00:60:d4:d2 in network mk-embed-certs-808809
	I0328 01:07:53.121906 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHPort
	I0328 01:07:53.122084 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHKeyPath
	I0328 01:07:53.122266 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .GetSSHUsername
	I0328 01:07:53.122414 1130949 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/embed-certs-808809/id_rsa Username:docker}
	I0328 01:07:53.242121 1130949 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:07:53.267118 1130949 node_ready.go:35] waiting up to 6m0s for node "embed-certs-808809" to be "Ready" ...
	I0328 01:07:53.276640 1130949 node_ready.go:49] node "embed-certs-808809" has status "Ready":"True"
	I0328 01:07:53.276670 1130949 node_ready.go:38] duration metric: took 9.513599ms for node "embed-certs-808809" to be "Ready" ...
	I0328 01:07:53.276683 1130949 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:07:53.283091 1130949 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-2rn6k" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:53.325201 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 01:07:53.325234 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 01:07:53.341335 1130949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 01:07:53.361084 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 01:07:53.361109 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 01:07:53.393089 1130949 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:07:53.393116 1130949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 01:07:53.419245 1130949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:07:53.445663 1130949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
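	The manifests applied above install metrics-server as an aggregated API plus its Deployment and Service in kube-system. A minimal way to spot-check that registration afterwards, assuming the standard v1beta1.metrics.k8s.io APIService name and that the kubeconfig context matches the profile name:

	    kubectl --context embed-certs-808809 get apiservice v1beta1.metrics.k8s.io
	    kubectl --context embed-certs-808809 -n kube-system get deploy metrics-server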
	I0328 01:07:53.515515 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:53.515555 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:53.515871 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:53.515891 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:53.515901 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:53.515910 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:53.516173 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:53.516253 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:53.516212 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:53.527854 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:53.527882 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:53.528152 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:53.528173 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:53.528220 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.159164 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159192 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159264 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159292 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159523 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.159597 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.159619 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.159637 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.159648 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159658 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159660 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.159667 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.159688 1130949 main.go:141] libmachine: Making call to close driver server
	I0328 01:07:54.159696 1130949 main.go:141] libmachine: (embed-certs-808809) Calling .Close
	I0328 01:07:54.159981 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.160037 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.160056 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.160062 1130949 addons.go:470] Verifying addon metrics-server=true in "embed-certs-808809"
	I0328 01:07:54.160088 1130949 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:07:54.160090 1130949 main.go:141] libmachine: (embed-certs-808809) DBG | Closing plugin on server side
	I0328 01:07:54.160106 1130949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:07:54.162879 1130949 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0328 01:07:54.022449 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:07:54.022704 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:07:52.372314 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:54.372913 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:54.164263 1130949 addons.go:505] duration metric: took 1.11806212s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0328 01:07:55.294728 1130949 pod_ready.go:102] pod "coredns-76f75df574-2rn6k" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:55.790690 1130949 pod_ready.go:92] pod "coredns-76f75df574-2rn6k" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.790717 1130949 pod_ready.go:81] duration metric: took 2.50759161s for pod "coredns-76f75df574-2rn6k" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.790726 1130949 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-pgcdh" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.796249 1130949 pod_ready.go:92] pod "coredns-76f75df574-pgcdh" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.796279 1130949 pod_ready.go:81] duration metric: took 5.54233ms for pod "coredns-76f75df574-pgcdh" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.796291 1130949 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.801226 1130949 pod_ready.go:92] pod "etcd-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.801254 1130949 pod_ready.go:81] duration metric: took 4.956106ms for pod "etcd-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.801263 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.814571 1130949 pod_ready.go:92] pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.814599 1130949 pod_ready.go:81] duration metric: took 13.328662ms for pod "kube-apiserver-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.814613 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.825995 1130949 pod_ready.go:92] pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:55.826022 1130949 pod_ready.go:81] duration metric: took 11.401096ms for pod "kube-controller-manager-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:55.826035 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tjbhs" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.188116 1130949 pod_ready.go:92] pod "kube-proxy-tjbhs" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:56.188147 1130949 pod_ready.go:81] duration metric: took 362.103962ms for pod "kube-proxy-tjbhs" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.188161 1130949 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.588294 1130949 pod_ready.go:92] pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:56.588334 1130949 pod_ready.go:81] duration metric: took 400.16517ms for pod "kube-scheduler-embed-certs-808809" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:56.588347 1130949 pod_ready.go:38] duration metric: took 3.311651338s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:07:56.588369 1130949 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:07:56.588445 1130949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:07:56.606404 1130949 api_server.go:72] duration metric: took 3.560197315s to wait for apiserver process to appear ...
	I0328 01:07:56.606435 1130949 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:07:56.606460 1130949 api_server.go:253] Checking apiserver healthz at https://192.168.72.210:8443/healthz ...
	I0328 01:07:56.612218 1130949 api_server.go:279] https://192.168.72.210:8443/healthz returned 200:
	ok
	I0328 01:07:56.613459 1130949 api_server.go:141] control plane version: v1.29.3
	I0328 01:07:56.613481 1130949 api_server.go:131] duration metric: took 7.039378ms to wait for apiserver health ...
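	The healthz probe logged here can be reproduced by hand against the same endpoint; a minimal sketch, assuming the cluster's self-signed certificate is skipped with -k:

	    curl -sk https://192.168.72.210:8443/healthz
	    # expected response when the apiserver is healthy: ok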
	I0328 01:07:56.613490 1130949 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:07:56.793192 1130949 system_pods.go:59] 9 kube-system pods found
	I0328 01:07:56.793227 1130949 system_pods.go:61] "coredns-76f75df574-2rn6k" [2a77c778-dd83-4e2e-b45a-ca16e3922b45] Running
	I0328 01:07:56.793232 1130949 system_pods.go:61] "coredns-76f75df574-pgcdh" [52452b24-490e-4999-b700-198c6f9b2fa1] Running
	I0328 01:07:56.793236 1130949 system_pods.go:61] "etcd-embed-certs-808809" [cba526ab-c8ca-4b90-abb8-533d1bf6cb4f] Running
	I0328 01:07:56.793239 1130949 system_pods.go:61] "kube-apiserver-embed-certs-808809" [02934ac6-1e07-4cbe-ba2f-4149e59d6044] Running
	I0328 01:07:56.793243 1130949 system_pods.go:61] "kube-controller-manager-embed-certs-808809" [b3350ff9-7abb-407e-939c-fcde61186c17] Running
	I0328 01:07:56.793246 1130949 system_pods.go:61] "kube-proxy-tjbhs" [cdb30ca1-5165-4e24-888a-df79af7987d0] Running
	I0328 01:07:56.793249 1130949 system_pods.go:61] "kube-scheduler-embed-certs-808809" [5b52a22d-6ffb-468b-b7b9-8f89b0dee3b8] Running
	I0328 01:07:56.793255 1130949 system_pods.go:61] "metrics-server-57f55c9bc5-bqbfl" [8434fd7d-838b-4cf2-96a3-e4d613633871] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:07:56.793260 1130949 system_pods.go:61] "storage-provisioner" [20c1951e-7da8-4025-bbcf-2da60f87f3ab] Running
	I0328 01:07:56.793268 1130949 system_pods.go:74] duration metric: took 179.77213ms to wait for pod list to return data ...
	I0328 01:07:56.793275 1130949 default_sa.go:34] waiting for default service account to be created ...
	I0328 01:07:56.988234 1130949 default_sa.go:45] found service account: "default"
	I0328 01:07:56.988274 1130949 default_sa.go:55] duration metric: took 194.984089ms for default service account to be created ...
	I0328 01:07:56.988288 1130949 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 01:07:57.192153 1130949 system_pods.go:86] 9 kube-system pods found
	I0328 01:07:57.192188 1130949 system_pods.go:89] "coredns-76f75df574-2rn6k" [2a77c778-dd83-4e2e-b45a-ca16e3922b45] Running
	I0328 01:07:57.192194 1130949 system_pods.go:89] "coredns-76f75df574-pgcdh" [52452b24-490e-4999-b700-198c6f9b2fa1] Running
	I0328 01:07:57.192200 1130949 system_pods.go:89] "etcd-embed-certs-808809" [cba526ab-c8ca-4b90-abb8-533d1bf6cb4f] Running
	I0328 01:07:57.192205 1130949 system_pods.go:89] "kube-apiserver-embed-certs-808809" [02934ac6-1e07-4cbe-ba2f-4149e59d6044] Running
	I0328 01:07:57.192210 1130949 system_pods.go:89] "kube-controller-manager-embed-certs-808809" [b3350ff9-7abb-407e-939c-fcde61186c17] Running
	I0328 01:07:57.192214 1130949 system_pods.go:89] "kube-proxy-tjbhs" [cdb30ca1-5165-4e24-888a-df79af7987d0] Running
	I0328 01:07:57.192218 1130949 system_pods.go:89] "kube-scheduler-embed-certs-808809" [5b52a22d-6ffb-468b-b7b9-8f89b0dee3b8] Running
	I0328 01:07:57.192225 1130949 system_pods.go:89] "metrics-server-57f55c9bc5-bqbfl" [8434fd7d-838b-4cf2-96a3-e4d613633871] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:07:57.192230 1130949 system_pods.go:89] "storage-provisioner" [20c1951e-7da8-4025-bbcf-2da60f87f3ab] Running
	I0328 01:07:57.192239 1130949 system_pods.go:126] duration metric: took 203.942878ms to wait for k8s-apps to be running ...
	I0328 01:07:57.192249 1130949 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:07:57.192301 1130949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:07:57.209840 1130949 system_svc.go:56] duration metric: took 17.576605ms WaitForService to wait for kubelet
	I0328 01:07:57.209883 1130949 kubeadm.go:576] duration metric: took 4.163683877s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:07:57.209918 1130949 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:07:57.388321 1130949 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:07:57.388347 1130949 node_conditions.go:123] node cpu capacity is 2
	I0328 01:07:57.388357 1130949 node_conditions.go:105] duration metric: took 178.433633ms to run NodePressure ...
	I0328 01:07:57.388370 1130949 start.go:240] waiting for startup goroutines ...
	I0328 01:07:57.388377 1130949 start.go:245] waiting for cluster config update ...
	I0328 01:07:57.388387 1130949 start.go:254] writing updated cluster config ...
	I0328 01:07:57.388784 1130949 ssh_runner.go:195] Run: rm -f paused
	I0328 01:07:57.446699 1130949 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0328 01:07:57.448951 1130949 out.go:177] * Done! kubectl is now configured to use "embed-certs-808809" cluster and "default" namespace by default
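	With the embed-certs-808809 cluster configured as the current context, the bring-up can be spot-checked directly; a minimal check, assuming the context name matches the profile name shown above:

	    kubectl --context embed-certs-808809 get nodes
	    kubectl --context embed-certs-808809 -n kube-system get pods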
	I0328 01:07:56.373123 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:07:58.872454 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:04.023273 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:08:04.023535 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:08:01.372711 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:03.877734 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:06.374031 1130827 pod_ready.go:102] pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace has status "Ready":"False"
	I0328 01:08:07.366164 1130827 pod_ready.go:81] duration metric: took 4m0.000887668s for pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace to be "Ready" ...
	E0328 01:08:07.366245 1130827 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-cvnrj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0328 01:08:07.366271 1130827 pod_ready.go:38] duration metric: took 4m7.906522585s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:08:07.366301 1130827 kubeadm.go:591] duration metric: took 4m15.27169704s to restartPrimaryControlPlane
	W0328 01:08:07.366368 1130827 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0328 01:08:07.366406 1130827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:08:16.281280 1131600 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.245904746s)
	I0328 01:08:16.281365 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:08:16.298463 1131600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:08:16.310406 1131600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:08:16.321387 1131600 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:08:16.321415 1131600 kubeadm.go:156] found existing configuration files:
	
	I0328 01:08:16.321475 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0328 01:08:16.331965 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:08:16.332033 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:08:16.343030 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0328 01:08:16.353193 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:08:16.353254 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:08:16.363865 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0328 01:08:16.374276 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:08:16.374346 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:08:16.385300 1131600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0328 01:08:16.396118 1131600 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:08:16.396181 1131600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
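	The four checks above follow one pattern: a kubeconfig file is kept only if it already points at the expected control-plane endpoint, and removed otherwise. A condensed sketch of that cleanup loop, using the same endpoint and file list as the log:

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      if ! sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f"; then
	        sudo rm -f "/etc/kubernetes/$f"
	      fi
	    done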
	I0328 01:08:16.406896 1131600 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:08:16.626615 1131600 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:08:24.024091 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:08:24.024388 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:08:25.420974 1131600 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0328 01:08:25.421059 1131600 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:08:25.421154 1131600 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:08:25.421300 1131600 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:08:25.421547 1131600 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:08:25.421649 1131600 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:08:25.423435 1131600 out.go:204]   - Generating certificates and keys ...
	I0328 01:08:25.423549 1131600 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:08:25.423630 1131600 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:08:25.423749 1131600 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:08:25.423844 1131600 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:08:25.423956 1131600 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:08:25.424058 1131600 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:08:25.424166 1131600 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:08:25.424260 1131600 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:08:25.424375 1131600 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:08:25.424489 1131600 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:08:25.424552 1131600 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:08:25.424642 1131600 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:08:25.424700 1131600 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:08:25.424765 1131600 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:08:25.424832 1131600 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:08:25.424920 1131600 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:08:25.424982 1131600 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:08:25.425106 1131600 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:08:25.425207 1131600 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:08:25.426863 1131600 out.go:204]   - Booting up control plane ...
	I0328 01:08:25.427001 1131600 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:08:25.427108 1131600 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:08:25.427205 1131600 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:08:25.427327 1131600 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:08:25.427431 1131600 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:08:25.427491 1131600 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:08:25.427686 1131600 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:08:25.427784 1131600 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003000 seconds
	I0328 01:08:25.427897 1131600 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 01:08:25.428032 1131600 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 01:08:25.428109 1131600 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 01:08:25.428325 1131600 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-283961 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 01:08:25.428408 1131600 kubeadm.go:309] [bootstrap-token] Using token: g6jusr.8nbqw788gjbu8fwz
	I0328 01:08:25.430595 1131600 out.go:204]   - Configuring RBAC rules ...
	I0328 01:08:25.430734 1131600 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 01:08:25.430837 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 01:08:25.430981 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 01:08:25.431163 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 01:08:25.431357 1131600 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 01:08:25.431481 1131600 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 01:08:25.431670 1131600 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 01:08:25.431726 1131600 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 01:08:25.431767 1131600 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 01:08:25.431774 1131600 kubeadm.go:309] 
	I0328 01:08:25.431819 1131600 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 01:08:25.431829 1131600 kubeadm.go:309] 
	I0328 01:08:25.431893 1131600 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 01:08:25.431900 1131600 kubeadm.go:309] 
	I0328 01:08:25.431934 1131600 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 01:08:25.432028 1131600 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 01:08:25.432089 1131600 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 01:08:25.432114 1131600 kubeadm.go:309] 
	I0328 01:08:25.432178 1131600 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 01:08:25.432186 1131600 kubeadm.go:309] 
	I0328 01:08:25.432245 1131600 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 01:08:25.432255 1131600 kubeadm.go:309] 
	I0328 01:08:25.432342 1131600 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 01:08:25.432454 1131600 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 01:08:25.432566 1131600 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 01:08:25.432576 1131600 kubeadm.go:309] 
	I0328 01:08:25.432719 1131600 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 01:08:25.432812 1131600 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 01:08:25.432825 1131600 kubeadm.go:309] 
	I0328 01:08:25.432914 1131600 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token g6jusr.8nbqw788gjbu8fwz \
	I0328 01:08:25.433018 1131600 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 \
	I0328 01:08:25.433052 1131600 kubeadm.go:309] 	--control-plane 
	I0328 01:08:25.433058 1131600 kubeadm.go:309] 
	I0328 01:08:25.433135 1131600 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 01:08:25.433143 1131600 kubeadm.go:309] 
	I0328 01:08:25.433222 1131600 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token g6jusr.8nbqw788gjbu8fwz \
	I0328 01:08:25.433318 1131600 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 
	I0328 01:08:25.433337 1131600 cni.go:84] Creating CNI manager for ""
	I0328 01:08:25.433346 1131600 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:08:25.434943 1131600 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:08:25.436103 1131600 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:08:25.483149 1131600 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:08:25.508422 1131600 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 01:08:25.508514 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:25.508518 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-283961 minikube.k8s.io/updated_at=2024_03_28T01_08_25_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=default-k8s-diff-port-283961 minikube.k8s.io/primary=true
	I0328 01:08:25.537955 1131600 ops.go:34] apiserver oom_adj: -16
	I0328 01:08:25.738462 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:26.239473 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:26.739478 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:27.238883 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:27.738830 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:28.239281 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:28.738643 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:29.238703 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:29.739025 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:30.239127 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:30.739473 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:31.239461 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:31.739480 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:32.239525 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:32.738543 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:33.239468 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:33.739475 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:34.238558 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:34.739550 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:35.239400 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:35.738766 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:36.239384 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:36.738797 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:37.238736 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:37.739543 1131600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:37.850963 1131600 kubeadm.go:1107] duration metric: took 12.342521507s to wait for elevateKubeSystemPrivileges
	W0328 01:08:37.851011 1131600 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 01:08:37.851024 1131600 kubeadm.go:393] duration metric: took 5m17.339661641s to StartCluster
	I0328 01:08:37.851048 1131600 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:08:37.851164 1131600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:08:37.853862 1131600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:08:37.854264 1131600 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.224 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 01:08:37.856170 1131600 out.go:177] * Verifying Kubernetes components...
	I0328 01:08:37.854341 1131600 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 01:08:37.854447 1131600 config.go:182] Loaded profile config "default-k8s-diff-port-283961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 01:08:37.857860 1131600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:08:37.857864 1131600 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-283961"
	I0328 01:08:37.857878 1131600 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-283961"
	I0328 01:08:37.857885 1131600 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-283961"
	I0328 01:08:37.857909 1131600 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-283961"
	I0328 01:08:37.857912 1131600 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-283961"
	W0328 01:08:37.857923 1131600 addons.go:243] addon storage-provisioner should already be in state true
	I0328 01:08:37.857928 1131600 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-283961"
	W0328 01:08:37.857941 1131600 addons.go:243] addon metrics-server should already be in state true
	I0328 01:08:37.857970 1131600 host.go:66] Checking if "default-k8s-diff-port-283961" exists ...
	I0328 01:08:37.857983 1131600 host.go:66] Checking if "default-k8s-diff-port-283961" exists ...
	I0328 01:08:37.858330 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.858363 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.858403 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.858436 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.858335 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.858509 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.881197 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32799
	I0328 01:08:37.881230 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35101
	I0328 01:08:37.881244 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0328 01:08:37.881857 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.881882 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.882021 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.882460 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.882482 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.882523 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.882540 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.882585 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.882601 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.882934 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.882992 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.883007 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.883239 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.883592 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.883620 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.883625 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.883644 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.887335 1131600 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-283961"
	W0328 01:08:37.887359 1131600 addons.go:243] addon default-storageclass should already be in state true
	I0328 01:08:37.887390 1131600 host.go:66] Checking if "default-k8s-diff-port-283961" exists ...
	I0328 01:08:37.887745 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.887779 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.901416 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37383
	I0328 01:08:37.901909 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.902530 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.902559 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.902967 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.903211 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.904529 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36633
	I0328 01:08:37.905034 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.905268 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:08:37.907486 1131600 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:08:37.905802 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.909062 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.909180 1131600 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:08:37.909196 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 01:08:37.909218 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:08:37.909555 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.909794 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.911251 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33539
	I0328 01:08:37.911845 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.911995 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:08:37.913838 1131600 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 01:08:37.912457 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.913039 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.913804 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:08:37.915256 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.915268 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:08:37.915288 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 01:08:37.915297 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.915303 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 01:08:37.915321 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:08:37.915492 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:08:37.915674 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:08:37.915894 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:08:37.916689 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.917364 1131600 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:08:37.917410 1131600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:08:37.918302 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.918651 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:08:37.918678 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.918944 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:08:37.919117 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:08:37.919267 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:08:37.919386 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:08:37.935233 1131600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45659
	I0328 01:08:37.935750 1131600 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:08:37.936283 1131600 main.go:141] libmachine: Using API Version  1
	I0328 01:08:37.936301 1131600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:08:37.936691 1131600 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:08:37.936872 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetState
	I0328 01:08:37.938736 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .DriverName
	I0328 01:08:37.939016 1131600 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 01:08:37.939042 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 01:08:37.939065 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHHostname
	I0328 01:08:37.941653 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.941967 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:df:6f", ip: ""} in network mk-default-k8s-diff-port-283961: {Iface:virbr1 ExpiryTime:2024-03-28 02:03:06 +0000 UTC Type:0 Mac:52:54:00:c4:df:6f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:default-k8s-diff-port-283961 Clientid:01:52:54:00:c4:df:6f}
	I0328 01:08:37.941991 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | domain default-k8s-diff-port-283961 has defined IP address 192.168.39.224 and MAC address 52:54:00:c4:df:6f in network mk-default-k8s-diff-port-283961
	I0328 01:08:37.942199 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHPort
	I0328 01:08:37.942405 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHKeyPath
	I0328 01:08:37.942575 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .GetSSHUsername
	I0328 01:08:37.942761 1131600 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/default-k8s-diff-port-283961/id_rsa Username:docker}
	I0328 01:08:38.109817 1131600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:08:38.134996 1131600 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-283961" to be "Ready" ...
	I0328 01:08:38.158252 1131600 node_ready.go:49] node "default-k8s-diff-port-283961" has status "Ready":"True"
	I0328 01:08:38.158286 1131600 node_ready.go:38] duration metric: took 23.249221ms for node "default-k8s-diff-port-283961" to be "Ready" ...
	I0328 01:08:38.158305 1131600 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:08:38.170391 1131600 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-gdv5x" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:38.277223 1131600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:08:38.299923 1131600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 01:08:38.300686 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 01:08:38.300707 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 01:08:38.355800 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 01:08:38.355837 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 01:08:38.464742 1131600 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:08:38.464769 1131600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 01:08:38.542696 1131600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:08:39.644116 1131600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.344141889s)
	I0328 01:08:39.644184 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644189 1131600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.366934481s)
	I0328 01:08:39.644197 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644210 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644219 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644620 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.644644 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.644654 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644664 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644846 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.644865 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.644890 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.644905 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.644987 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.645004 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.645154 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.645171 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.708104 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.708143 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.708543 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.708567 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.739487 1131600 pod_ready.go:92] pod "coredns-76f75df574-gdv5x" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.739515 1131600 pod_ready.go:81] duration metric: took 1.569088177s for pod "coredns-76f75df574-gdv5x" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.739526 1131600 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-qzcfp" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.797314 1131600 pod_ready.go:92] pod "coredns-76f75df574-qzcfp" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.797347 1131600 pod_ready.go:81] duration metric: took 57.813218ms for pod "coredns-76f75df574-qzcfp" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.797366 1131600 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.830784 1131600 pod_ready.go:92] pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.830865 1131600 pod_ready.go:81] duration metric: took 33.488753ms for pod "etcd-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.830886 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.852459 1131600 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.852489 1131600 pod_ready.go:81] duration metric: took 21.594748ms for pod "kube-apiserver-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.852501 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.862630 1131600 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:39.862658 1131600 pod_ready.go:81] duration metric: took 10.149867ms for pod "kube-controller-manager-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.862674 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-js7j2" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:39.893124 1131600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.350363727s)
	I0328 01:08:39.893191 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.893216 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.893559 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Closing plugin on server side
	I0328 01:08:39.893568 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.893617 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.893634 1131600 main.go:141] libmachine: Making call to close driver server
	I0328 01:08:39.893646 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) Calling .Close
	I0328 01:08:39.893985 1131600 main.go:141] libmachine: (default-k8s-diff-port-283961) DBG | Closing plugin on server side
	I0328 01:08:39.893985 1131600 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:08:39.894013 1131600 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:08:39.894031 1131600 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-283961"
	I0328 01:08:39.896978 1131600 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0328 01:08:39.898636 1131600 addons.go:505] duration metric: took 2.044292782s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0328 01:08:40.138962 1131600 pod_ready.go:92] pod "kube-proxy-js7j2" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:40.138994 1131600 pod_ready.go:81] duration metric: took 276.313147ms for pod "kube-proxy-js7j2" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:40.139006 1131600 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:40.538892 1131600 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:40.538917 1131600 pod_ready.go:81] duration metric: took 399.903327ms for pod "kube-scheduler-default-k8s-diff-port-283961" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:40.538925 1131600 pod_ready.go:38] duration metric: took 2.380606168s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:08:40.538943 1131600 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:08:40.539009 1131600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:08:40.561639 1131600 api_server.go:72] duration metric: took 2.707321816s to wait for apiserver process to appear ...
	I0328 01:08:40.561681 1131600 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:08:40.561709 1131600 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8444/healthz ...
	I0328 01:08:40.568521 1131600 api_server.go:279] https://192.168.39.224:8444/healthz returned 200:
	ok
	I0328 01:08:40.570016 1131600 api_server.go:141] control plane version: v1.29.3
	I0328 01:08:40.570060 1131600 api_server.go:131] duration metric: took 8.369036ms to wait for apiserver health ...
	I0328 01:08:40.570071 1131600 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:08:39.696094 1130827 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.32965227s)
	I0328 01:08:39.696193 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:08:39.717556 1130827 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:08:39.730434 1130827 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:08:39.746521 1130827 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:08:39.746567 1130827 kubeadm.go:156] found existing configuration files:
	
	I0328 01:08:39.746644 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:08:39.758252 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:08:39.758352 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:08:39.771929 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:08:39.785312 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:08:39.785400 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:08:39.800685 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:08:39.814982 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:08:39.815073 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:08:39.828804 1130827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:08:39.841984 1130827 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:08:39.842074 1130827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:08:39.854502 1130827 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:08:40.089742 1130827 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:08:40.742900 1131600 system_pods.go:59] 9 kube-system pods found
	I0328 01:08:40.742938 1131600 system_pods.go:61] "coredns-76f75df574-gdv5x" [5b4b835c-ae9d-4eff-ab37-6ccb7e36a748] Running
	I0328 01:08:40.742945 1131600 system_pods.go:61] "coredns-76f75df574-qzcfp" [8e7bfa94-f249-4f7a-be7b-9a615810c956] Running
	I0328 01:08:40.742951 1131600 system_pods.go:61] "etcd-default-k8s-diff-port-283961" [eb26a2e2-882b-4bc0-9ca1-7fe88b1c6c7e] Running
	I0328 01:08:40.742958 1131600 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-283961" [1c31e7a3-507e-43ea-96cf-1d182a1e0875] Running
	I0328 01:08:40.742964 1131600 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-283961" [5bbc6650-b37c-4b10-bc6e-cceb214f8881] Running
	I0328 01:08:40.742968 1131600 system_pods.go:61] "kube-proxy-js7j2" [1f31a6ee-9417-4f4a-ba0b-fdbab6a9169d] Running
	I0328 01:08:40.742972 1131600 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-283961" [3a5691f7-d6d4-45e7-aea7-94fe1cfa541a] Running
	I0328 01:08:40.742980 1131600 system_pods.go:61] "metrics-server-57f55c9bc5-gkv67" [7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:08:40.742986 1131600 system_pods.go:61] "storage-provisioner" [cb80efe2-521f-45d5-84e7-f6dc216b4c6d] Running
	I0328 01:08:40.742998 1131600 system_pods.go:74] duration metric: took 172.918886ms to wait for pod list to return data ...
	I0328 01:08:40.743010 1131600 default_sa.go:34] waiting for default service account to be created ...
	I0328 01:08:40.939208 1131600 default_sa.go:45] found service account: "default"
	I0328 01:08:40.939255 1131600 default_sa.go:55] duration metric: took 196.220048ms for default service account to be created ...
	I0328 01:08:40.939266 1131600 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 01:08:41.144986 1131600 system_pods.go:86] 9 kube-system pods found
	I0328 01:08:41.145023 1131600 system_pods.go:89] "coredns-76f75df574-gdv5x" [5b4b835c-ae9d-4eff-ab37-6ccb7e36a748] Running
	I0328 01:08:41.145030 1131600 system_pods.go:89] "coredns-76f75df574-qzcfp" [8e7bfa94-f249-4f7a-be7b-9a615810c956] Running
	I0328 01:08:41.145034 1131600 system_pods.go:89] "etcd-default-k8s-diff-port-283961" [eb26a2e2-882b-4bc0-9ca1-7fe88b1c6c7e] Running
	I0328 01:08:41.145039 1131600 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-283961" [1c31e7a3-507e-43ea-96cf-1d182a1e0875] Running
	I0328 01:08:41.145043 1131600 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-283961" [5bbc6650-b37c-4b10-bc6e-cceb214f8881] Running
	I0328 01:08:41.145047 1131600 system_pods.go:89] "kube-proxy-js7j2" [1f31a6ee-9417-4f4a-ba0b-fdbab6a9169d] Running
	I0328 01:08:41.145051 1131600 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-283961" [3a5691f7-d6d4-45e7-aea7-94fe1cfa541a] Running
	I0328 01:08:41.145058 1131600 system_pods.go:89] "metrics-server-57f55c9bc5-gkv67" [7f0f5d0b-6821-44b6-8f3b-0bc0aeccc356] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:08:41.145062 1131600 system_pods.go:89] "storage-provisioner" [cb80efe2-521f-45d5-84e7-f6dc216b4c6d] Running
	I0328 01:08:41.145072 1131600 system_pods.go:126] duration metric: took 205.800485ms to wait for k8s-apps to be running ...
	I0328 01:08:41.145083 1131600 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:08:41.145131 1131600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:08:41.163220 1131600 system_svc.go:56] duration metric: took 18.120266ms WaitForService to wait for kubelet
	I0328 01:08:41.163255 1131600 kubeadm.go:576] duration metric: took 3.308947131s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:08:41.163280 1131600 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:08:41.339219 1131600 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:08:41.339247 1131600 node_conditions.go:123] node cpu capacity is 2
	I0328 01:08:41.339292 1131600 node_conditions.go:105] duration metric: took 176.004328ms to run NodePressure ...
	I0328 01:08:41.339306 1131600 start.go:240] waiting for startup goroutines ...
	I0328 01:08:41.339317 1131600 start.go:245] waiting for cluster config update ...
	I0328 01:08:41.339334 1131600 start.go:254] writing updated cluster config ...
	I0328 01:08:41.339656 1131600 ssh_runner.go:195] Run: rm -f paused
	I0328 01:08:41.399111 1131600 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0328 01:08:41.401360 1131600 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-283961" cluster and "default" namespace by default
	I0328 01:08:49.653091 1130827 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-beta.0
	I0328 01:08:49.653205 1130827 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:08:49.653327 1130827 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:08:49.653468 1130827 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:08:49.653576 1130827 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:08:49.653666 1130827 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:08:49.656419 1130827 out.go:204]   - Generating certificates and keys ...
	I0328 01:08:49.656503 1130827 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:08:49.656583 1130827 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:08:49.656669 1130827 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:08:49.656775 1130827 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:08:49.656903 1130827 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:08:49.656973 1130827 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:08:49.657057 1130827 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:08:49.657138 1130827 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:08:49.657246 1130827 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:08:49.657362 1130827 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:08:49.657415 1130827 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:08:49.657510 1130827 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:08:49.657601 1130827 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:08:49.657713 1130827 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:08:49.657811 1130827 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:08:49.657900 1130827 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:08:49.657980 1130827 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:08:49.658074 1130827 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:08:49.658160 1130827 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:08:49.659588 1130827 out.go:204]   - Booting up control plane ...
	I0328 01:08:49.659669 1130827 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:08:49.659771 1130827 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:08:49.659855 1130827 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:08:49.659962 1130827 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:08:49.660075 1130827 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:08:49.660139 1130827 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:08:49.660309 1130827 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0328 01:08:49.660426 1130827 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0328 01:08:49.660518 1130827 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.594495ms
	I0328 01:08:49.660610 1130827 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0328 01:08:49.660691 1130827 kubeadm.go:309] [api-check] The API server is healthy after 5.502996727s
	I0328 01:08:49.660830 1130827 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 01:08:49.660975 1130827 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 01:08:49.661028 1130827 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 01:08:49.661198 1130827 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-248059 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 01:08:49.661283 1130827 kubeadm.go:309] [bootstrap-token] Using token: 4jnfa0.q3dre6ogqbxtw8j0
	I0328 01:08:49.662907 1130827 out.go:204]   - Configuring RBAC rules ...
	I0328 01:08:49.663014 1130827 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 01:08:49.663090 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 01:08:49.663239 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 01:08:49.663379 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 01:08:49.663484 1130827 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 01:08:49.663576 1130827 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 01:08:49.663688 1130827 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 01:08:49.663750 1130827 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 01:08:49.663811 1130827 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 01:08:49.663820 1130827 kubeadm.go:309] 
	I0328 01:08:49.663871 1130827 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 01:08:49.663877 1130827 kubeadm.go:309] 
	I0328 01:08:49.663976 1130827 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 01:08:49.663984 1130827 kubeadm.go:309] 
	I0328 01:08:49.664004 1130827 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 01:08:49.664080 1130827 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 01:08:49.664144 1130827 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 01:08:49.664151 1130827 kubeadm.go:309] 
	I0328 01:08:49.664202 1130827 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 01:08:49.664209 1130827 kubeadm.go:309] 
	I0328 01:08:49.664246 1130827 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 01:08:49.664252 1130827 kubeadm.go:309] 
	I0328 01:08:49.664301 1130827 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 01:08:49.664370 1130827 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 01:08:49.664436 1130827 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 01:08:49.664444 1130827 kubeadm.go:309] 
	I0328 01:08:49.664515 1130827 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 01:08:49.664600 1130827 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 01:08:49.664607 1130827 kubeadm.go:309] 
	I0328 01:08:49.664678 1130827 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4jnfa0.q3dre6ogqbxtw8j0 \
	I0328 01:08:49.664764 1130827 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 \
	I0328 01:08:49.664783 1130827 kubeadm.go:309] 	--control-plane 
	I0328 01:08:49.664789 1130827 kubeadm.go:309] 
	I0328 01:08:49.664856 1130827 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 01:08:49.664863 1130827 kubeadm.go:309] 
	I0328 01:08:49.664938 1130827 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4jnfa0.q3dre6ogqbxtw8j0 \
	I0328 01:08:49.665073 1130827 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:eb47e943713ff45bf2b8bbea854932df17a469668ec0f8e79ea3bb626e56fc59 
	I0328 01:08:49.665117 1130827 cni.go:84] Creating CNI manager for ""
	I0328 01:08:49.665130 1130827 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0328 01:08:49.667556 1130827 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0328 01:08:49.668776 1130827 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0328 01:08:49.680262 1130827 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0328 01:08:49.701490 1130827 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 01:08:49.701557 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:49.701606 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-248059 minikube.k8s.io/updated_at=2024_03_28T01_08_49_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=no-preload-248059 minikube.k8s.io/primary=true
	I0328 01:08:49.734009 1130827 ops.go:34] apiserver oom_adj: -16
	I0328 01:08:49.901866 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:50.402635 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:50.902480 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:51.402417 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:51.902253 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:52.402411 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:52.901926 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:53.402394 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:53.902738 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:54.401878 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:54.901920 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:55.401878 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:55.902140 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:56.402863 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:56.901970 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:57.402088 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:57.901869 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:58.402056 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:58.902333 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:59.402753 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:08:59.902930 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:00.402623 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:00.901863 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:01.402264 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:01.902054 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:02.402212 1130827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:09:02.503310 1130827 kubeadm.go:1107] duration metric: took 12.80181586s to wait for elevateKubeSystemPrivileges
	W0328 01:09:02.503352 1130827 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 01:09:02.503362 1130827 kubeadm.go:393] duration metric: took 5m10.46697508s to StartCluster
	I0328 01:09:02.503380 1130827 settings.go:142] acquiring lock: {Name:mk594dd5d083edcb4c668528431bf610fe4ea638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:09:02.503482 1130827 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 01:09:02.505909 1130827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/kubeconfig: {Name:mk608bb0dce90860f51972a105c5a28b1a9b081e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:09:02.506302 1130827 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 01:09:02.508103 1130827 out.go:177] * Verifying Kubernetes components...
	I0328 01:09:02.506385 1130827 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 01:09:02.506502 1130827 config.go:182] Loaded profile config "no-preload-248059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0328 01:09:02.509509 1130827 addons.go:69] Setting default-storageclass=true in profile "no-preload-248059"
	I0328 01:09:02.509519 1130827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:09:02.509517 1130827 addons.go:69] Setting metrics-server=true in profile "no-preload-248059"
	I0328 01:09:02.509542 1130827 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-248059"
	I0328 01:09:02.509559 1130827 addons.go:234] Setting addon metrics-server=true in "no-preload-248059"
	W0328 01:09:02.509580 1130827 addons.go:243] addon metrics-server should already be in state true
	I0328 01:09:02.509509 1130827 addons.go:69] Setting storage-provisioner=true in profile "no-preload-248059"
	I0328 01:09:02.509623 1130827 host.go:66] Checking if "no-preload-248059" exists ...
	I0328 01:09:02.509636 1130827 addons.go:234] Setting addon storage-provisioner=true in "no-preload-248059"
	W0328 01:09:02.509690 1130827 addons.go:243] addon storage-provisioner should already be in state true
	I0328 01:09:02.509729 1130827 host.go:66] Checking if "no-preload-248059" exists ...
	I0328 01:09:02.510005 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.510009 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.510049 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.510050 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.510053 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.510085 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.528082 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42381
	I0328 01:09:02.528124 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43929
	I0328 01:09:02.528714 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.528738 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.529081 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34573
	I0328 01:09:02.529378 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.529397 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.529444 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.529464 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.529465 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.529791 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.529849 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.529948 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.529965 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.529950 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.530389 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.530437 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.530472 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.531004 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.531058 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.534108 1130827 addons.go:234] Setting addon default-storageclass=true in "no-preload-248059"
	W0328 01:09:02.534134 1130827 addons.go:243] addon default-storageclass should already be in state true
	I0328 01:09:02.534173 1130827 host.go:66] Checking if "no-preload-248059" exists ...
	I0328 01:09:02.534563 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.534592 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.546812 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0328 01:09:02.547478 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.547999 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.548031 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.548370 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.548616 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.549185 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37367
	I0328 01:09:02.549663 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.550365 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.550390 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.550772 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.550787 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:09:02.550977 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.553075 1130827 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 01:09:02.554750 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 01:09:02.554769 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 01:09:02.552577 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:09:02.554788 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:09:02.553550 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45153
	I0328 01:09:02.556534 1130827 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:09:02.555339 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.558480 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.563734 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:09:02.563773 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.563823 1130827 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:09:02.563846 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 01:09:02.563876 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:09:02.564584 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.564604 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.564633 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:09:02.564933 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:09:02.565025 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.565458 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:09:02.565593 1130827 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18485-1069254/.minikube/bin/docker-machine-driver-kvm2
	I0328 01:09:02.565617 1130827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 01:09:02.565745 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:09:02.569766 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.570083 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:09:02.570104 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.570413 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:09:02.570778 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:09:02.570975 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:09:02.571142 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:09:02.589503 1130827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41085
	I0328 01:09:02.590061 1130827 main.go:141] libmachine: () Calling .GetVersion
	I0328 01:09:02.590641 1130827 main.go:141] libmachine: Using API Version  1
	I0328 01:09:02.590661 1130827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 01:09:02.591065 1130827 main.go:141] libmachine: () Calling .GetMachineName
	I0328 01:09:02.591310 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetState
	I0328 01:09:02.593270 1130827 main.go:141] libmachine: (no-preload-248059) Calling .DriverName
	I0328 01:09:02.593665 1130827 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 01:09:02.593696 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 01:09:02.593717 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHHostname
	I0328 01:09:02.596796 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.597270 1130827 main.go:141] libmachine: (no-preload-248059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:33:e2", ip: ""} in network mk-no-preload-248059: {Iface:virbr3 ExpiryTime:2024-03-28 02:03:25 +0000 UTC Type:0 Mac:52:54:00:58:33:e2 Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:no-preload-248059 Clientid:01:52:54:00:58:33:e2}
	I0328 01:09:02.597298 1130827 main.go:141] libmachine: (no-preload-248059) DBG | domain no-preload-248059 has defined IP address 192.168.61.107 and MAC address 52:54:00:58:33:e2 in network mk-no-preload-248059
	I0328 01:09:02.597460 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHPort
	I0328 01:09:02.597637 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHKeyPath
	I0328 01:09:02.597807 1130827 main.go:141] libmachine: (no-preload-248059) Calling .GetSSHUsername
	I0328 01:09:02.597937 1130827 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/no-preload-248059/id_rsa Username:docker}
	I0328 01:09:02.705837 1130827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:09:02.727955 1130827 node_ready.go:35] waiting up to 6m0s for node "no-preload-248059" to be "Ready" ...
	I0328 01:09:02.737291 1130827 node_ready.go:49] node "no-preload-248059" has status "Ready":"True"
	I0328 01:09:02.737325 1130827 node_ready.go:38] duration metric: took 9.337953ms for node "no-preload-248059" to be "Ready" ...
	I0328 01:09:02.737338 1130827 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:09:02.741939 1130827 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.749157 1130827 pod_ready.go:92] pod "etcd-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.749192 1130827 pod_ready.go:81] duration metric: took 7.224004ms for pod "etcd-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.749205 1130827 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.755106 1130827 pod_ready.go:92] pod "kube-apiserver-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.755132 1130827 pod_ready.go:81] duration metric: took 5.919446ms for pod "kube-apiserver-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.755144 1130827 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.761123 1130827 pod_ready.go:92] pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.761171 1130827 pod_ready.go:81] duration metric: took 6.017877ms for pod "kube-controller-manager-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.761187 1130827 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.773958 1130827 pod_ready.go:92] pod "kube-scheduler-no-preload-248059" in "kube-system" namespace has status "Ready":"True"
	I0328 01:09:02.773983 1130827 pod_ready.go:81] duration metric: took 12.787671ms for pod "kube-scheduler-no-preload-248059" in "kube-system" namespace to be "Ready" ...
	I0328 01:09:02.773991 1130827 pod_ready.go:38] duration metric: took 36.637128ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:09:02.774008 1130827 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:09:02.774068 1130827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:09:02.794342 1130827 api_server.go:72] duration metric: took 287.989042ms to wait for apiserver process to appear ...
	I0328 01:09:02.794376 1130827 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:09:02.794408 1130827 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0328 01:09:02.826957 1130827 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0328 01:09:02.830377 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 01:09:02.830399 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 01:09:02.837250 1130827 api_server.go:141] control plane version: v1.30.0-beta.0
	I0328 01:09:02.837284 1130827 api_server.go:131] duration metric: took 42.898933ms to wait for apiserver health ...
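Note: the apiserver health wait above is a plain HTTPS probe of the /healthz endpoint at the address from the DHCP lease (192.168.61.107:8443). A minimal manual equivalent, assuming the kubeconfig context is named after the profile and that anonymous access to /healthz is still enabled (the upstream default), would be:

	# unauthenticated probe; -k skips verification of the cluster CA
	curl -k https://192.168.61.107:8443/healthz
	# or authenticated through the kubeconfig minikube wrote
	kubectl --context no-preload-248059 get --raw /healthz

A healthy apiserver answers with the literal string "ok", exactly as logged above.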
	I0328 01:09:02.837295 1130827 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:09:02.838515 1130827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:09:02.865482 1130827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 01:09:02.880510 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 01:09:02.880544 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 01:09:02.933895 1130827 system_pods.go:59] 4 kube-system pods found
	I0328 01:09:02.933958 1130827 system_pods.go:61] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:02.933967 1130827 system_pods.go:61] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:02.933973 1130827 system_pods.go:61] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:02.933977 1130827 system_pods.go:61] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:02.933984 1130827 system_pods.go:74] duration metric: took 96.68223ms to wait for pod list to return data ...
	I0328 01:09:02.933994 1130827 default_sa.go:34] waiting for default service account to be created ...
	I0328 01:09:02.939507 1130827 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 01:09:02.939538 1130827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 01:09:02.994042 1130827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
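Note: the addon flow above copies each manifest into the guest ("scp memory --> /etc/kubernetes/addons/...") and then applies them all with the guest's own kubectl binary against the in-VM kubeconfig. A rough host-side sketch of the same apply, assuming the manifests are available locally under the same names, would be:

	# hypothetical local copies of the addon manifests the log copied into the guest
	kubectl --context no-preload-248059 apply \
	  -f metrics-apiservice.yaml \
	  -f metrics-server-deployment.yaml \
	  -f metrics-server-rbac.yaml \
	  -f metrics-server-service.yaml

Running the guest's pinned kubectl (v1.30.0-beta.0) instead, as minikube does, avoids any dependence on the host client version.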
	I0328 01:09:03.160934 1130827 default_sa.go:45] found service account: "default"
	I0328 01:09:03.160971 1130827 default_sa.go:55] duration metric: took 226.968222ms for default service account to be created ...
	I0328 01:09:03.160982 1130827 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 01:09:03.396511 1130827 system_pods.go:86] 7 kube-system pods found
	I0328 01:09:03.396549 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending
	I0328 01:09:03.396554 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending
	I0328 01:09:03.396558 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:03.396562 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:03.396567 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:03.396575 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:09:03.396580 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:03.396601 1130827 retry.go:31] will retry after 288.008379ms: missing components: kube-dns, kube-proxy
	I0328 01:09:03.697645 1130827 system_pods.go:86] 7 kube-system pods found
	I0328 01:09:03.697688 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:03.697697 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:03.697704 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:03.697710 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:03.697720 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:03.697726 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:09:03.697730 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:03.697750 1130827 retry.go:31] will retry after 356.016468ms: missing components: kube-dns, kube-proxy
	I0328 01:09:03.962535 1130827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.097008499s)
	I0328 01:09:03.962614 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.962633 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.963093 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.963119 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.963129 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.963139 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.963406 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.963424 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.964335 1130827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.125788348s)
	I0328 01:09:03.964375 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.964384 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.964712 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:03.964740 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.964763 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.964776 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:03.964785 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:03.965054 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:03.965125 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:03.965142 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:04.002303 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:04.002340 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:04.002744 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:04.002766 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:04.062017 1130827 system_pods.go:86] 8 kube-system pods found
	I0328 01:09:04.062096 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.062111 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.062121 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:04.062132 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:04.062158 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:04.062172 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0328 01:09:04.062180 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:04.062192 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:09:04.062220 1130827 retry.go:31] will retry after 477.684804ms: missing components: kube-dns, kube-proxy
	I0328 01:09:04.574661 1130827 system_pods.go:86] 9 kube-system pods found
	I0328 01:09:04.574716 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.574728 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:04.574740 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:04.574748 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:04.574754 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:04.574761 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Running
	I0328 01:09:04.574768 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:04.574778 1130827 system_pods.go:89] "metrics-server-569cc877fc-frc5k" [d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:09:04.574799 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:09:04.574821 1130827 retry.go:31] will retry after 460.13955ms: missing components: kube-dns
	I0328 01:09:04.692708 1130827 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.69861394s)
	I0328 01:09:04.692782 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:04.692798 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:04.693323 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:04.693366 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:04.693376 1130827 main.go:141] libmachine: Making call to close driver server
	I0328 01:09:04.693384 1130827 main.go:141] libmachine: (no-preload-248059) Calling .Close
	I0328 01:09:04.693320 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:04.693818 1130827 main.go:141] libmachine: Successfully made call to close driver server
	I0328 01:09:04.693865 1130827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0328 01:09:04.693879 1130827 main.go:141] libmachine: (no-preload-248059) DBG | Closing plugin on server side
	I0328 01:09:04.693895 1130827 addons.go:470] Verifying addon metrics-server=true in "no-preload-248059"
	I0328 01:09:04.696310 1130827 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
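Note: "Verifying addon metrics-server=true" above only confirms the addon objects were created. A quick manual follow-up, assuming the context name matches the profile, would be:

	# deployment should eventually report 1/1 ready
	kubectl --context no-preload-248059 -n kube-system get deploy metrics-server
	# once the APIService is serving, node metrics become available
	kubectl --context no-preload-248059 top nodes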
	I0328 01:09:04.025791 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:09:04.026055 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:09:04.026065 1131323 kubeadm.go:309] 
	I0328 01:09:04.026124 1131323 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0328 01:09:04.026172 1131323 kubeadm.go:309] 		timed out waiting for the condition
	I0328 01:09:04.026181 1131323 kubeadm.go:309] 
	I0328 01:09:04.026221 1131323 kubeadm.go:309] 	This error is likely caused by:
	I0328 01:09:04.026279 1131323 kubeadm.go:309] 		- The kubelet is not running
	I0328 01:09:04.026401 1131323 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0328 01:09:04.026411 1131323 kubeadm.go:309] 
	I0328 01:09:04.026529 1131323 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0328 01:09:04.026586 1131323 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0328 01:09:04.026632 1131323 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0328 01:09:04.026640 1131323 kubeadm.go:309] 
	I0328 01:09:04.026758 1131323 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0328 01:09:04.026884 1131323 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0328 01:09:04.026902 1131323 kubeadm.go:309] 
	I0328 01:09:04.027061 1131323 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0328 01:09:04.027222 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0328 01:09:04.027335 1131323 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0328 01:09:04.027429 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0328 01:09:04.027537 1131323 kubeadm.go:309] 
	I0328 01:09:04.029027 1131323 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:09:04.029164 1131323 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0328 01:09:04.029284 1131323 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0328 01:09:04.029477 1131323 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
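Note: the failed init above points squarely at the kubelet on the old-k8s-version guest. Consolidating the troubleshooting commands kubeadm itself suggests, run over SSH inside the guest (the CRI-O socket path is the one printed in the log):

	# is the kubelet unit running at all?
	sudo systemctl status kubelet --no-pager
	# recent kubelet output, most useful lines are usually at the end
	sudo journalctl -xeu kubelet --no-pager | tail -n 100
	# any control-plane containers the runtime managed to start?
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause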
	
	I0328 01:09:04.029545 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0328 01:09:04.543275 1131323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:09:04.562572 1131323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:09:04.577013 1131323 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:09:04.577040 1131323 kubeadm.go:156] found existing configuration files:
	
	I0328 01:09:04.577102 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:09:04.590795 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:09:04.590885 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:09:04.604227 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:09:04.616720 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:09:04.616818 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:09:04.630095 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:09:04.643166 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:09:04.643259 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:09:04.658084 1131323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:09:04.671786 1131323 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:09:04.671874 1131323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
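Note: the block above is minikube's stale-config check before the retry: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if the endpoint is missing. Here none of the files exist, so every grep exits with status 2 and every rm is a no-op. A compact sketch of that loop in shell:

	# drop any kubeconfig that does not point at the expected endpoint
	for f in admin kubelet controller-manager scheduler; do
	  grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" 2>/dev/null \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done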
	I0328 01:09:04.685852 1131323 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:09:04.779013 1131323 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0328 01:09:04.779113 1131323 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:09:04.964178 1131323 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:09:04.964317 1131323 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:09:04.964463 1131323 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:09:05.181712 1131323 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:09:05.183644 1131323 out.go:204]   - Generating certificates and keys ...
	I0328 01:09:05.183759 1131323 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:09:05.183851 1131323 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:09:05.183962 1131323 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:09:05.184042 1131323 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0328 01:09:05.184156 1131323 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:09:05.184244 1131323 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0328 01:09:05.184337 1131323 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0328 01:09:05.184424 1131323 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:09:05.184535 1131323 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:09:05.184633 1131323 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:09:05.184683 1131323 kubeadm.go:309] [certs] Using the existing "sa" key
	I0328 01:09:05.184758 1131323 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:09:04.698039 1130827 addons.go:505] duration metric: took 2.191652421s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0328 01:09:05.044303 1130827 system_pods.go:86] 9 kube-system pods found
	I0328 01:09:05.044340 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:05.044348 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:09:05.044354 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:05.044360 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:05.044366 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:05.044369 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Running
	I0328 01:09:05.044373 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:05.044378 1130827 system_pods.go:89] "metrics-server-569cc877fc-frc5k" [d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:09:05.044387 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0328 01:09:05.044406 1130827 retry.go:31] will retry after 486.01075ms: missing components: kube-dns
	I0328 01:09:05.539158 1130827 system_pods.go:86] 9 kube-system pods found
	I0328 01:09:05.539204 1130827 system_pods.go:89] "coredns-7db6d8ff4d-8zzf5" [91f329ea-6d6d-45dc-ac77-40a2739249b4] Running
	I0328 01:09:05.539213 1130827 system_pods.go:89] "coredns-7db6d8ff4d-qtgp9" [aac5a4d0-acf3-426c-a81e-d129f94d58f3] Running
	I0328 01:09:05.539219 1130827 system_pods.go:89] "etcd-no-preload-248059" [df24a43f-e4f8-4ee2-a2d5-9a718a197670] Running
	I0328 01:09:05.539226 1130827 system_pods.go:89] "kube-apiserver-no-preload-248059" [60aa0336-e0a2-476a-9458-c76fb40a95e1] Running
	I0328 01:09:05.539232 1130827 system_pods.go:89] "kube-controller-manager-no-preload-248059" [3759c96b-4476-439f-bf97-ea6175e53272] Running
	I0328 01:09:05.539238 1130827 system_pods.go:89] "kube-proxy-g5f6g" [d9c30bc3-42b1-446f-838b-979489cf661d] Running
	I0328 01:09:05.539244 1130827 system_pods.go:89] "kube-scheduler-no-preload-248059" [63a844a3-9d1e-48b0-90eb-52d3d20f0de8] Running
	I0328 01:09:05.539255 1130827 system_pods.go:89] "metrics-server-569cc877fc-frc5k" [d1b84bf5-8f9e-4da6-8aea-568e9bb1a4dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0328 01:09:05.539260 1130827 system_pods.go:89] "storage-provisioner" [1dcee5b1-4531-4068-bce7-081d51602015] Running
	I0328 01:09:05.539274 1130827 system_pods.go:126] duration metric: took 2.37828469s to wait for k8s-apps to be running ...
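Note: the retry loop above polls the kube-system pod list until the components it flagged as missing (kube-dns, then kube-proxy) report Running. A declarative host-side equivalent, using the label selectors from the waits earlier in the log, would be:

	# block until CoreDNS and kube-proxy pods are Ready, up to 6 minutes
	kubectl --context no-preload-248059 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
	kubectl --context no-preload-248059 -n kube-system wait pod \
	  -l k8s-app=kube-proxy --for=condition=Ready --timeout=6m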
	I0328 01:09:05.539292 1130827 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:09:05.539362 1130827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:09:05.560593 1130827 system_svc.go:56] duration metric: took 21.288819ms WaitForService to wait for kubelet
	I0328 01:09:05.560628 1130827 kubeadm.go:576] duration metric: took 3.054281955s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:09:05.560657 1130827 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:09:05.564453 1130827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:09:05.564489 1130827 node_conditions.go:123] node cpu capacity is 2
	I0328 01:09:05.564502 1130827 node_conditions.go:105] duration metric: took 3.837449ms to run NodePressure ...
	I0328 01:09:05.564517 1130827 start.go:240] waiting for startup goroutines ...
	I0328 01:09:05.564527 1130827 start.go:245] waiting for cluster config update ...
	I0328 01:09:05.564542 1130827 start.go:254] writing updated cluster config ...
	I0328 01:09:05.564843 1130827 ssh_runner.go:195] Run: rm -f paused
	I0328 01:09:05.623218 1130827 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-beta.0 (minor skew: 1)
	I0328 01:09:05.625408 1130827 out.go:177] * Done! kubectl is now configured to use "no-preload-248059" cluster and "default" namespace by default
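Note: the final check above compares the host kubectl (1.29.3) against the cluster (1.30.0-beta.0). A skew of one minor version is within kubectl's supported range, so minikube only logs it informationally. To see both versions for this context yourself:

	# client and server versions side by side (JSON output is available on recent kubectl releases)
	kubectl --context no-preload-248059 version --output=json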
	I0328 01:09:05.587190 1131323 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:09:05.923219 1131323 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:09:06.087945 1131323 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:09:06.245638 1131323 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:09:06.266195 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:09:06.267461 1131323 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:09:06.267551 1131323 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:09:06.434155 1131323 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:09:06.436300 1131323 out.go:204]   - Booting up control plane ...
	I0328 01:09:06.436447 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:09:06.446573 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:09:06.447461 1131323 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:09:06.448313 1131323 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:09:06.450917 1131323 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:09:46.453199 1131323 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0328 01:09:46.453386 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:09:46.453643 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:09:51.454402 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:09:51.454665 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:10:01.455189 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:10:01.455417 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:10:21.456491 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:10:21.456726 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:11:01.456972 1131323 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0328 01:11:01.457256 1131323 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0328 01:11:01.457269 1131323 kubeadm.go:309] 
	I0328 01:11:01.457310 1131323 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0328 01:11:01.457404 1131323 kubeadm.go:309] 		timed out waiting for the condition
	I0328 01:11:01.457441 1131323 kubeadm.go:309] 
	I0328 01:11:01.457492 1131323 kubeadm.go:309] 	This error is likely caused by:
	I0328 01:11:01.457550 1131323 kubeadm.go:309] 		- The kubelet is not running
	I0328 01:11:01.457696 1131323 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0328 01:11:01.457708 1131323 kubeadm.go:309] 
	I0328 01:11:01.457856 1131323 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0328 01:11:01.457906 1131323 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0328 01:11:01.457935 1131323 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0328 01:11:01.457943 1131323 kubeadm.go:309] 
	I0328 01:11:01.458033 1131323 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0328 01:11:01.458139 1131323 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0328 01:11:01.458155 1131323 kubeadm.go:309] 
	I0328 01:11:01.458331 1131323 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0328 01:11:01.458483 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0328 01:11:01.458594 1131323 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0328 01:11:01.458707 1131323 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0328 01:11:01.458718 1131323 kubeadm.go:309] 
	I0328 01:11:01.459597 1131323 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:11:01.459737 1131323 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0328 01:11:01.459822 1131323 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0328 01:11:01.459962 1131323 kubeadm.go:393] duration metric: took 7m59.227261729s to StartCluster
	I0328 01:11:01.460023 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 01:11:01.460167 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 01:11:01.522644 1131323 cri.go:89] found id: ""
	I0328 01:11:01.522687 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.522700 1131323 logs.go:278] No container was found matching "kube-apiserver"
	I0328 01:11:01.522710 1131323 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 01:11:01.522782 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 01:11:01.567898 1131323 cri.go:89] found id: ""
	I0328 01:11:01.567928 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.567937 1131323 logs.go:278] No container was found matching "etcd"
	I0328 01:11:01.567945 1131323 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 01:11:01.568005 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 01:11:01.604782 1131323 cri.go:89] found id: ""
	I0328 01:11:01.604810 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.604819 1131323 logs.go:278] No container was found matching "coredns"
	I0328 01:11:01.604825 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 01:11:01.604935 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 01:11:01.642875 1131323 cri.go:89] found id: ""
	I0328 01:11:01.642908 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.642920 1131323 logs.go:278] No container was found matching "kube-scheduler"
	I0328 01:11:01.642929 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 01:11:01.642993 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 01:11:01.682186 1131323 cri.go:89] found id: ""
	I0328 01:11:01.682216 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.682223 1131323 logs.go:278] No container was found matching "kube-proxy"
	I0328 01:11:01.682241 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 01:11:01.682312 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 01:11:01.720654 1131323 cri.go:89] found id: ""
	I0328 01:11:01.720689 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.720697 1131323 logs.go:278] No container was found matching "kube-controller-manager"
	I0328 01:11:01.720704 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 01:11:01.720759 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 01:11:01.757340 1131323 cri.go:89] found id: ""
	I0328 01:11:01.757372 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.757383 1131323 logs.go:278] No container was found matching "kindnet"
	I0328 01:11:01.757392 1131323 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 01:11:01.757462 1131323 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 01:11:01.797426 1131323 cri.go:89] found id: ""
	I0328 01:11:01.797462 1131323 logs.go:276] 0 containers: []
	W0328 01:11:01.797473 1131323 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0328 01:11:01.797488 1131323 logs.go:123] Gathering logs for kubelet ...
	I0328 01:11:01.797506 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:11:01.859582 1131323 logs.go:123] Gathering logs for dmesg ...
	I0328 01:11:01.859623 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:11:01.876027 1131323 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:11:01.876073 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0328 01:11:01.966513 1131323 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0328 01:11:01.966539 1131323 logs.go:123] Gathering logs for CRI-O ...
	I0328 01:11:01.966557 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 01:11:02.084853 1131323 logs.go:123] Gathering logs for container status ...
	I0328 01:11:02.084894 1131323 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
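Note: the log-gathering phase above queries the CRI directly for container IDs per component and then falls back to a plain listing; every query comes back empty, which is consistent with the kubelet never having started any static pods. The same checks by hand on the guest:

	# per-component lookup, as logs.go does it
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl ps -a --quiet --name=etcd
	# full container listing as a fallback
	sudo crictl ps -a || sudo docker ps -a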
	W0328 01:11:02.127221 1131323 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0328 01:11:02.127288 1131323 out.go:239] * 
	W0328 01:11:02.127417 1131323 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0328 01:11:02.127456 1131323 out.go:239] * 
	W0328 01:11:02.128313 1131323 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 01:11:02.131916 1131323 out.go:177] 
	W0328 01:11:02.133288 1131323 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0328 01:11:02.133351 1131323 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0328 01:11:02.133381 1131323 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0328 01:11:02.134991 1131323 out.go:177] 
	
	
	==> CRI-O <==
	Mar 28 01:22:43 old-k8s-version-986088 crio[655]: time="2024-03-28 01:22:43.737919941Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711588963737898292,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=126a20d0-da0d-4169-ae8b-b7baeca9fe5a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:22:43 old-k8s-version-986088 crio[655]: time="2024-03-28 01:22:43.738482452Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94fd55ab-ed1e-4613-9196-c55bf4c0007c name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:22:43 old-k8s-version-986088 crio[655]: time="2024-03-28 01:22:43.738571252Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94fd55ab-ed1e-4613-9196-c55bf4c0007c name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:22:43 old-k8s-version-986088 crio[655]: time="2024-03-28 01:22:43.738605903Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=94fd55ab-ed1e-4613-9196-c55bf4c0007c name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:22:43 old-k8s-version-986088 crio[655]: time="2024-03-28 01:22:43.775783617Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=14674234-229b-4b72-86b3-e7312f2d487c name=/runtime.v1.RuntimeService/Version
	Mar 28 01:22:43 old-k8s-version-986088 crio[655]: time="2024-03-28 01:22:43.775925981Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=14674234-229b-4b72-86b3-e7312f2d487c name=/runtime.v1.RuntimeService/Version
	Mar 28 01:22:43 old-k8s-version-986088 crio[655]: time="2024-03-28 01:22:43.777890866Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0a188463-f42a-4420-8b05-dd24b32c8371 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:22:43 old-k8s-version-986088 crio[655]: time="2024-03-28 01:22:43.778544779Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711588963778512725,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0a188463-f42a-4420-8b05-dd24b32c8371 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:22:43 old-k8s-version-986088 crio[655]: time="2024-03-28 01:22:43.779192894Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59fd5864-27b6-4c29-8254-0da47f599290 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:22:43 old-k8s-version-986088 crio[655]: time="2024-03-28 01:22:43.779249435Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59fd5864-27b6-4c29-8254-0da47f599290 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:22:43 old-k8s-version-986088 crio[655]: time="2024-03-28 01:22:43.779293275Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=59fd5864-27b6-4c29-8254-0da47f599290 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:22:43 old-k8s-version-986088 crio[655]: time="2024-03-28 01:22:43.814818516Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b7f2403d-c491-4d0d-88fc-33a5e5d1c336 name=/runtime.v1.RuntimeService/Version
	Mar 28 01:22:43 old-k8s-version-986088 crio[655]: time="2024-03-28 01:22:43.814891640Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b7f2403d-c491-4d0d-88fc-33a5e5d1c336 name=/runtime.v1.RuntimeService/Version
	Mar 28 01:22:43 old-k8s-version-986088 crio[655]: time="2024-03-28 01:22:43.816257581Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f1f4549d-3bb8-43aa-86ac-2c3148e79a48 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:22:43 old-k8s-version-986088 crio[655]: time="2024-03-28 01:22:43.816731799Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711588963816695645,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f1f4549d-3bb8-43aa-86ac-2c3148e79a48 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:22:43 old-k8s-version-986088 crio[655]: time="2024-03-28 01:22:43.817554422Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=239257d1-37ea-419f-b0e8-1117e234b5b0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:22:43 old-k8s-version-986088 crio[655]: time="2024-03-28 01:22:43.817604205Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=239257d1-37ea-419f-b0e8-1117e234b5b0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:22:43 old-k8s-version-986088 crio[655]: time="2024-03-28 01:22:43.817638050Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=239257d1-37ea-419f-b0e8-1117e234b5b0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:22:43 old-k8s-version-986088 crio[655]: time="2024-03-28 01:22:43.851051505Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b47b2855-7732-41c1-b4b8-213c99ae884f name=/runtime.v1.RuntimeService/Version
	Mar 28 01:22:43 old-k8s-version-986088 crio[655]: time="2024-03-28 01:22:43.851128582Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b47b2855-7732-41c1-b4b8-213c99ae884f name=/runtime.v1.RuntimeService/Version
	Mar 28 01:22:43 old-k8s-version-986088 crio[655]: time="2024-03-28 01:22:43.852999349Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=97e4c68a-de37-413c-a0f2-d6db36ac03d8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:22:43 old-k8s-version-986088 crio[655]: time="2024-03-28 01:22:43.853465420Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711588963853350001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=97e4c68a-de37-413c-a0f2-d6db36ac03d8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 28 01:22:43 old-k8s-version-986088 crio[655]: time="2024-03-28 01:22:43.853911889Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b535007e-cb55-4477-9373-dd190e1472d8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:22:43 old-k8s-version-986088 crio[655]: time="2024-03-28 01:22:43.853957597Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b535007e-cb55-4477-9373-dd190e1472d8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 28 01:22:43 old-k8s-version-986088 crio[655]: time="2024-03-28 01:22:43.853992350Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b535007e-cb55-4477-9373-dd190e1472d8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar28 01:02] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054089] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043023] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.677467] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.716356] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.626498] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.938962] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +0.065252] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.078257] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.191570] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.159223] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.285028] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[Mar28 01:03] systemd-fstab-generator[845]: Ignoring "noauto" option for root device
	[  +0.069643] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.129611] systemd-fstab-generator[969]: Ignoring "noauto" option for root device
	[ +11.468422] kauditd_printk_skb: 46 callbacks suppressed
	[Mar28 01:07] systemd-fstab-generator[4979]: Ignoring "noauto" option for root device
	[Mar28 01:09] systemd-fstab-generator[5264]: Ignoring "noauto" option for root device
	[  +0.093089] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:22:44 up 20 min,  0 users,  load average: 0.03, 0.06, 0.05
	Linux old-k8s-version-986088 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 28 01:22:38 old-k8s-version-986088 kubelet[6776]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc000ba4360)
	Mar 28 01:22:38 old-k8s-version-986088 kubelet[6776]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Mar 28 01:22:38 old-k8s-version-986088 kubelet[6776]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Mar 28 01:22:38 old-k8s-version-986088 kubelet[6776]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Mar 28 01:22:38 old-k8s-version-986088 kubelet[6776]: goroutine 158 [select]:
	Mar 28 01:22:38 old-k8s-version-986088 kubelet[6776]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009adef0, 0x4f0ac20, 0xc000a03090, 0x1, 0xc00009e0c0)
	Mar 28 01:22:38 old-k8s-version-986088 kubelet[6776]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Mar 28 01:22:38 old-k8s-version-986088 kubelet[6776]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000255180, 0xc00009e0c0)
	Mar 28 01:22:38 old-k8s-version-986088 kubelet[6776]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Mar 28 01:22:38 old-k8s-version-986088 kubelet[6776]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Mar 28 01:22:38 old-k8s-version-986088 kubelet[6776]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Mar 28 01:22:38 old-k8s-version-986088 kubelet[6776]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000ba22d0, 0xc000a0f6a0)
	Mar 28 01:22:38 old-k8s-version-986088 kubelet[6776]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Mar 28 01:22:38 old-k8s-version-986088 kubelet[6776]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Mar 28 01:22:38 old-k8s-version-986088 kubelet[6776]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Mar 28 01:22:38 old-k8s-version-986088 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 28 01:22:38 old-k8s-version-986088 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 28 01:22:39 old-k8s-version-986088 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 141.
	Mar 28 01:22:39 old-k8s-version-986088 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 28 01:22:39 old-k8s-version-986088 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 28 01:22:39 old-k8s-version-986088 kubelet[6785]: I0328 01:22:39.227045    6785 server.go:416] Version: v1.20.0
	Mar 28 01:22:39 old-k8s-version-986088 kubelet[6785]: I0328 01:22:39.227331    6785 server.go:837] Client rotation is on, will bootstrap in background
	Mar 28 01:22:39 old-k8s-version-986088 kubelet[6785]: I0328 01:22:39.229363    6785 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 28 01:22:39 old-k8s-version-986088 kubelet[6785]: W0328 01:22:39.230164    6785 manager.go:159] Cannot detect current cgroup on cgroup v2
	Mar 28 01:22:39 old-k8s-version-986088 kubelet[6785]: I0328 01:22:39.230530    6785 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-986088 -n old-k8s-version-986088
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-986088 -n old-k8s-version-986088: exit status 2 (302.283634ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-986088" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (156.37s)
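The kubeadm checks and the minikube suggestion captured in the log above reduce to a handful of commands. A minimal sketch of applying them when reproducing this failure by hand, assuming the old-k8s-version-986088 VM from this job; the start flags below are illustrative (the failing test's exact start command is not reproduced in this excerpt), only the --extra-config override comes verbatim from the minikube suggestion:

	# kubelet health, as suggested by the kubeadm output
	systemctl status kubelet
	journalctl -xeu kubelet
	# control-plane containers CRI-O may have started, as suggested by kubeadm
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# retry the start with the cgroup-driver override suggested in the minikube output
	out/minikube-linux-amd64 start -p old-k8s-version-986088 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd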

                                                
                                    

Test pass (250/319)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 25.16
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.29.3/json-events 12.34
13 TestDownloadOnly/v1.29.3/preload-exists 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.08
18 TestDownloadOnly/v1.29.3/DeleteAll 0.14
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.30.0-beta.0/json-events 14.38
22 TestDownloadOnly/v1.30.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.30.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.30.0-beta.0/DeleteAll 0.14
28 TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.6
31 TestOffline 62.38
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 151.51
38 TestAddons/parallel/Registry 18.54
40 TestAddons/parallel/InspektorGadget 11.38
41 TestAddons/parallel/MetricsServer 7.08
42 TestAddons/parallel/HelmTiller 15.18
44 TestAddons/parallel/CSI 91.15
45 TestAddons/parallel/Headlamp 13.09
46 TestAddons/parallel/CloudSpanner 5.74
47 TestAddons/parallel/LocalPath 16.36
48 TestAddons/parallel/NvidiaDevicePlugin 6.53
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.12
54 TestCertOptions 60.76
55 TestCertExpiration 323.38
57 TestForceSystemdFlag 85.27
58 TestForceSystemdEnv 73.49
60 TestKVMDriverInstallOrUpdate 6.56
64 TestErrorSpam/setup 42.84
65 TestErrorSpam/start 0.4
66 TestErrorSpam/status 0.78
67 TestErrorSpam/pause 1.63
68 TestErrorSpam/unpause 1.69
69 TestErrorSpam/stop 5.87
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 69.15
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 379.05
76 TestFunctional/serial/KubeContext 0.05
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 4.45
81 TestFunctional/serial/CacheCmd/cache/add_local 2.59
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.03
86 TestFunctional/serial/CacheCmd/cache/delete 0.12
87 TestFunctional/serial/MinikubeKubectlCmd 0.13
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
89 TestFunctional/serial/ExtraConfig 39.14
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.69
92 TestFunctional/serial/LogsFileCmd 1.63
93 TestFunctional/serial/InvalidService 4.58
95 TestFunctional/parallel/ConfigCmd 0.45
96 TestFunctional/parallel/DashboardCmd 15.41
97 TestFunctional/parallel/DryRun 0.36
98 TestFunctional/parallel/InternationalLanguage 0.2
99 TestFunctional/parallel/StatusCmd 1.12
103 TestFunctional/parallel/ServiceCmdConnect 9.7
104 TestFunctional/parallel/AddonsCmd 0.16
105 TestFunctional/parallel/PersistentVolumeClaim 35.93
107 TestFunctional/parallel/SSHCmd 0.46
108 TestFunctional/parallel/CpCmd 1.55
109 TestFunctional/parallel/MySQL 28.42
110 TestFunctional/parallel/FileSync 0.28
111 TestFunctional/parallel/CertSync 1.66
115 TestFunctional/parallel/NodeLabels 0.06
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
119 TestFunctional/parallel/License 1.13
120 TestFunctional/parallel/ServiceCmd/DeployApp 11.23
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
133 TestFunctional/parallel/Version/short 0.06
134 TestFunctional/parallel/Version/components 0.54
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
136 TestFunctional/parallel/ProfileCmd/profile_list 0.3
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
138 TestFunctional/parallel/MountCmd/any-port 15.82
139 TestFunctional/parallel/ServiceCmd/List 0.52
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.66
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
142 TestFunctional/parallel/ServiceCmd/Format 0.41
143 TestFunctional/parallel/ServiceCmd/URL 0.68
144 TestFunctional/parallel/ImageCommands/ImageListShort 0.39
145 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
146 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
147 TestFunctional/parallel/ImageCommands/ImageListYaml 0.39
148 TestFunctional/parallel/ImageCommands/ImageBuild 3.97
149 TestFunctional/parallel/ImageCommands/Setup 2.58
150 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.74
151 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 5.04
152 TestFunctional/parallel/MountCmd/specific-port 1.83
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.57
154 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.71
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.6
156 TestFunctional/parallel/ImageCommands/ImageRemove 1.98
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.74
159 TestFunctional/delete_addon-resizer_images 0.07
160 TestFunctional/delete_my-image_image 0.01
161 TestFunctional/delete_minikube_cached_images 0.01
165 TestMultiControlPlane/serial/StartCluster 279.49
166 TestMultiControlPlane/serial/DeployApp 6.59
167 TestMultiControlPlane/serial/PingHostFromPods 1.48
168 TestMultiControlPlane/serial/AddWorkerNode 46.34
169 TestMultiControlPlane/serial/NodeLabels 0.07
170 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.6
171 TestMultiControlPlane/serial/CopyFile 14.07
173 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.51
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.45
177 TestMultiControlPlane/serial/DeleteSecondaryNode 17.37
178 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.42
180 TestMultiControlPlane/serial/RestartCluster 356.37
181 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.4
182 TestMultiControlPlane/serial/AddSecondaryNode 76.83
183 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.59
187 TestJSONOutput/start/Command 60.21
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.79
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.7
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 7.43
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.23
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 96.93
219 TestMountStart/serial/StartWithMountFirst 28.05
220 TestMountStart/serial/VerifyMountFirst 0.41
221 TestMountStart/serial/StartWithMountSecond 25.98
222 TestMountStart/serial/VerifyMountSecond 0.4
223 TestMountStart/serial/DeleteFirst 0.67
224 TestMountStart/serial/VerifyMountPostDelete 0.41
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 26.18
227 TestMountStart/serial/VerifyMountPostStop 0.41
230 TestMultiNode/serial/FreshStart2Nodes 100.23
231 TestMultiNode/serial/DeployApp2Nodes 5.68
232 TestMultiNode/serial/PingHostFrom2Pods 0.9
233 TestMultiNode/serial/AddNode 43.92
234 TestMultiNode/serial/MultiNodeLabels 0.07
235 TestMultiNode/serial/ProfileList 0.24
236 TestMultiNode/serial/CopyFile 7.67
237 TestMultiNode/serial/StopNode 2.54
238 TestMultiNode/serial/StartAfterStop 30.21
240 TestMultiNode/serial/DeleteNode 2.54
242 TestMultiNode/serial/RestartMultiNode 171.96
243 TestMultiNode/serial/ValidateNameConflict 44.46
250 TestScheduledStopUnix 117.03
254 TestRunningBinaryUpgrade 220.89
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 96.47
261 TestNoKubernetes/serial/StartWithStopK8s 66.1
262 TestNoKubernetes/serial/Start 51.52
270 TestNetworkPlugins/group/false 3.97
271 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
272 TestNoKubernetes/serial/ProfileList 5.16
276 TestNoKubernetes/serial/Stop 2.34
285 TestPause/serial/Start 64.62
286 TestNoKubernetes/serial/StartNoArgs 53.54
287 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
288 TestStoppedBinaryUpgrade/Setup 2.3
289 TestStoppedBinaryUpgrade/Upgrade 126.77
291 TestNetworkPlugins/group/auto/Start 61.67
292 TestNetworkPlugins/group/kindnet/Start 88.24
293 TestStoppedBinaryUpgrade/MinikubeLogs 0.97
294 TestNetworkPlugins/group/calico/Start 127.97
295 TestNetworkPlugins/group/auto/KubeletFlags 0.26
296 TestNetworkPlugins/group/auto/NetCatPod 13.28
297 TestNetworkPlugins/group/auto/DNS 0.18
298 TestNetworkPlugins/group/auto/Localhost 0.13
299 TestNetworkPlugins/group/auto/HairPin 0.15
300 TestNetworkPlugins/group/custom-flannel/Start 83.24
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
303 TestNetworkPlugins/group/kindnet/NetCatPod 13.27
304 TestNetworkPlugins/group/kindnet/DNS 0.19
305 TestNetworkPlugins/group/kindnet/Localhost 0.15
306 TestNetworkPlugins/group/kindnet/HairPin 0.16
307 TestNetworkPlugins/group/enable-default-cni/Start 63.35
308 TestNetworkPlugins/group/calico/ControllerPod 6.01
309 TestNetworkPlugins/group/calico/KubeletFlags 0.23
310 TestNetworkPlugins/group/calico/NetCatPod 10.32
311 TestNetworkPlugins/group/calico/DNS 0.22
312 TestNetworkPlugins/group/calico/Localhost 0.21
313 TestNetworkPlugins/group/calico/HairPin 0.17
314 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
315 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.39
316 TestNetworkPlugins/group/flannel/Start 89.51
317 TestNetworkPlugins/group/custom-flannel/DNS 0.22
318 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
319 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
320 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
321 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.33
322 TestNetworkPlugins/group/bridge/Start 77.84
323 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
324 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
325 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
329 TestStartStop/group/no-preload/serial/FirstStart 130.14
330 TestNetworkPlugins/group/flannel/ControllerPod 6.01
331 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
332 TestNetworkPlugins/group/flannel/NetCatPod 10.31
333 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
334 TestNetworkPlugins/group/bridge/NetCatPod 10.26
335 TestNetworkPlugins/group/flannel/DNS 0.15
336 TestNetworkPlugins/group/flannel/Localhost 0.14
337 TestNetworkPlugins/group/flannel/HairPin 0.16
338 TestNetworkPlugins/group/bridge/DNS 0.22
339 TestNetworkPlugins/group/bridge/Localhost 0.19
340 TestNetworkPlugins/group/bridge/HairPin 0.18
342 TestStartStop/group/embed-certs/serial/FirstStart 65.65
344 TestStartStop/group/newest-cni/serial/FirstStart 82.4
345 TestStartStop/group/no-preload/serial/DeployApp 9.32
346 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
348 TestStartStop/group/embed-certs/serial/DeployApp 10.3
349 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.04
351 TestStartStop/group/newest-cni/serial/DeployApp 0
352 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.14
353 TestStartStop/group/newest-cni/serial/Stop 10.38
354 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
355 TestStartStop/group/newest-cni/serial/SecondStart 37.97
356 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
357 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
358 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
359 TestStartStop/group/newest-cni/serial/Pause 2.56
361 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 60.9
362 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.29
366 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.06
369 TestStartStop/group/no-preload/serial/SecondStart 689.85
370 TestStartStop/group/embed-certs/serial/SecondStart 609.89
371 TestStartStop/group/old-k8s-version/serial/Stop 1.43
372 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
375 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 516.11
x
+
TestDownloadOnly/v1.20.0/json-events (25.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-441167 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-441167 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (25.156743543s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (25.16s)
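Because the test passes -o=json, minikube reports its progress as a stream of JSON events rather than plain text. A hypothetical way to inspect that stream when rerunning the same command locally, assuming jq is available on the host (it is not part of this job):

	# pretty-print each JSON event emitted by the download-only start
	out/minikube-linux-amd64 start -o=json --download-only -p download-only-441167 --force --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2 | jq .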

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-441167
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-441167: exit status 85 (81.476603ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-441167 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:32 UTC |          |
	|         | -p download-only-441167        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|         | --driver=kvm2                  |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 23:32:48
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 23:32:48.856214 1076534 out.go:291] Setting OutFile to fd 1 ...
	I0327 23:32:48.856459 1076534 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:32:48.856468 1076534 out.go:304] Setting ErrFile to fd 2...
	I0327 23:32:48.856473 1076534 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:32:48.856648 1076534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	W0327 23:32:48.856781 1076534 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18485-1069254/.minikube/config/config.json: open /home/jenkins/minikube-integration/18485-1069254/.minikube/config/config.json: no such file or directory
	I0327 23:32:48.857341 1076534 out.go:298] Setting JSON to true
	I0327 23:32:48.858379 1076534 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":26066,"bootTime":1711556303,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0327 23:32:48.858457 1076534 start.go:139] virtualization: kvm guest
	I0327 23:32:48.860924 1076534 out.go:97] [download-only-441167] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0327 23:32:48.862439 1076534 out.go:169] MINIKUBE_LOCATION=18485
	W0327 23:32:48.861042 1076534 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball: no such file or directory
	I0327 23:32:48.861101 1076534 notify.go:220] Checking for updates...
	I0327 23:32:48.865223 1076534 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 23:32:48.866637 1076534 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0327 23:32:48.867895 1076534 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	I0327 23:32:48.869017 1076534 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0327 23:32:48.871208 1076534 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 23:32:48.871433 1076534 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 23:32:48.907241 1076534 out.go:97] Using the kvm2 driver based on user configuration
	I0327 23:32:48.907271 1076534 start.go:297] selected driver: kvm2
	I0327 23:32:48.907280 1076534 start.go:901] validating driver "kvm2" against <nil>
	I0327 23:32:48.907669 1076534 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 23:32:48.907769 1076534 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18485-1069254/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0327 23:32:48.923839 1076534 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0327 23:32:48.923939 1076534 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 23:32:48.924517 1076534 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0327 23:32:48.924683 1076534 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 23:32:48.924757 1076534 cni.go:84] Creating CNI manager for ""
	I0327 23:32:48.924775 1076534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0327 23:32:48.924788 1076534 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 23:32:48.924867 1076534 start.go:340] cluster config:
	{Name:download-only-441167 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-441167 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 23:32:48.925073 1076534 iso.go:125] acquiring lock: {Name:mk3da1fa7d63a581c817f327d86a827697458cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 23:32:48.926977 1076534 out.go:97] Downloading VM boot image ...
	I0327 23:32:48.927019 1076534 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso
	I0327 23:32:57.870976 1076534 out.go:97] Starting "download-only-441167" primary control-plane node in "download-only-441167" cluster
	I0327 23:32:57.871005 1076534 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0327 23:32:57.968722 1076534 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0327 23:32:57.968761 1076534 cache.go:56] Caching tarball of preloaded images
	I0327 23:32:57.968909 1076534 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0327 23:32:57.971110 1076534 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0327 23:32:57.971131 1076534 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0327 23:32:58.081304 1076534 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0327 23:33:10.364926 1076534 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0327 23:33:10.365045 1076534 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0327 23:33:11.321158 1076534 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0327 23:33:11.321580 1076534 profile.go:142] Saving config to /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/download-only-441167/config.json ...
	I0327 23:33:11.321626 1076534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/download-only-441167/config.json: {Name:mk307032c0e06d1c878cc494993bc939f9495c57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:33:11.321887 1076534 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0327 23:33:11.322105 1076534 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-441167 host does not exist
	  To start a cluster, run: "minikube start -p download-only-441167"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-441167
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/json-events (12.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-412310 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-412310 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.337973451s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (12.34s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-412310
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-412310: exit status 85 (76.329377ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-441167 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:32 UTC |                     |
	|         | -p download-only-441167        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	|         | --driver=kvm2                  |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:33 UTC | 27 Mar 24 23:33 UTC |
	| delete  | -p download-only-441167        | download-only-441167 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:33 UTC | 27 Mar 24 23:33 UTC |
	| start   | -o=json --download-only        | download-only-412310 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:33 UTC |                     |
	|         | -p download-only-412310        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	|         | --driver=kvm2                  |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 23:33:14
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 23:33:14.374010 1076739 out.go:291] Setting OutFile to fd 1 ...
	I0327 23:33:14.374163 1076739 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:33:14.374174 1076739 out.go:304] Setting ErrFile to fd 2...
	I0327 23:33:14.374181 1076739 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:33:14.374433 1076739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0327 23:33:14.375039 1076739 out.go:298] Setting JSON to true
	I0327 23:33:14.376068 1076739 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":26092,"bootTime":1711556303,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0327 23:33:14.376140 1076739 start.go:139] virtualization: kvm guest
	I0327 23:33:14.378425 1076739 out.go:97] [download-only-412310] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0327 23:33:14.379986 1076739 out.go:169] MINIKUBE_LOCATION=18485
	I0327 23:33:14.378665 1076739 notify.go:220] Checking for updates...
	I0327 23:33:14.382424 1076739 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 23:33:14.383690 1076739 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0327 23:33:14.384996 1076739 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	I0327 23:33:14.386266 1076739 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0327 23:33:14.388567 1076739 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 23:33:14.388830 1076739 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 23:33:14.423372 1076739 out.go:97] Using the kvm2 driver based on user configuration
	I0327 23:33:14.423418 1076739 start.go:297] selected driver: kvm2
	I0327 23:33:14.423425 1076739 start.go:901] validating driver "kvm2" against <nil>
	I0327 23:33:14.423744 1076739 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 23:33:14.423941 1076739 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18485-1069254/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0327 23:33:14.439599 1076739 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0327 23:33:14.439682 1076739 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 23:33:14.440231 1076739 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0327 23:33:14.440372 1076739 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 23:33:14.440423 1076739 cni.go:84] Creating CNI manager for ""
	I0327 23:33:14.440432 1076739 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0327 23:33:14.440439 1076739 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 23:33:14.440491 1076739 start.go:340] cluster config:
	{Name:download-only-412310 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-412310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 23:33:14.440584 1076739 iso.go:125] acquiring lock: {Name:mk3da1fa7d63a581c817f327d86a827697458cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 23:33:14.442438 1076739 out.go:97] Starting "download-only-412310" primary control-plane node in "download-only-412310" cluster
	I0327 23:33:14.442467 1076739 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0327 23:33:14.814570 1076739 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0327 23:33:14.814625 1076739 cache.go:56] Caching tarball of preloaded images
	I0327 23:33:14.814780 1076739 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0327 23:33:14.816771 1076739 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0327 23:33:14.816801 1076739 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 ...
	I0327 23:33:14.916184 1076739 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:6f4e94cb6232b24c3932ab20b1ee6dad -> /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-412310 host does not exist
	  To start a cluster, run: "minikube start -p download-only-412310"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.08s)
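
The v1.29.3 download above fetches the preload tarball with an md5 digest carried in the URL query (?checksum=md5:...) and verifies it before caching. A minimal Go sketch of that kind of verification, reusing the digest from the log line above; this is an illustration, not minikube's actual preload.go code, and the cache path is the default one rather than the job-specific MINIKUBE_HOME the CI run uses:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 hashes a local file and compares it to the hex digest that the
	// preload URL advertises in its ?checksum=md5:... query parameter.
	func verifyMD5(path, want string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		// Digest taken from the download line above; adjust path and digest for other releases.
		tarball := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4")
		if err := verifyMD5(tarball, "6f4e94cb6232b24c3932ab20b1ee6dad"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("preload tarball checksum OK")
	}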

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-412310
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-beta.0/json-events (14.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-811387 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-811387 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (14.380448032s)
--- PASS: TestDownloadOnly/v1.30.0-beta.0/json-events (14.38s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-beta.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-811387
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-811387: exit status 85 (79.832414ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-441167 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:32 UTC |                     |
	|         | -p download-only-441167             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |                |                     |                     |
	|         | --container-runtime=crio            |                      |         |                |                     |                     |
	|         | --driver=kvm2                       |                      |         |                |                     |                     |
	|         | --container-runtime=crio            |                      |         |                |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:33 UTC | 27 Mar 24 23:33 UTC |
	| delete  | -p download-only-441167             | download-only-441167 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:33 UTC | 27 Mar 24 23:33 UTC |
	| start   | -o=json --download-only             | download-only-412310 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:33 UTC |                     |
	|         | -p download-only-412310             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3        |                      |         |                |                     |                     |
	|         | --container-runtime=crio            |                      |         |                |                     |                     |
	|         | --driver=kvm2                       |                      |         |                |                     |                     |
	|         | --container-runtime=crio            |                      |         |                |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:33 UTC | 27 Mar 24 23:33 UTC |
	| delete  | -p download-only-412310             | download-only-412310 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:33 UTC | 27 Mar 24 23:33 UTC |
	| start   | -o=json --download-only             | download-only-811387 | jenkins | v1.33.0-beta.0 | 27 Mar 24 23:33 UTC |                     |
	|         | -p download-only-811387             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0 |                      |         |                |                     |                     |
	|         | --container-runtime=crio            |                      |         |                |                     |                     |
	|         | --driver=kvm2                       |                      |         |                |                     |                     |
	|         | --container-runtime=crio            |                      |         |                |                     |                     |
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 23:33:27
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 23:33:27.074101 1076918 out.go:291] Setting OutFile to fd 1 ...
	I0327 23:33:27.074259 1076918 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:33:27.074271 1076918 out.go:304] Setting ErrFile to fd 2...
	I0327 23:33:27.074279 1076918 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:33:27.074495 1076918 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0327 23:33:27.075099 1076918 out.go:298] Setting JSON to true
	I0327 23:33:27.076098 1076918 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":26104,"bootTime":1711556303,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0327 23:33:27.076170 1076918 start.go:139] virtualization: kvm guest
	I0327 23:33:27.079612 1076918 out.go:97] [download-only-811387] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0327 23:33:27.081099 1076918 out.go:169] MINIKUBE_LOCATION=18485
	I0327 23:33:27.079810 1076918 notify.go:220] Checking for updates...
	I0327 23:33:27.082893 1076918 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 23:33:27.084505 1076918 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0327 23:33:27.085958 1076918 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	I0327 23:33:27.087323 1076918 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0327 23:33:27.089817 1076918 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 23:33:27.090117 1076918 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 23:33:27.123392 1076918 out.go:97] Using the kvm2 driver based on user configuration
	I0327 23:33:27.123433 1076918 start.go:297] selected driver: kvm2
	I0327 23:33:27.123442 1076918 start.go:901] validating driver "kvm2" against <nil>
	I0327 23:33:27.124026 1076918 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 23:33:27.124140 1076918 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18485-1069254/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0327 23:33:27.140760 1076918 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0327 23:33:27.140846 1076918 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 23:33:27.141331 1076918 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0327 23:33:27.141481 1076918 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 23:33:27.141553 1076918 cni.go:84] Creating CNI manager for ""
	I0327 23:33:27.141566 1076918 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0327 23:33:27.141582 1076918 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 23:33:27.141640 1076918 start.go:340] cluster config:
	{Name:download-only-811387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:download-only-811387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 23:33:27.141736 1076918 iso.go:125] acquiring lock: {Name:mk3da1fa7d63a581c817f327d86a827697458cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 23:33:27.143540 1076918 out.go:97] Starting "download-only-811387" primary control-plane node in "download-only-811387" cluster
	I0327 23:33:27.143559 1076918 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0327 23:33:27.526549 1076918 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0327 23:33:27.526601 1076918 cache.go:56] Caching tarball of preloaded images
	I0327 23:33:27.526780 1076918 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0327 23:33:27.528812 1076918 out.go:97] Downloading Kubernetes v1.30.0-beta.0 preload ...
	I0327 23:33:27.528834 1076918 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0327 23:33:27.626298 1076918 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:6f8942c73bc4cf06adbbee21f15bde53 -> /home/jenkins/minikube-integration/18485-1069254/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-811387 host does not exist
	  To start a cluster, run: "minikube start -p download-only-811387"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-811387
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-751324 --alsologtostderr --binary-mirror http://127.0.0.1:45193 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-751324" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-751324
--- PASS: TestBinaryMirror (0.60s)
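
TestBinaryMirror points a download-only start at a local HTTP endpoint via --binary-mirror http://127.0.0.1:45193, so the Kubernetes binaries are fetched from that server instead of the public release bucket. A minimal sketch of a server that could back such a flag, assuming a local ./mirror directory laid out with the same URL paths minikube requests (the directory name and fixed port are illustrative):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve ./mirror as a static file tree; the binaries minikube asks for
		// just need to resolve under this root with their expected paths.
		fs := http.FileServer(http.Dir("./mirror"))
		log.Println("binary mirror listening on 127.0.0.1:45193")
		log.Fatal(http.ListenAndServe("127.0.0.1:45193", fs))
	}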

                                                
                                    
x
+
TestOffline (62.38s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-555682 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-555682 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m1.516473732s)
helpers_test.go:175: Cleaning up "offline-crio-555682" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-555682
--- PASS: TestOffline (62.38s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-910864
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-910864: exit status 85 (63.665568ms)

                                                
                                                
-- stdout --
	* Profile "addons-910864" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-910864"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
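
This check (and the DisablingAddonOnNonExistingCluster one that follows) expects the addons command to fail with exit status 85 when the profile does not exist, as the non-zero exit above shows. A small Go sketch of asserting an expected exit code with os/exec; the helper shape and the hard-coded 85 come from this log, not from the suite's own helpers:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// runExpectExit runs a command and verifies it terminates with the given exit code.
	func runExpectExit(want int, name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			if code := exitErr.ExitCode(); code != want {
				return fmt.Errorf("exit code %d, want %d\noutput:\n%s", code, want, out)
			}
			return nil // failed in exactly the expected way
		}
		if err != nil {
			return err // e.g. binary not found
		}
		return fmt.Errorf("command succeeded, expected exit code %d", want)
	}

	func main() {
		err := runExpectExit(85, "out/minikube-linux-amd64", "addons", "enable", "dashboard", "-p", "addons-910864")
		fmt.Println("expected-failure check:", err)
	}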

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-910864
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-910864: exit status 85 (68.631666ms)

                                                
                                                
-- stdout --
	* Profile "addons-910864" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-910864"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (151.51s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-910864 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-910864 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m31.513204453s)
--- PASS: TestAddons/Setup (151.51s)

                                                
                                    
x
+
TestAddons/parallel/Registry (18.54s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 23.704203ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-ft9qx" [1d2583d0-7b3d-414c-be8d-513217d275d5] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.008261754s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-fn7cc" [c0e21e74-fdab-4924-8b67-c75809a350f1] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005348675s
addons_test.go:340: (dbg) Run:  kubectl --context addons-910864 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-910864 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-910864 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.255918584s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-910864 ip
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-910864 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-linux-amd64 -p addons-910864 addons disable registry --alsologtostderr -v=1: (1.066391952s)
--- PASS: TestAddons/parallel/Registry (18.54s)
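
The registry check boils down to launching a throwaway busybox pod that probes the addon's in-cluster Service DNS name with wget --spider. A minimal Go sketch that shells out to kubectl the same way the log shows (context, image and URL are taken from the lines above; -t is dropped because there is no TTY in this wrapper):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Mirror the `kubectl run --rm registry-test ... wget --spider` call above.
		cmd := exec.Command("kubectl", "--context", "addons-910864",
			"run", "--rm", "registry-test", "--restart=Never",
			"--image=gcr.io/k8s-minikube/busybox", "-i", "--",
			"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "registry probe failed:", err)
			os.Exit(1)
		}
		fmt.Println("registry service reachable from inside the cluster")
	}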

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.38s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-gsn5n" [02eea2c8-d418-40a8-b6c8-92b370d7f2dc] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005309785s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-910864
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-910864: (6.369445382s)
--- PASS: TestAddons/parallel/InspektorGadget (11.38s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (7.08s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 23.797117ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-c4zrg" [be84ea98-7e43-48f2-8b80-8187e0478a9c] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005829561s
addons_test.go:415: (dbg) Run:  kubectl --context addons-910864 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-910864 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (7.08s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (15.18s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.230968ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-kt5p7" [617f7919-14ff-44b4-8722-d900d19371e3] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.00468726s
addons_test.go:473: (dbg) Run:  kubectl --context addons-910864 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-910864 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (9.157273956s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-910864 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-linux-amd64 -p addons-910864 addons disable helm-tiller --alsologtostderr -v=1: (1.009175756s)
--- PASS: TestAddons/parallel/HelmTiller (15.18s)

                                                
                                    
x
+
TestAddons/parallel/CSI (91.15s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 24.681914ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-910864 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-910864 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [69533a11-9a2f-465b-863e-61f4737408ff] Pending
helpers_test.go:344: "task-pv-pod" [69533a11-9a2f-465b-863e-61f4737408ff] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [69533a11-9a2f-465b-863e-61f4737408ff] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.005017578s
addons_test.go:584: (dbg) Run:  kubectl --context addons-910864 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-910864 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-910864 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-910864 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-910864 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-910864 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-910864 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c050042f-2d70-463d-aded-99bcecc33a75] Pending
helpers_test.go:344: "task-pv-pod-restore" [c050042f-2d70-463d-aded-99bcecc33a75] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c050042f-2d70-463d-aded-99bcecc33a75] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004130209s
addons_test.go:626: (dbg) Run:  kubectl --context addons-910864 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-910864 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-910864 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-910864 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-910864 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.781308286s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-910864 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (91.15s)
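
The long run of helpers_test.go:394 lines above is a poll loop: the helper keeps re-reading {.status.phase} on the PVC until it reports Bound or the 6m0s wait expires. A minimal sketch of such a loop, assuming kubectl on PATH and the names from this log; the 2-second interval is illustrative, not the helper's actual cadence:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitPVCBound polls a PersistentVolumeClaim's status.phase via kubectl
	// until it is Bound or the timeout expires.
	func waitPVCBound(kubeContext, namespace, pvc string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			out, err := exec.Command("kubectl", "--context", kubeContext,
				"get", "pvc", pvc, "-n", namespace,
				"-o", "jsonpath={.status.phase}").Output()
			phase := strings.TrimSpace(string(out))
			if err == nil && phase == "Bound" {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("pvc %s/%s not Bound after %s (last phase %q, err %v)",
					namespace, pvc, timeout, phase, err)
			}
			time.Sleep(2 * time.Second)
		}
	}

	func main() {
		if err := waitPVCBound("addons-910864", "default", "hpvc", 6*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("hpvc is Bound")
	}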

                                                
                                    
x
+
TestAddons/parallel/Headlamp (13.09s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-910864 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-910864 --alsologtostderr -v=1: (1.083686821s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5b77dbd7c4-4jmtg" [a192abb6-aa9b-48c5-b8ed-d698518f6d50] Pending
helpers_test.go:344: "headlamp-5b77dbd7c4-4jmtg" [a192abb6-aa9b-48c5-b8ed-d698518f6d50] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5b77dbd7c4-4jmtg" [a192abb6-aa9b-48c5-b8ed-d698518f6d50] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.005508545s
--- PASS: TestAddons/parallel/Headlamp (13.09s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.74s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-l69nz" [d1ded8fa-f1a1-42bf-ac80-451d1ac1e9b8] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004539361s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-910864
--- PASS: TestAddons/parallel/CloudSpanner (5.74s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (16.36s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-910864 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-910864 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910864 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ce3ef7be-941f-4b5b-a9b5-aeec7b27065b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ce3ef7be-941f-4b5b-a9b5-aeec7b27065b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ce3ef7be-941f-4b5b-a9b5-aeec7b27065b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.005322955s
addons_test.go:891: (dbg) Run:  kubectl --context addons-910864 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-910864 ssh "cat /opt/local-path-provisioner/pvc-5d867087-2511-4b36-8e94-b5e7118d57da_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-910864 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-910864 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-910864 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (16.36s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.53s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-trctv" [189a46ce-1f31-42d5-bcf7-caefe2c656f6] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005040513s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-910864
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.53s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-fmvp5" [1c917feb-9f7b-4dbc-9b7c-cede23bc4786] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0041044s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-910864 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-910864 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
x
+
TestCertOptions (60.76s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-863150 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-863150 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (59.4091357s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-863150 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-863150 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-863150 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-863150" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-863150
--- PASS: TestCertOptions (60.76s)
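
TestCertOptions asserts that the extra --apiserver-ips and --apiserver-names values end up as subject alternative names in the apiserver certificate, which it inspects with openssl over ssh. The same SAN check can be done off-node with crypto/x509. A minimal sketch, assuming the certificate has first been copied to ./apiserver.crt (the local path is illustrative):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// e.g. fetched beforehand with:
		//   minikube -p cert-options-863150 ssh "sudo cat /var/lib/minikube/certs/apiserver.crt" > apiserver.crt
		data, err := os.ReadFile("apiserver.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("DNS names:", cert.DNSNames)    // expect localhost and www.google.com among them
		fmt.Println("IPs:      ", cert.IPAddresses) // expect 127.0.0.1 and 192.168.15.15 among them
		fmt.Println("NotAfter: ", cert.NotAfter)    // the field TestCertExpiration below exercises
	}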

                                                
                                    
x
+
TestCertExpiration (323.38s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-927384 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-927384 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m39.591984305s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-927384 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-927384 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (42.925369713s)
helpers_test.go:175: Cleaning up "cert-expiration-927384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-927384
--- PASS: TestCertExpiration (323.38s)

                                                
                                    
x
+
TestForceSystemdFlag (85.27s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-083201 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-083201 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m24.050283859s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-083201 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-083201" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-083201
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-083201: (1.005554515s)
--- PASS: TestForceSystemdFlag (85.27s)
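
TestForceSystemdFlag starts a cluster with --force-systemd and then reads /etc/crio/crio.conf.d/02-crio.conf to confirm CRI-O was switched to the systemd cgroup manager. A minimal sketch of that check, assuming the drop-in carries the usual cgroup_manager = "systemd" TOML key; the exact string the test matches on is not shown in this log:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Drop-in fetched from the node beforehand, e.g.:
		//   minikube -p force-systemd-flag-083201 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" > 02-crio.conf
		data, err := os.ReadFile("02-crio.conf")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, line := range strings.Split(string(data), "\n") {
			if strings.Contains(line, "cgroup_manager") {
				fmt.Println("found:", strings.TrimSpace(line))
				if strings.Contains(line, `"systemd"`) {
					fmt.Println("CRI-O is using the systemd cgroup manager")
					return
				}
			}
		}
		fmt.Fprintln(os.Stderr, "systemd cgroup manager not configured")
		os.Exit(1)
	}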

                                                
                                    
x
+
TestForceSystemdEnv (73.49s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-859821 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-859821 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m12.665306185s)
helpers_test.go:175: Cleaning up "force-systemd-env-859821" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-859821
--- PASS: TestForceSystemdEnv (73.49s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (6.56s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
E0328 00:46:04.256335 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
--- PASS: TestKVMDriverInstallOrUpdate (6.56s)

TestErrorSpam/setup (42.84s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-305976 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-305976 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-305976 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-305976 --driver=kvm2  --container-runtime=crio: (42.84380739s)
--- PASS: TestErrorSpam/setup (42.84s)

TestErrorSpam/start (0.4s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305976 --log_dir /tmp/nospam-305976 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305976 --log_dir /tmp/nospam-305976 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305976 --log_dir /tmp/nospam-305976 start --dry-run
--- PASS: TestErrorSpam/start (0.40s)

TestErrorSpam/status (0.78s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305976 --log_dir /tmp/nospam-305976 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305976 --log_dir /tmp/nospam-305976 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305976 --log_dir /tmp/nospam-305976 status
--- PASS: TestErrorSpam/status (0.78s)

TestErrorSpam/pause (1.63s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305976 --log_dir /tmp/nospam-305976 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305976 --log_dir /tmp/nospam-305976 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305976 --log_dir /tmp/nospam-305976 pause
--- PASS: TestErrorSpam/pause (1.63s)

TestErrorSpam/unpause (1.69s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305976 --log_dir /tmp/nospam-305976 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305976 --log_dir /tmp/nospam-305976 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305976 --log_dir /tmp/nospam-305976 unpause
--- PASS: TestErrorSpam/unpause (1.69s)

TestErrorSpam/stop (5.87s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305976 --log_dir /tmp/nospam-305976 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-305976 --log_dir /tmp/nospam-305976 stop: (2.296360015s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305976 --log_dir /tmp/nospam-305976 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-305976 --log_dir /tmp/nospam-305976 stop: (2.054738586s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305976 --log_dir /tmp/nospam-305976 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-305976 --log_dir /tmp/nospam-305976 stop: (1.51559059s)
--- PASS: TestErrorSpam/stop (5.87s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18485-1069254/.minikube/files/etc/test/nested/copy/1076522/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (69.15s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-800754 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-800754 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m9.154236374s)
--- PASS: TestFunctional/serial/StartWithProxy (69.15s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (379.05s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-800754 --alsologtostderr -v=8
E0327 23:46:14.356418 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
E0327 23:46:14.362463 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
E0327 23:46:14.372758 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
E0327 23:46:14.393022 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
E0327 23:46:14.433310 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
E0327 23:46:14.513880 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
E0327 23:46:14.674382 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
E0327 23:46:14.994557 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
E0327 23:46:15.635531 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
E0327 23:46:16.915856 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
E0327 23:46:19.476674 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
E0327 23:46:24.597404 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
E0327 23:46:34.838534 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
E0327 23:46:55.319430 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
E0327 23:47:36.279677 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
E0327 23:48:58.201671 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-800754 --alsologtostderr -v=8: (6m19.043895193s)
functional_test.go:659: soft start took 6m19.044719899s for "functional-800754" cluster.
--- PASS: TestFunctional/serial/SoftStart (379.05s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-800754 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-800754 cache add registry.k8s.io/pause:3.1: (1.512286517s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-800754 cache add registry.k8s.io/pause:3.3: (1.476855522s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-800754 cache add registry.k8s.io/pause:latest: (1.464681427s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.45s)

TestFunctional/serial/CacheCmd/cache/add_local (2.59s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-800754 /tmp/TestFunctionalserialCacheCmdcacheadd_local131715833/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 cache add minikube-local-cache-test:functional-800754
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-800754 cache add minikube-local-cache-test:functional-800754: (2.1908289s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 cache delete minikube-local-cache-test:functional-800754
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-800754
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.59s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-800754 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (226.515715ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-800754 cache reload: (1.287594556s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.03s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 kubectl -- --context functional-800754 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-800754 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (39.14s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-800754 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-800754 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.136841232s)
functional_test.go:757: restart took 39.136955903s for "functional-800754" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.14s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-800754 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.69s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 logs
E0327 23:51:14.356081 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-800754 logs: (1.691719183s)
--- PASS: TestFunctional/serial/LogsCmd (1.69s)

TestFunctional/serial/LogsFileCmd (1.63s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 logs --file /tmp/TestFunctionalserialLogsFileCmd2992198481/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-800754 logs --file /tmp/TestFunctionalserialLogsFileCmd2992198481/001/logs.txt: (1.624960297s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.63s)

TestFunctional/serial/InvalidService (4.58s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-800754 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-800754
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-800754: exit status 115 (314.230874ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.216:30169 |
	|-----------|-------------|-------------|-----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-800754 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-800754 delete -f testdata/invalidsvc.yaml: (1.050906334s)
--- PASS: TestFunctional/serial/InvalidService (4.58s)

TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-800754 config get cpus: exit status 14 (86.611742ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-800754 config get cpus: exit status 14 (62.138661ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

TestFunctional/parallel/DashboardCmd (15.41s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-800754 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-800754 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1085257: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.41s)

TestFunctional/parallel/DryRun (0.36s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-800754 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-800754 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (177.899783ms)

-- stdout --
	* [functional-800754] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0327 23:51:32.046340 1084911 out.go:291] Setting OutFile to fd 1 ...
	I0327 23:51:32.046502 1084911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:51:32.046520 1084911 out.go:304] Setting ErrFile to fd 2...
	I0327 23:51:32.046527 1084911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:51:32.046835 1084911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0327 23:51:32.047629 1084911 out.go:298] Setting JSON to false
	I0327 23:51:32.049044 1084911 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":27189,"bootTime":1711556303,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0327 23:51:32.049183 1084911 start.go:139] virtualization: kvm guest
	I0327 23:51:32.051938 1084911 out.go:177] * [functional-800754] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0327 23:51:32.053552 1084911 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 23:51:32.053576 1084911 notify.go:220] Checking for updates...
	I0327 23:51:32.056287 1084911 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 23:51:32.058177 1084911 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0327 23:51:32.059676 1084911 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	I0327 23:51:32.061110 1084911 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0327 23:51:32.062526 1084911 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 23:51:32.064478 1084911 config.go:182] Loaded profile config "functional-800754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 23:51:32.065123 1084911 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:51:32.065197 1084911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:51:32.082083 1084911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42313
	I0327 23:51:32.082609 1084911 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:51:32.083323 1084911 main.go:141] libmachine: Using API Version  1
	I0327 23:51:32.083353 1084911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:51:32.083809 1084911 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:51:32.084045 1084911 main.go:141] libmachine: (functional-800754) Calling .DriverName
	I0327 23:51:32.084343 1084911 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 23:51:32.084634 1084911 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:51:32.084671 1084911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:51:32.101965 1084911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38279
	I0327 23:51:32.102588 1084911 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:51:32.103078 1084911 main.go:141] libmachine: Using API Version  1
	I0327 23:51:32.103130 1084911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:51:32.103610 1084911 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:51:32.103804 1084911 main.go:141] libmachine: (functional-800754) Calling .DriverName
	I0327 23:51:32.138406 1084911 out.go:177] * Using the kvm2 driver based on existing profile
	I0327 23:51:32.139471 1084911 start.go:297] selected driver: kvm2
	I0327 23:51:32.139507 1084911 start.go:901] validating driver "kvm2" against &{Name:functional-800754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.3 ClusterName:functional-800754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.216 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 23:51:32.139622 1084911 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 23:51:32.141665 1084911 out.go:177] 
	W0327 23:51:32.142837 1084911 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0327 23:51:32.144051 1084911 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-800754 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.36s)

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-800754 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-800754 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (197.645646ms)

-- stdout --
	* [functional-800754] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I0327 23:51:32.407797 1084987 out.go:291] Setting OutFile to fd 1 ...
	I0327 23:51:32.407943 1084987 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:51:32.407956 1084987 out.go:304] Setting ErrFile to fd 2...
	I0327 23:51:32.407984 1084987 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:51:32.408300 1084987 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0327 23:51:32.408857 1084987 out.go:298] Setting JSON to false
	I0327 23:51:32.409864 1084987 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":27190,"bootTime":1711556303,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0327 23:51:32.409940 1084987 start.go:139] virtualization: kvm guest
	I0327 23:51:32.412364 1084987 out.go:177] * [functional-800754] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	I0327 23:51:32.416469 1084987 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 23:51:32.418326 1084987 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 23:51:32.416391 1084987 notify.go:220] Checking for updates...
	I0327 23:51:32.421998 1084987 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0327 23:51:32.423796 1084987 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	I0327 23:51:32.425401 1084987 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0327 23:51:32.430662 1084987 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 23:51:32.432582 1084987 config.go:182] Loaded profile config "functional-800754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 23:51:32.433104 1084987 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:51:32.433199 1084987 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:51:32.456854 1084987 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34901
	I0327 23:51:32.457410 1084987 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:51:32.458155 1084987 main.go:141] libmachine: Using API Version  1
	I0327 23:51:32.458177 1084987 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:51:32.458585 1084987 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:51:32.458782 1084987 main.go:141] libmachine: (functional-800754) Calling .DriverName
	I0327 23:51:32.459113 1084987 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 23:51:32.459543 1084987 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0327 23:51:32.459606 1084987 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0327 23:51:32.479328 1084987 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43631
	I0327 23:51:32.482643 1084987 main.go:141] libmachine: () Calling .GetVersion
	I0327 23:51:32.483233 1084987 main.go:141] libmachine: Using API Version  1
	I0327 23:51:32.483266 1084987 main.go:141] libmachine: () Calling .SetConfigRaw
	I0327 23:51:32.483703 1084987 main.go:141] libmachine: () Calling .GetMachineName
	I0327 23:51:32.483892 1084987 main.go:141] libmachine: (functional-800754) Calling .DriverName
	I0327 23:51:32.518945 1084987 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0327 23:51:32.520238 1084987 start.go:297] selected driver: kvm2
	I0327 23:51:32.520250 1084987 start.go:901] validating driver "kvm2" against &{Name:functional-800754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.3 ClusterName:functional-800754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.216 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 23:51:32.520357 1084987 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 23:51:32.522388 1084987 out.go:177] 
	W0327 23:51:32.523753 1084987 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0327 23:51:32.524959 1084987 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

TestFunctional/parallel/StatusCmd (1.12s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.12s)

TestFunctional/parallel/ServiceCmdConnect (9.7s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-800754 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-800754 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-jpjv7" [0788e864-9574-4bfa-b835-25fc9ce1aa51] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-jpjv7" [0788e864-9574-4bfa-b835-25fc9ce1aa51] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.005155652s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.216:32261
functional_test.go:1671: http://192.168.39.216:32261: success! body:

Hostname: hello-node-connect-55497b8b78-jpjv7

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.216:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.216:32261
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.70s)

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (35.93s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [804f3819-36a7-458d-a3be-78027f6bdb3c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005253902s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-800754 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-800754 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-800754 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-800754 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3c57f533-a9e5-4c2f-a785-efefc2dfe5aa] Pending
helpers_test.go:344: "sp-pod" [3c57f533-a9e5-4c2f-a785-efefc2dfe5aa] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3c57f533-a9e5-4c2f-a785-efefc2dfe5aa] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004742088s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-800754 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-800754 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-800754 delete -f testdata/storage-provisioner/pod.yaml: (1.862378498s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-800754 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6b2fa1e9-6c2d-4281-98b9-156fe7cb5dc1] Pending
helpers_test.go:344: "sp-pod" [6b2fa1e9-6c2d-4281-98b9-156fe7cb5dc1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6b2fa1e9-6c2d-4281-98b9-156fe7cb5dc1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.005927889s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-800754 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (35.93s)

TestFunctional/parallel/SSHCmd (0.46s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.46s)

TestFunctional/parallel/CpCmd (1.55s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh -n functional-800754 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 cp functional-800754:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2129760274/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh -n functional-800754 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh -n functional-800754 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.55s)

TestFunctional/parallel/MySQL (28.42s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-800754 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-nhc65" [7e0329d4-bd97-4f9e-bbd6-796b405c9a02] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-nhc65" [7e0329d4-bd97-4f9e-bbd6-796b405c9a02] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 27.004333296s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-800754 exec mysql-859648c796-nhc65 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-800754 exec mysql-859648c796-nhc65 -- mysql -ppassword -e "show databases;": exit status 1 (129.099775ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-800754 exec mysql-859648c796-nhc65 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.42s)

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1076522/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh "sudo cat /etc/test/nested/copy/1076522/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

                                                
                                    
TestFunctional/parallel/CertSync (1.66s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1076522.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh "sudo cat /etc/ssl/certs/1076522.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1076522.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh "sudo cat /usr/share/ca-certificates/1076522.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/10765222.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh "sudo cat /etc/ssl/certs/10765222.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/10765222.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh "sudo cat /usr/share/ca-certificates/10765222.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.66s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-800754 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-800754 ssh "sudo systemctl is-active docker": exit status 1 (227.249563ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-800754 ssh "sudo systemctl is-active containerd": exit status 1 (234.81843ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
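
Note: the non-zero exits above are the expected outcome. systemctl is-active exits non-zero (3 here) when a unit is inactive, minikube ssh surfaces that remote failure as exit status 1, and the test only requires that the runtimes not in use on this crio cluster (docker and containerd) report inactive. A manual spot check with the same commands as the log:

	# Both should print "inactive" and exit non-zero on a crio-backed profile.
	out/minikube-linux-amd64 -p functional-800754 ssh "sudo systemctl is-active docker"
	out/minikube-linux-amd64 -p functional-800754 ssh "sudo systemctl is-active containerd"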

                                                
                                    
TestFunctional/parallel/License (1.13s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2284: (dbg) Done: out/minikube-linux-amd64 license: (1.128475694s)
--- PASS: TestFunctional/parallel/License (1.13s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-800754 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-800754 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-jr845" [53a41f21-8c08-468d-99b7-ab89e7f48645] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-jr845" [53a41f21-8c08-468d-99b7-ab89e7f48645] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.005116708s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.54s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.54s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "234.449285ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "61.711419ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.30s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "239.496564ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "62.030852ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (15.82s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-800754 /tmp/TestFunctionalparallelMountCmdany-port3489053085/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1711583486746969766" to /tmp/TestFunctionalparallelMountCmdany-port3489053085/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1711583486746969766" to /tmp/TestFunctionalparallelMountCmdany-port3489053085/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1711583486746969766" to /tmp/TestFunctionalparallelMountCmdany-port3489053085/001/test-1711583486746969766
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-800754 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (215.204752ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 27 23:51 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 27 23:51 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 27 23:51 test-1711583486746969766
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh cat /mount-9p/test-1711583486746969766
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-800754 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [de8a572b-e0cd-4dfb-b0e6-aa7bcf8b803c] Pending
helpers_test.go:344: "busybox-mount" [de8a572b-e0cd-4dfb-b0e6-aa7bcf8b803c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [de8a572b-e0cd-4dfb-b0e6-aa7bcf8b803c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [de8a572b-e0cd-4dfb-b0e6-aa7bcf8b803c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 13.005908556s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-800754 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh "sudo umount -f /mount-9p"
E0327 23:51:42.042578 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-800754 /tmp/TestFunctionalparallelMountCmdany-port3489053085/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (15.82s)
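
Note: the 9p mount flow exercised above can be repeated interactively with the same commands; the host directory below is an arbitrary stand-in for the per-test temp directory:

	# Terminal 1: keep the 9p mount running in the foreground (host path is illustrative).
	out/minikube-linux-amd64 mount -p functional-800754 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1
	# Terminal 2: verify the mount from inside the VM, then clean up.
	out/minikube-linux-amd64 -p functional-800754 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-800754 ssh -- ls -la /mount-9p
	out/minikube-linux-amd64 -p functional-800754 ssh "sudo umount -f /mount-9p"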

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 service list -o json
functional_test.go:1490: Took "659.011736ms" to run "out/minikube-linux-amd64 -p functional-800754 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.216:31065
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.216:31065
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.68s)
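
Note: the endpoints reported by the service subcommands are simply the node IP plus the NodePort allocated when hello-node was exposed in ServiceCmd/DeployApp, which is why HTTPS, Format and URL all resolve to 192.168.39.216:31065. The port can be cross-checked directly:

	# Resolve the URL via minikube, or read the NodePort straight from the Service object.
	out/minikube-linux-amd64 -p functional-800754 service hello-node --url
	kubectl --context functional-800754 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'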

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-800754 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-800754
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-800754
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-800754 image ls --format short --alsologtostderr:
I0327 23:52:07.019999 1086315 out.go:291] Setting OutFile to fd 1 ...
I0327 23:52:07.020098 1086315 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 23:52:07.020108 1086315 out.go:304] Setting ErrFile to fd 2...
I0327 23:52:07.020113 1086315 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 23:52:07.020373 1086315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
I0327 23:52:07.020927 1086315 config.go:182] Loaded profile config "functional-800754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0327 23:52:07.021034 1086315 config.go:182] Loaded profile config "functional-800754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0327 23:52:07.021880 1086315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0327 23:52:07.022062 1086315 main.go:141] libmachine: Launching plugin server for driver kvm2
I0327 23:52:07.039423 1086315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33833
I0327 23:52:07.040188 1086315 main.go:141] libmachine: () Calling .GetVersion
I0327 23:52:07.040795 1086315 main.go:141] libmachine: Using API Version  1
I0327 23:52:07.040824 1086315 main.go:141] libmachine: () Calling .SetConfigRaw
I0327 23:52:07.041268 1086315 main.go:141] libmachine: () Calling .GetMachineName
I0327 23:52:07.041470 1086315 main.go:141] libmachine: (functional-800754) Calling .GetState
I0327 23:52:07.043789 1086315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0327 23:52:07.043835 1086315 main.go:141] libmachine: Launching plugin server for driver kvm2
I0327 23:52:07.060244 1086315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45921
I0327 23:52:07.060646 1086315 main.go:141] libmachine: () Calling .GetVersion
I0327 23:52:07.061138 1086315 main.go:141] libmachine: Using API Version  1
I0327 23:52:07.061159 1086315 main.go:141] libmachine: () Calling .SetConfigRaw
I0327 23:52:07.061651 1086315 main.go:141] libmachine: () Calling .GetMachineName
I0327 23:52:07.061827 1086315 main.go:141] libmachine: (functional-800754) Calling .DriverName
I0327 23:52:07.062010 1086315 ssh_runner.go:195] Run: systemctl --version
I0327 23:52:07.062034 1086315 main.go:141] libmachine: (functional-800754) Calling .GetSSHHostname
I0327 23:52:07.064757 1086315 main.go:141] libmachine: (functional-800754) DBG | domain functional-800754 has defined MAC address 52:54:00:f3:8d:da in network mk-functional-800754
I0327 23:52:07.065484 1086315 main.go:141] libmachine: (functional-800754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:8d:da", ip: ""} in network mk-functional-800754: {Iface:virbr1 ExpiryTime:2024-03-28 00:43:10 +0000 UTC Type:0 Mac:52:54:00:f3:8d:da Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:functional-800754 Clientid:01:52:54:00:f3:8d:da}
I0327 23:52:07.065505 1086315 main.go:141] libmachine: (functional-800754) DBG | domain functional-800754 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:8d:da in network mk-functional-800754
I0327 23:52:07.065753 1086315 main.go:141] libmachine: (functional-800754) Calling .GetSSHPort
I0327 23:52:07.066000 1086315 main.go:141] libmachine: (functional-800754) Calling .GetSSHKeyPath
I0327 23:52:07.066268 1086315 main.go:141] libmachine: (functional-800754) Calling .GetSSHUsername
I0327 23:52:07.066417 1086315 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/functional-800754/id_rsa Username:docker}
I0327 23:52:07.158141 1086315 ssh_runner.go:195] Run: sudo crictl images --output json
I0327 23:52:07.328279 1086315 main.go:141] libmachine: Making call to close driver server
I0327 23:52:07.328297 1086315 main.go:141] libmachine: (functional-800754) Calling .Close
I0327 23:52:07.328667 1086315 main.go:141] libmachine: Successfully made call to close driver server
I0327 23:52:07.328695 1086315 main.go:141] libmachine: Making call to close connection to plugin binary
I0327 23:52:07.328715 1086315 main.go:141] libmachine: (functional-800754) DBG | Closing plugin on server side
I0327 23:52:07.328783 1086315 main.go:141] libmachine: Making call to close driver server
I0327 23:52:07.328823 1086315 main.go:141] libmachine: (functional-800754) Calling .Close
I0327 23:52:07.329109 1086315 main.go:141] libmachine: (functional-800754) DBG | Closing plugin on server side
I0327 23:52:07.329158 1086315 main.go:141] libmachine: Successfully made call to close driver server
I0327 23:52:07.329173 1086315 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.39s)
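
Note: as the --alsologtostderr trace shows, image ls on a crio cluster is answered by running sudo crictl images --output json inside the node over SSH; the short/table/json/yaml variants below differ only in how that result is rendered. Equivalent manual invocations against the same profile:

	out/minikube-linux-amd64 -p functional-800754 image ls --format short
	out/minikube-linux-amd64 -p functional-800754 image ls --format table
	# What the command runs inside the node, per the trace above:
	out/minikube-linux-amd64 -p functional-800754 ssh "sudo crictl images --output json"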

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-800754 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | latest             | 92b11f67642b6 | 191MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-800754  | 4f9fdacb41fb7 | 3.33kB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-proxy              | v1.29.3            | a1d263b5dc5b0 | 83.6MB |
| registry.k8s.io/kube-scheduler          | v1.29.3            | 8c390d98f50c0 | 60.7MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.29.3            | 6052a25da3f97 | 123MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| gcr.io/google-containers/addon-resizer  | functional-800754  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/kube-apiserver          | v1.29.3            | 39f995c9f1996 | 129MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-800754 image ls --format table --alsologtostderr:
I0327 23:52:07.397480 1086394 out.go:291] Setting OutFile to fd 1 ...
I0327 23:52:07.397859 1086394 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 23:52:07.397910 1086394 out.go:304] Setting ErrFile to fd 2...
I0327 23:52:07.397928 1086394 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 23:52:07.398417 1086394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
I0327 23:52:07.399872 1086394 config.go:182] Loaded profile config "functional-800754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0327 23:52:07.400081 1086394 config.go:182] Loaded profile config "functional-800754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0327 23:52:07.400490 1086394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0327 23:52:07.400540 1086394 main.go:141] libmachine: Launching plugin server for driver kvm2
I0327 23:52:07.415960 1086394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34409
I0327 23:52:07.416552 1086394 main.go:141] libmachine: () Calling .GetVersion
I0327 23:52:07.417098 1086394 main.go:141] libmachine: Using API Version  1
I0327 23:52:07.417122 1086394 main.go:141] libmachine: () Calling .SetConfigRaw
I0327 23:52:07.417446 1086394 main.go:141] libmachine: () Calling .GetMachineName
I0327 23:52:07.417640 1086394 main.go:141] libmachine: (functional-800754) Calling .GetState
I0327 23:52:07.419397 1086394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0327 23:52:07.419435 1086394 main.go:141] libmachine: Launching plugin server for driver kvm2
I0327 23:52:07.434420 1086394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33391
I0327 23:52:07.434841 1086394 main.go:141] libmachine: () Calling .GetVersion
I0327 23:52:07.435355 1086394 main.go:141] libmachine: Using API Version  1
I0327 23:52:07.435386 1086394 main.go:141] libmachine: () Calling .SetConfigRaw
I0327 23:52:07.435795 1086394 main.go:141] libmachine: () Calling .GetMachineName
I0327 23:52:07.436039 1086394 main.go:141] libmachine: (functional-800754) Calling .DriverName
I0327 23:52:07.436289 1086394 ssh_runner.go:195] Run: systemctl --version
I0327 23:52:07.436332 1086394 main.go:141] libmachine: (functional-800754) Calling .GetSSHHostname
I0327 23:52:07.438979 1086394 main.go:141] libmachine: (functional-800754) DBG | domain functional-800754 has defined MAC address 52:54:00:f3:8d:da in network mk-functional-800754
I0327 23:52:07.439358 1086394 main.go:141] libmachine: (functional-800754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:8d:da", ip: ""} in network mk-functional-800754: {Iface:virbr1 ExpiryTime:2024-03-28 00:43:10 +0000 UTC Type:0 Mac:52:54:00:f3:8d:da Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:functional-800754 Clientid:01:52:54:00:f3:8d:da}
I0327 23:52:07.439389 1086394 main.go:141] libmachine: (functional-800754) DBG | domain functional-800754 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:8d:da in network mk-functional-800754
I0327 23:52:07.439480 1086394 main.go:141] libmachine: (functional-800754) Calling .GetSSHPort
I0327 23:52:07.439663 1086394 main.go:141] libmachine: (functional-800754) Calling .GetSSHKeyPath
I0327 23:52:07.439846 1086394 main.go:141] libmachine: (functional-800754) Calling .GetSSHUsername
I0327 23:52:07.440005 1086394 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/functional-800754/id_rsa Username:docker}
I0327 23:52:07.533367 1086394 ssh_runner.go:195] Run: sudo crictl images --output json
I0327 23:52:07.574994 1086394 main.go:141] libmachine: Making call to close driver server
I0327 23:52:07.575019 1086394 main.go:141] libmachine: (functional-800754) Calling .Close
I0327 23:52:07.575353 1086394 main.go:141] libmachine: Successfully made call to close driver server
I0327 23:52:07.575382 1086394 main.go:141] libmachine: Making call to close connection to plugin binary
I0327 23:52:07.575396 1086394 main.go:141] libmachine: Making call to close driver server
I0327 23:52:07.575407 1086394 main.go:141] libmachine: (functional-800754) Calling .Close
I0327 23:52:07.575452 1086394 main.go:141] libmachine: (functional-800754) DBG | Closing plugin on server side
I0327 23:52:07.575639 1086394 main.go:141] libmachine: Successfully made call to close driver server
I0327 23:52:07.575666 1086394 main.go:141] libmachine: (functional-800754) DBG | Closing plugin on server side
I0327 23:52:07.575669 1086394 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-800754 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-800754"],"size":"34114467"},{"id":"a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","repoDigests":["registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d","registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3"],"size":"83634073"},{"id":"da86e6b
a6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e","repoDigests":["docker.io/library/nginx@sha256:52478f8cd6a142fd462f0a7614a7bb064e969a4c083648235d6943c786df8cc7","docker.io/library/nginx@sha2
56:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e"],"repoTags":["docker.io/library/nginx:latest"],"size":"190865876"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a","registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e7
4efbf08bddd595e930123f6021f715198b8e88"],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"60724018"},{"id":"39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","repoDigests":["registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322","registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"128508878"},{"id":"6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606","registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"123142962"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e0
9b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikub
e/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"4f9fdacb41fb7766004ebf7b2cce192e4d98b37d757d780cf2b2ba4ce9d861c7","repoDigests":["localhost/minikube-local-cache-test@sha256:412dac9c936a26ec62e86641aed4985e29fbedcbfb4c06a8160c541d1662600e"],"repoTags":["localhost/minikube-local-cache-test:functional-800754"],"size":"3330"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause
@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-800754 image ls --format json --alsologtostderr:
I0327 23:52:07.018335 1086316 out.go:291] Setting OutFile to fd 1 ...
I0327 23:52:07.018681 1086316 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 23:52:07.018726 1086316 out.go:304] Setting ErrFile to fd 2...
I0327 23:52:07.018739 1086316 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 23:52:07.019023 1086316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
I0327 23:52:07.019691 1086316 config.go:182] Loaded profile config "functional-800754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0327 23:52:07.019844 1086316 config.go:182] Loaded profile config "functional-800754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0327 23:52:07.020368 1086316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0327 23:52:07.020461 1086316 main.go:141] libmachine: Launching plugin server for driver kvm2
I0327 23:52:07.037915 1086316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39815
I0327 23:52:07.038477 1086316 main.go:141] libmachine: () Calling .GetVersion
I0327 23:52:07.039207 1086316 main.go:141] libmachine: Using API Version  1
I0327 23:52:07.039234 1086316 main.go:141] libmachine: () Calling .SetConfigRaw
I0327 23:52:07.039614 1086316 main.go:141] libmachine: () Calling .GetMachineName
I0327 23:52:07.039849 1086316 main.go:141] libmachine: (functional-800754) Calling .GetState
I0327 23:52:07.042322 1086316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0327 23:52:07.042359 1086316 main.go:141] libmachine: Launching plugin server for driver kvm2
I0327 23:52:07.057894 1086316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36143
I0327 23:52:07.058419 1086316 main.go:141] libmachine: () Calling .GetVersion
I0327 23:52:07.058997 1086316 main.go:141] libmachine: Using API Version  1
I0327 23:52:07.059021 1086316 main.go:141] libmachine: () Calling .SetConfigRaw
I0327 23:52:07.059426 1086316 main.go:141] libmachine: () Calling .GetMachineName
I0327 23:52:07.059687 1086316 main.go:141] libmachine: (functional-800754) Calling .DriverName
I0327 23:52:07.059931 1086316 ssh_runner.go:195] Run: systemctl --version
I0327 23:52:07.059960 1086316 main.go:141] libmachine: (functional-800754) Calling .GetSSHHostname
I0327 23:52:07.063244 1086316 main.go:141] libmachine: (functional-800754) DBG | domain functional-800754 has defined MAC address 52:54:00:f3:8d:da in network mk-functional-800754
I0327 23:52:07.063980 1086316 main.go:141] libmachine: (functional-800754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:8d:da", ip: ""} in network mk-functional-800754: {Iface:virbr1 ExpiryTime:2024-03-28 00:43:10 +0000 UTC Type:0 Mac:52:54:00:f3:8d:da Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:functional-800754 Clientid:01:52:54:00:f3:8d:da}
I0327 23:52:07.064032 1086316 main.go:141] libmachine: (functional-800754) DBG | domain functional-800754 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:8d:da in network mk-functional-800754
I0327 23:52:07.064282 1086316 main.go:141] libmachine: (functional-800754) Calling .GetSSHPort
I0327 23:52:07.064500 1086316 main.go:141] libmachine: (functional-800754) Calling .GetSSHKeyPath
I0327 23:52:07.064663 1086316 main.go:141] libmachine: (functional-800754) Calling .GetSSHUsername
I0327 23:52:07.064823 1086316 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/functional-800754/id_rsa Username:docker}
I0327 23:52:07.166196 1086316 ssh_runner.go:195] Run: sudo crictl images --output json
I0327 23:52:07.272847 1086316 main.go:141] libmachine: Making call to close driver server
I0327 23:52:07.272861 1086316 main.go:141] libmachine: (functional-800754) Calling .Close
I0327 23:52:07.273234 1086316 main.go:141] libmachine: Successfully made call to close driver server
I0327 23:52:07.273256 1086316 main.go:141] libmachine: Making call to close connection to plugin binary
I0327 23:52:07.273266 1086316 main.go:141] libmachine: Making call to close driver server
I0327 23:52:07.273276 1086316 main.go:141] libmachine: (functional-800754) Calling .Close
I0327 23:52:07.273571 1086316 main.go:141] libmachine: Successfully made call to close driver server
I0327 23:52:07.273595 1086316 main.go:141] libmachine: Making call to close connection to plugin binary
I0327 23:52:07.273579 1086316 main.go:141] libmachine: (functional-800754) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-800754 image ls --format yaml --alsologtostderr:
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e
repoDigests:
- docker.io/library/nginx@sha256:52478f8cd6a142fd462f0a7614a7bb064e969a4c083648235d6943c786df8cc7
- docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e
repoTags:
- docker.io/library/nginx:latest
size: "190865876"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392
repoDigests:
- registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d
- registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "83634073"
- id: 8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a
- registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "60724018"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322
- registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "128508878"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-800754
size: "34114467"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 4f9fdacb41fb7766004ebf7b2cce192e4d98b37d757d780cf2b2ba4ce9d861c7
repoDigests:
- localhost/minikube-local-cache-test@sha256:412dac9c936a26ec62e86641aed4985e29fbedcbfb4c06a8160c541d1662600e
repoTags:
- localhost/minikube-local-cache-test:functional-800754
size: "3330"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606
- registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "123142962"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-800754 image ls --format yaml --alsologtostderr:
I0327 23:52:07.024674 1086317 out.go:291] Setting OutFile to fd 1 ...
I0327 23:52:07.024833 1086317 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 23:52:07.024861 1086317 out.go:304] Setting ErrFile to fd 2...
I0327 23:52:07.024882 1086317 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 23:52:07.025122 1086317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
I0327 23:52:07.026514 1086317 config.go:182] Loaded profile config "functional-800754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0327 23:52:07.026776 1086317 config.go:182] Loaded profile config "functional-800754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0327 23:52:07.027909 1086317 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0327 23:52:07.028201 1086317 main.go:141] libmachine: Launching plugin server for driver kvm2
I0327 23:52:07.043068 1086317 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36411
I0327 23:52:07.043586 1086317 main.go:141] libmachine: () Calling .GetVersion
I0327 23:52:07.044106 1086317 main.go:141] libmachine: Using API Version  1
I0327 23:52:07.044147 1086317 main.go:141] libmachine: () Calling .SetConfigRaw
I0327 23:52:07.044558 1086317 main.go:141] libmachine: () Calling .GetMachineName
I0327 23:52:07.044773 1086317 main.go:141] libmachine: (functional-800754) Calling .GetState
I0327 23:52:07.047152 1086317 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0327 23:52:07.047206 1086317 main.go:141] libmachine: Launching plugin server for driver kvm2
I0327 23:52:07.062604 1086317 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41689
I0327 23:52:07.063056 1086317 main.go:141] libmachine: () Calling .GetVersion
I0327 23:52:07.063502 1086317 main.go:141] libmachine: Using API Version  1
I0327 23:52:07.063527 1086317 main.go:141] libmachine: () Calling .SetConfigRaw
I0327 23:52:07.063914 1086317 main.go:141] libmachine: () Calling .GetMachineName
I0327 23:52:07.064193 1086317 main.go:141] libmachine: (functional-800754) Calling .DriverName
I0327 23:52:07.064499 1086317 ssh_runner.go:195] Run: systemctl --version
I0327 23:52:07.064531 1086317 main.go:141] libmachine: (functional-800754) Calling .GetSSHHostname
I0327 23:52:07.067730 1086317 main.go:141] libmachine: (functional-800754) DBG | domain functional-800754 has defined MAC address 52:54:00:f3:8d:da in network mk-functional-800754
I0327 23:52:07.068170 1086317 main.go:141] libmachine: (functional-800754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:8d:da", ip: ""} in network mk-functional-800754: {Iface:virbr1 ExpiryTime:2024-03-28 00:43:10 +0000 UTC Type:0 Mac:52:54:00:f3:8d:da Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:functional-800754 Clientid:01:52:54:00:f3:8d:da}
I0327 23:52:07.068249 1086317 main.go:141] libmachine: (functional-800754) DBG | domain functional-800754 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:8d:da in network mk-functional-800754
I0327 23:52:07.068412 1086317 main.go:141] libmachine: (functional-800754) Calling .GetSSHPort
I0327 23:52:07.068570 1086317 main.go:141] libmachine: (functional-800754) Calling .GetSSHKeyPath
I0327 23:52:07.068711 1086317 main.go:141] libmachine: (functional-800754) Calling .GetSSHUsername
I0327 23:52:07.068850 1086317 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/functional-800754/id_rsa Username:docker}
I0327 23:52:07.182372 1086317 ssh_runner.go:195] Run: sudo crictl images --output json
I0327 23:52:07.329032 1086317 main.go:141] libmachine: Making call to close driver server
I0327 23:52:07.329047 1086317 main.go:141] libmachine: (functional-800754) Calling .Close
I0327 23:52:07.329309 1086317 main.go:141] libmachine: Successfully made call to close driver server
I0327 23:52:07.329339 1086317 main.go:141] libmachine: Making call to close connection to plugin binary
I0327 23:52:07.329355 1086317 main.go:141] libmachine: Making call to close driver server
I0327 23:52:07.329363 1086317 main.go:141] libmachine: (functional-800754) Calling .Close
I0327 23:52:07.331275 1086317 main.go:141] libmachine: (functional-800754) DBG | Closing plugin on server side
I0327 23:52:07.331341 1086317 main.go:141] libmachine: Successfully made call to close driver server
I0327 23:52:07.331401 1086317 main.go:141] libmachine: Making call to close connection to plugin binary
E0327 23:52:07.332051 1086317 logFile.go:53] failed to close the audit log: invalid argument
W0327 23:52:07.332070 1086317 root.go:91] failed to log command end to audit: failed to convert logs to rows: failed to unmarshal "{\"specversion\":\"1.0\",\"id\":\"a7ad0710-dbb2-4215-a1fc-021cfff80b78\",\"source\":\"https://minikube.sigs.k8s.io/\",\"type\":\"io.k8s.sigs.minikube.audit\",\"datacontenttype\":\"application/": unexpected end of JSON input
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-800754 ssh pgrep buildkitd: exit status 1 (225.639165ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 image build -t localhost/my-image:functional-800754 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-800754 image build -t localhost/my-image:functional-800754 testdata/build --alsologtostderr: (3.490721795s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-800754 image build -t localhost/my-image:functional-800754 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 1513a7d3823
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-800754
--> 0aa70b04375
Successfully tagged localhost/my-image:functional-800754
0aa70b043756e1eddeafd1300fde3b046fb9774c771e4bd1019670084f7375b4
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-800754 image build -t localhost/my-image:functional-800754 testdata/build --alsologtostderr:
I0327 23:52:07.561775 1086435 out.go:291] Setting OutFile to fd 1 ...
I0327 23:52:07.561923 1086435 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 23:52:07.561939 1086435 out.go:304] Setting ErrFile to fd 2...
I0327 23:52:07.562015 1086435 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 23:52:07.562794 1086435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
I0327 23:52:07.564125 1086435 config.go:182] Loaded profile config "functional-800754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0327 23:52:07.564790 1086435 config.go:182] Loaded profile config "functional-800754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0327 23:52:07.565192 1086435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0327 23:52:07.565230 1086435 main.go:141] libmachine: Launching plugin server for driver kvm2
I0327 23:52:07.581024 1086435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38483
I0327 23:52:07.581561 1086435 main.go:141] libmachine: () Calling .GetVersion
I0327 23:52:07.582213 1086435 main.go:141] libmachine: Using API Version  1
I0327 23:52:07.582250 1086435 main.go:141] libmachine: () Calling .SetConfigRaw
I0327 23:52:07.582671 1086435 main.go:141] libmachine: () Calling .GetMachineName
I0327 23:52:07.582907 1086435 main.go:141] libmachine: (functional-800754) Calling .GetState
I0327 23:52:07.584960 1086435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0327 23:52:07.585011 1086435 main.go:141] libmachine: Launching plugin server for driver kvm2
I0327 23:52:07.602356 1086435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46047
I0327 23:52:07.602777 1086435 main.go:141] libmachine: () Calling .GetVersion
I0327 23:52:07.603285 1086435 main.go:141] libmachine: Using API Version  1
I0327 23:52:07.603313 1086435 main.go:141] libmachine: () Calling .SetConfigRaw
I0327 23:52:07.603697 1086435 main.go:141] libmachine: () Calling .GetMachineName
I0327 23:52:07.603937 1086435 main.go:141] libmachine: (functional-800754) Calling .DriverName
I0327 23:52:07.604165 1086435 ssh_runner.go:195] Run: systemctl --version
I0327 23:52:07.604191 1086435 main.go:141] libmachine: (functional-800754) Calling .GetSSHHostname
I0327 23:52:07.607062 1086435 main.go:141] libmachine: (functional-800754) DBG | domain functional-800754 has defined MAC address 52:54:00:f3:8d:da in network mk-functional-800754
I0327 23:52:07.607429 1086435 main.go:141] libmachine: (functional-800754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:8d:da", ip: ""} in network mk-functional-800754: {Iface:virbr1 ExpiryTime:2024-03-28 00:43:10 +0000 UTC Type:0 Mac:52:54:00:f3:8d:da Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:functional-800754 Clientid:01:52:54:00:f3:8d:da}
I0327 23:52:07.607462 1086435 main.go:141] libmachine: (functional-800754) DBG | domain functional-800754 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:8d:da in network mk-functional-800754
I0327 23:52:07.607630 1086435 main.go:141] libmachine: (functional-800754) Calling .GetSSHPort
I0327 23:52:07.607799 1086435 main.go:141] libmachine: (functional-800754) Calling .GetSSHKeyPath
I0327 23:52:07.607942 1086435 main.go:141] libmachine: (functional-800754) Calling .GetSSHUsername
I0327 23:52:07.608098 1086435 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/functional-800754/id_rsa Username:docker}
I0327 23:52:07.699495 1086435 build_images.go:161] Building image from path: /tmp/build.1142430950.tar
I0327 23:52:07.699585 1086435 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0327 23:52:07.711190 1086435 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1142430950.tar
I0327 23:52:07.715869 1086435 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1142430950.tar: stat -c "%s %y" /var/lib/minikube/build/build.1142430950.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1142430950.tar': No such file or directory
I0327 23:52:07.715913 1086435 ssh_runner.go:362] scp /tmp/build.1142430950.tar --> /var/lib/minikube/build/build.1142430950.tar (3072 bytes)
I0327 23:52:07.746235 1086435 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1142430950
I0327 23:52:07.766315 1086435 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1142430950 -xf /var/lib/minikube/build/build.1142430950.tar
I0327 23:52:07.778847 1086435 crio.go:315] Building image: /var/lib/minikube/build/build.1142430950
I0327 23:52:07.778935 1086435 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-800754 /var/lib/minikube/build/build.1142430950 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0327 23:52:10.964126 1086435 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-800754 /var/lib/minikube/build/build.1142430950 --cgroup-manager=cgroupfs: (3.185160972s)
I0327 23:52:10.964204 1086435 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1142430950
I0327 23:52:10.978138 1086435 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1142430950.tar
I0327 23:52:10.991519 1086435 build_images.go:217] Built localhost/my-image:functional-800754 from /tmp/build.1142430950.tar
I0327 23:52:10.991562 1086435 build_images.go:133] succeeded building to: functional-800754
I0327 23:52:10.991567 1086435 build_images.go:134] failed building to: 
I0327 23:52:10.991625 1086435 main.go:141] libmachine: Making call to close driver server
I0327 23:52:10.991645 1086435 main.go:141] libmachine: (functional-800754) Calling .Close
I0327 23:52:10.992038 1086435 main.go:141] libmachine: Successfully made call to close driver server
I0327 23:52:10.992046 1086435 main.go:141] libmachine: (functional-800754) DBG | Closing plugin on server side
I0327 23:52:10.992057 1086435 main.go:141] libmachine: Making call to close connection to plugin binary
I0327 23:52:10.992068 1086435 main.go:141] libmachine: Making call to close driver server
I0327 23:52:10.992074 1086435 main.go:141] libmachine: (functional-800754) Calling .Close
I0327 23:52:10.992331 1086435 main.go:141] libmachine: Successfully made call to close driver server
I0327 23:52:10.992347 1086435 main.go:141] libmachine: Making call to close connection to plugin binary
I0327 23:52:10.992374 1086435 main.go:141] libmachine: (functional-800754) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.97s)
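
The build log above shows a three-step containerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt) being built inside the VM with podman through "minikube image build". A minimal sketch of an equivalent run against the same profile follows; the Dockerfile is reconstructed from the STEP 1/3..3/3 lines, so the real testdata/build directory may differ.

# Hypothetical stand-in for the testdata/build context exercised above.
mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
echo "hello from minikube image build" > content.txt
printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
# Build inside the cluster's container runtime (podman under cri-o, as in the log),
# then confirm the resulting tag is visible to that runtime.
minikube -p functional-800754 image build -t localhost/my-image:functional-800754 /tmp/build-sketch
minikube -p functional-800754 image ls | grep my-image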

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.555937963s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-800754
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.58s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 image load --daemon gcr.io/google-containers/addon-resizer:functional-800754 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-800754 image load --daemon gcr.io/google-containers/addon-resizer:functional-800754 --alsologtostderr: (4.411213997s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.74s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 image load --daemon gcr.io/google-containers/addon-resizer:functional-800754 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-800754 image load --daemon gcr.io/google-containers/addon-resizer:functional-800754 --alsologtostderr: (4.758389651s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.04s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-800754 /tmp/TestFunctionalparallelMountCmdspecific-port2838007246/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-800754 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (312.466354ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-800754 /tmp/TestFunctionalparallelMountCmdspecific-port2838007246/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-800754 ssh "sudo umount -f /mount-9p": exit status 1 (258.883573ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-800754 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-800754 /tmp/TestFunctionalparallelMountCmdspecific-port2838007246/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.83s)
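
The specific-port run above starts a background 9p mount on port 46464, probes it with findmnt, lists the (empty) guest directory, and tears the mount down. The same cycle can be replayed by hand roughly as below; /tmp/mount-sketch is a hypothetical host directory, and the profile and port are taken from the log.

# Start the mount daemon in the background, as the test harness does.
mkdir -p /tmp/mount-sketch
minikube mount -p functional-800754 /tmp/mount-sketch:/mount-9p --port 46464 &
MOUNT_PID=$!
sleep 5   # give the 9p server a moment to come up
# Verify the 9p filesystem is visible in the guest, then inspect and clean up.
minikube -p functional-800754 ssh "findmnt -T /mount-9p | grep 9p"
minikube -p functional-800754 ssh -- ls -la /mount-9p
kill "$MOUNT_PID"
# A forced umount may report "not mounted" once the daemon is gone, as in the log.
minikube -p functional-800754 ssh "sudo umount -f /mount-9p" || true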

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-800754 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3237066806/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-800754 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3237066806/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-800754 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3237066806/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-800754 ssh "findmnt -T" /mount1: exit status 1 (383.808827ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-800754 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-800754 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3237066806/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-800754 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3237066806/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-800754 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3237066806/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.57s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
2024/03/27 23:51:47 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.632192686s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-800754
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 image load --daemon gcr.io/google-containers/addon-resizer:functional-800754 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-800754 image load --daemon gcr.io/google-containers/addon-resizer:functional-800754 --alsologtostderr: (3.818890329s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.71s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 image save gcr.io/google-containers/addon-resizer:functional-800754 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-800754 image save gcr.io/google-containers/addon-resizer:functional-800754 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.59627952s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.60s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (1.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 image rm gcr.io/google-containers/addon-resizer:functional-800754 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-linux-amd64 -p functional-800754 image rm gcr.io/google-containers/addon-resizer:functional-800754 --alsologtostderr: (1.535552011s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.98s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-800754
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-800754 image save --daemon gcr.io/google-containers/addon-resizer:functional-800754 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-800754 image save --daemon gcr.io/google-containers/addon-resizer:functional-800754 --alsologtostderr: (2.672207177s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-800754
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.74s)
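
Taken together, the ImageSaveToFile, ImageRemove and ImageSaveDaemon runs above exercise a full round trip: export a tag to a tarball, drop it from the cluster runtime, and pull it back out to the host daemon. A condensed sketch of that cycle, assuming a minikube binary on PATH and a writable /tmp:

# Export the tag from the cluster runtime to a tarball on the host.
minikube -p functional-800754 image save gcr.io/google-containers/addon-resizer:functional-800754 /tmp/addon-resizer-save.tar
# Remove it from the runtime and confirm it is gone.
minikube -p functional-800754 image rm gcr.io/google-containers/addon-resizer:functional-800754
minikube -p functional-800754 image ls | grep addon-resizer || echo "tag removed from runtime"
# Push the tag straight into the local docker daemon instead of a file.
minikube -p functional-800754 image save --daemon gcr.io/google-containers/addon-resizer:functional-800754
docker image inspect gcr.io/google-containers/addon-resizer:functional-800754 > /dev/null && echo "present in docker daemon"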

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-800754
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-800754
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-800754
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (279.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-377576 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0327 23:56:14.356333 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
E0327 23:56:21.207474 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
E0327 23:56:21.212822 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
E0327 23:56:21.223171 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
E0327 23:56:21.243408 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
E0327 23:56:21.283864 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
E0327 23:56:21.364224 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
E0327 23:56:21.524709 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
E0327 23:56:21.845435 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
E0327 23:56:22.486414 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
E0327 23:56:23.766730 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
E0327 23:56:26.328744 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
E0327 23:56:31.449942 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
E0327 23:56:41.691103 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-377576 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m38.764629303s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (279.49s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-377576 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-377576 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-377576 -- rollout status deployment/busybox: (4.143926043s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-377576 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-377576 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-377576 -- exec busybox-7fdf7869d9-2dqtf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-377576 -- exec busybox-7fdf7869d9-78c89 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-377576 -- exec busybox-7fdf7869d9-jrh7n -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-377576 -- exec busybox-7fdf7869d9-2dqtf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-377576 -- exec busybox-7fdf7869d9-78c89 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-377576 -- exec busybox-7fdf7869d9-jrh7n -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-377576 -- exec busybox-7fdf7869d9-2dqtf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-377576 -- exec busybox-7fdf7869d9-78c89 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-377576 -- exec busybox-7fdf7869d9-jrh7n -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.59s)
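
DeployApp applies testdata/ha/ha-pod-dns-test.yaml, waits for the busybox rollout, and runs nslookup inside every replica. That manifest is not included in this report, so the deployment below is only a hypothetical stand-in with three busybox replicas matching the busybox-7fdf7869d9-* pod names seen above.

# Hypothetical replacement for testdata/ha/ha-pod-dns-test.yaml.
kubectl --context ha-377576 apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
spec:
  replicas: 3
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: gcr.io/k8s-minikube/busybox
        command: ["sleep", "3600"]
EOF
kubectl --context ha-377576 rollout status deployment/busybox
# The same lookups the test performs, run in every replica.
for pod in $(kubectl --context ha-377576 get pods -l app=busybox -o jsonpath='{.items[*].metadata.name}'); do
  kubectl --context ha-377576 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
done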

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-377576 -- get pods -o jsonpath='{.items[*].metadata.name}'
E0327 23:57:02.172273 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-377576 -- exec busybox-7fdf7869d9-2dqtf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-377576 -- exec busybox-7fdf7869d9-2dqtf -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-377576 -- exec busybox-7fdf7869d9-78c89 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-377576 -- exec busybox-7fdf7869d9-78c89 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-377576 -- exec busybox-7fdf7869d9-jrh7n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-377576 -- exec busybox-7fdf7869d9-jrh7n -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.48s)
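
Each exec above resolves host.minikube.internal from inside a pod, picks the address off the fifth line of busybox nslookup output, and then pings that address (the libvirt gateway, 192.168.39.1 here) once. A minimal sketch of the same check against a single pod, using a pod name copied from the log:

POD=busybox-7fdf7869d9-2dqtf   # pod name taken from the log above
HOST_IP=$(kubectl --context ha-377576 exec "$POD" -- \
  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
echo "host.minikube.internal resolves to ${HOST_IP}"
kubectl --context ha-377576 exec "$POD" -- sh -c "ping -c 1 ${HOST_IP}"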

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (46.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-377576 -v=7 --alsologtostderr
E0327 23:57:43.132878 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-377576 -v=7 --alsologtostderr: (45.420055043s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (46.34s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-377576 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.60s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (14.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 cp testdata/cp-test.txt ha-377576:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 cp ha-377576:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3418864072/001/cp-test_ha-377576.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 cp ha-377576:/home/docker/cp-test.txt ha-377576-m02:/home/docker/cp-test_ha-377576_ha-377576-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576-m02 "sudo cat /home/docker/cp-test_ha-377576_ha-377576-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 cp ha-377576:/home/docker/cp-test.txt ha-377576-m03:/home/docker/cp-test_ha-377576_ha-377576-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576-m03 "sudo cat /home/docker/cp-test_ha-377576_ha-377576-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 cp ha-377576:/home/docker/cp-test.txt ha-377576-m04:/home/docker/cp-test_ha-377576_ha-377576-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576-m04 "sudo cat /home/docker/cp-test_ha-377576_ha-377576-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 cp testdata/cp-test.txt ha-377576-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 cp ha-377576-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3418864072/001/cp-test_ha-377576-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 cp ha-377576-m02:/home/docker/cp-test.txt ha-377576:/home/docker/cp-test_ha-377576-m02_ha-377576.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576 "sudo cat /home/docker/cp-test_ha-377576-m02_ha-377576.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 cp ha-377576-m02:/home/docker/cp-test.txt ha-377576-m03:/home/docker/cp-test_ha-377576-m02_ha-377576-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576-m03 "sudo cat /home/docker/cp-test_ha-377576-m02_ha-377576-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 cp ha-377576-m02:/home/docker/cp-test.txt ha-377576-m04:/home/docker/cp-test_ha-377576-m02_ha-377576-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576-m04 "sudo cat /home/docker/cp-test_ha-377576-m02_ha-377576-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 cp testdata/cp-test.txt ha-377576-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 cp ha-377576-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3418864072/001/cp-test_ha-377576-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 cp ha-377576-m03:/home/docker/cp-test.txt ha-377576:/home/docker/cp-test_ha-377576-m03_ha-377576.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576 "sudo cat /home/docker/cp-test_ha-377576-m03_ha-377576.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 cp ha-377576-m03:/home/docker/cp-test.txt ha-377576-m02:/home/docker/cp-test_ha-377576-m03_ha-377576-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576-m02 "sudo cat /home/docker/cp-test_ha-377576-m03_ha-377576-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 cp ha-377576-m03:/home/docker/cp-test.txt ha-377576-m04:/home/docker/cp-test_ha-377576-m03_ha-377576-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576-m04 "sudo cat /home/docker/cp-test_ha-377576-m03_ha-377576-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 cp testdata/cp-test.txt ha-377576-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 cp ha-377576-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3418864072/001/cp-test_ha-377576-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 cp ha-377576-m04:/home/docker/cp-test.txt ha-377576:/home/docker/cp-test_ha-377576-m04_ha-377576.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576 "sudo cat /home/docker/cp-test_ha-377576-m04_ha-377576.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 cp ha-377576-m04:/home/docker/cp-test.txt ha-377576-m02:/home/docker/cp-test_ha-377576-m04_ha-377576-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576-m02 "sudo cat /home/docker/cp-test_ha-377576-m04_ha-377576-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 cp ha-377576-m04:/home/docker/cp-test.txt ha-377576-m03:/home/docker/cp-test_ha-377576-m04_ha-377576-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 ssh -n ha-377576-m03 "sudo cat /home/docker/cp-test_ha-377576-m04_ha-377576-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (14.07s)
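
The CopyFile matrix above repeats one primitive for every node pair: "minikube cp" a file onto a node, then "minikube ssh -n" into that node to read it back. One round of that primitive, with the node name taken from the log:

# Copy a file onto a specific node, then cat it back over SSH to verify.
minikube -p ha-377576 cp testdata/cp-test.txt ha-377576-m02:/home/docker/cp-test.txt
minikube -p ha-377576 ssh -n ha-377576-m02 "sudo cat /home/docker/cp-test.txt"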

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.506784492s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.51s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.45s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (17.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-377576 node delete m03 -v=7 --alsologtostderr: (16.5572055s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.37s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.42s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (356.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-377576 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0328 00:11:14.356562 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
E0328 00:11:21.207016 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
E0328 00:12:44.254884 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
E0328 00:16:14.356812 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
E0328 00:16:21.206976 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-377576 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m55.499945758s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (356.37s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.40s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (76.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-377576 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-377576 --control-plane -v=7 --alsologtostderr: (1m15.904787545s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-377576 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.83s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.59s)

                                                
                                    
x
+
TestJSONOutput/start/Command (60.21s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-543889 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-543889 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m0.210275771s)
--- PASS: TestJSONOutput/start/Command (60.21s)
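
With --output=json, minikube emits one CloudEvents-style JSON object per line; the step events carry data.currentstep, data.totalsteps and data.message fields (the same shape is visible verbatim in the TestErrorJSONOutput output further down). A sketch of watching start progress by filtering that stream, assuming a minikube binary on PATH and jq installed on the host:

# Print "<step>/<total> <message>" for every step event in the JSON stream.
minikube start -p json-output-543889 --output=json --user=testUser --memory=2200 \
    --wait=true --driver=kvm2 --container-runtime=crio \
  | jq -r 'select(.type == "io.k8s.sigs.minikube.step")
           | "\(.data.currentstep)/\(.data.totalsteps) \(.data.message)"'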

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.79s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-543889 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.79s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-543889 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.70s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.43s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-543889 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-543889 --output=json --user=testUser: (7.429402566s)
--- PASS: TestJSONOutput/stop/Command (7.43s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-868288 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-868288 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (82.304984ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5364fe02-7c19-4e14-97a0-f7f63eff6950","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-868288] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3fa47f45-9dc0-45f4-ab14-70ff0576c380","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18485"}}
	{"specversion":"1.0","id":"bc7a66bc-2972-44f2-bd86-6d11070b4bed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"044e23fc-9c1b-4bb3-9e82-843f01391ea6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig"}}
	{"specversion":"1.0","id":"b3aaafaa-9bf5-4d17-910f-7521bb75612c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube"}}
	{"specversion":"1.0","id":"bcabbec6-b30c-4f58-8602-07bed5ad89ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b65178ea-48a9-44a0-87f6-c6e71571dc38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"eae4c182-120d-437c-8c62-b3725388e5bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-868288" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-868288
--- PASS: TestErrorJSONOutput (0.23s)
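The events above are minikube's --output=json stream: one CloudEvents-style envelope per line with specversion, id, source, type, datacontenttype and a data payload, ending here with an io.k8s.sigs.minikube.error event for the unsupported driver. Below is a minimal sketch for consuming that stream; the field set is taken from the sample lines above, not from an exhaustive schema.

	// event_decode.go: a minimal sketch for reading minikube's --output=json
	// stream, assuming each line is one CloudEvents-style envelope as shown
	// above. The field set is copied from the sample output, not a full schema.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type minikubeEvent struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}

	func main() {
		// e.g. out/minikube-linux-amd64 start --output=json ... | go run event_decode.go
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev minikubeEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON lines
			}
			// io.k8s.sigs.minikube.error events carry exitcode/message fields,
			// as in the DRV_UNSUPPORTED_OS sample above.
			fmt.Printf("%-40s %s\n", ev.Type, ev.Data["message"])
		}
	}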

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (96.93s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-431074 --driver=kvm2  --container-runtime=crio
E0328 00:19:17.404424 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-431074 --driver=kvm2  --container-runtime=crio: (48.976482322s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-434068 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-434068 --driver=kvm2  --container-runtime=crio: (45.317315028s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-431074
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-434068
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-434068" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-434068
helpers_test.go:175: Cleaning up "first-431074" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-431074
--- PASS: TestMinikubeProfile (96.93s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (28.05s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-342937 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0328 00:21:14.356212 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-342937 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.047196252s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.05s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-342937 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-342937 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)
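The verification here is just `minikube ssh -- mount | grep 9p` against the profile started with --mount. The sketch below reproduces that check from Go with os/exec; the binary path and profile name are placeholders copied from this run.

	// verify9p.go: a sketch of the same check the test performs, shelling out
	// to `minikube ssh -- mount` and looking for a 9p entry. Binary path and
	// profile name are placeholders copied from this run.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", "mount-start-1-342937", "ssh", "--", "mount").CombinedOutput()
		if err != nil {
			fmt.Println("ssh failed:", err)
			return
		}
		for _, line := range strings.Split(string(out), "\n") {
			if strings.Contains(line, "9p") { // same filter as `mount | grep 9p`
				fmt.Println("9p mount present:", line)
				return
			}
		}
		fmt.Println("no 9p mount found on /minikube-host")
	}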

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (25.98s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-359341 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0328 00:21:21.207158 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-359341 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.981893696s)
--- PASS: TestMountStart/serial/StartWithMountSecond (25.98s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-359341 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-359341 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-342937 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-359341 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-359341 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-359341
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-359341: (1.290059595s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (26.18s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-359341
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-359341: (25.183161012s)
--- PASS: TestMountStart/serial/RestartStopped (26.18s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-359341 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-359341 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (100.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-200224 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-200224 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m39.780225398s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (100.23s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200224 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200224 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-200224 -- rollout status deployment/busybox: (4.029329008s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200224 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200224 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200224 -- exec busybox-7fdf7869d9-2h8w6 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200224 -- exec busybox-7fdf7869d9-4mbrk -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200224 -- exec busybox-7fdf7869d9-2h8w6 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200224 -- exec busybox-7fdf7869d9-4mbrk -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200224 -- exec busybox-7fdf7869d9-2h8w6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200224 -- exec busybox-7fdf7869d9-4mbrk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.68s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200224 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200224 -- exec busybox-7fdf7869d9-2h8w6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200224 -- exec busybox-7fdf7869d9-2h8w6 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200224 -- exec busybox-7fdf7869d9-4mbrk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200224 -- exec busybox-7fdf7869d9-4mbrk -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)
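The pipeline run in each pod, `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, keeps only the fifth line of the resolver output and takes its third space-separated field; in this run that yields the host address 192.168.39.1, which the pod then pings. A minimal Go rendering of the extraction step follows; the sample nslookup output inside it is hypothetical, since the exact format depends on the resolver in the pod.

	// hostip.go: the `awk 'NR==5' | cut -d' ' -f3` step from the test above,
	// redone in Go: keep line 5 of the resolver output and take its third
	// single-space-separated field. The sample input is hypothetical.
	package main

	import (
		"fmt"
		"strings"
	)

	func fifthLineThirdField(out string) string {
		lines := strings.Split(out, "\n")
		if len(lines) < 5 {
			return ""
		}
		// NR==5 -> index 4; cut -d' ' keeps empty fields, so split on single spaces
		fields := strings.Split(lines[4], " ")
		if len(fields) < 3 {
			return ""
		}
		return fields[2] // -f3 -> third field
	}

	func main() {
		sample := "Server: 10.96.0.10\nAddress: 10.96.0.10:53\n\nName: host.minikube.internal\nAddress: 1 192.168.39.1\n"
		fmt.Println(fifthLineThirdField(sample)) // 192.168.39.1
	}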

                                                
                                    
x
+
TestMultiNode/serial/AddNode (43.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-200224 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-200224 -v 3 --alsologtostderr: (43.306722472s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.92s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-200224 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.24s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 cp testdata/cp-test.txt multinode-200224:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 ssh -n multinode-200224 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 cp multinode-200224:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3842904601/001/cp-test_multinode-200224.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 ssh -n multinode-200224 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 cp multinode-200224:/home/docker/cp-test.txt multinode-200224-m02:/home/docker/cp-test_multinode-200224_multinode-200224-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 ssh -n multinode-200224 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 ssh -n multinode-200224-m02 "sudo cat /home/docker/cp-test_multinode-200224_multinode-200224-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 cp multinode-200224:/home/docker/cp-test.txt multinode-200224-m03:/home/docker/cp-test_multinode-200224_multinode-200224-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 ssh -n multinode-200224 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 ssh -n multinode-200224-m03 "sudo cat /home/docker/cp-test_multinode-200224_multinode-200224-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 cp testdata/cp-test.txt multinode-200224-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 ssh -n multinode-200224-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 cp multinode-200224-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3842904601/001/cp-test_multinode-200224-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 ssh -n multinode-200224-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 cp multinode-200224-m02:/home/docker/cp-test.txt multinode-200224:/home/docker/cp-test_multinode-200224-m02_multinode-200224.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 ssh -n multinode-200224-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 ssh -n multinode-200224 "sudo cat /home/docker/cp-test_multinode-200224-m02_multinode-200224.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 cp multinode-200224-m02:/home/docker/cp-test.txt multinode-200224-m03:/home/docker/cp-test_multinode-200224-m02_multinode-200224-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 ssh -n multinode-200224-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 ssh -n multinode-200224-m03 "sudo cat /home/docker/cp-test_multinode-200224-m02_multinode-200224-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 cp testdata/cp-test.txt multinode-200224-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 ssh -n multinode-200224-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 cp multinode-200224-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3842904601/001/cp-test_multinode-200224-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 ssh -n multinode-200224-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 cp multinode-200224-m03:/home/docker/cp-test.txt multinode-200224:/home/docker/cp-test_multinode-200224-m03_multinode-200224.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 ssh -n multinode-200224-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 ssh -n multinode-200224 "sudo cat /home/docker/cp-test_multinode-200224-m03_multinode-200224.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 cp multinode-200224-m03:/home/docker/cp-test.txt multinode-200224-m02:/home/docker/cp-test_multinode-200224-m03_multinode-200224-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 ssh -n multinode-200224-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 ssh -n multinode-200224-m02 "sudo cat /home/docker/cp-test_multinode-200224-m03_multinode-200224-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.67s)
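CopyFile cycles through the three `minikube cp` forms shown above (local file to node, node to a local temp dir, node to node) and reads each target back with `ssh -n <node> "sudo cat ..."`. Below is a small sketch of the copy-and-verify loop for the local-to-node case; the binary path, profile and node names are placeholders from this run.

	// cptest.go: a sketch of the copy-and-verify loop above. For each node it
	// copies testdata/cp-test.txt in with `minikube cp` and reads it back over
	// `minikube ssh`. Paths and names are placeholders from this run.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) (string, error) {
		out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		return string(out), err
	}

	func main() {
		profile := "multinode-200224"
		nodes := []string{"multinode-200224", "multinode-200224-m02", "multinode-200224-m03"}
		for _, n := range nodes {
			// local -> node, as in `cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt`
			if _, err := run("-p", profile, "cp", "testdata/cp-test.txt", n+":/home/docker/cp-test.txt"); err != nil {
				fmt.Println("cp failed for", n, err)
				continue
			}
			// read it back the same way the helper does
			out, err := run("-p", profile, "ssh", "-n", n, "sudo cat /home/docker/cp-test.txt")
			fmt.Printf("%s: err=%v content=%q\n", n, err, out)
		}
	}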

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-200224 node stop m03: (1.618107575s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-200224 status: exit status 7 (456.32519ms)

                                                
                                                
-- stdout --
	multinode-200224
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-200224-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-200224-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-200224 status --alsologtostderr: exit status 7 (465.107892ms)

                                                
                                                
-- stdout --
	multinode-200224
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-200224-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-200224-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 00:24:57.757257 1102468 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:24:57.757643 1102468 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:24:57.757668 1102468 out.go:304] Setting ErrFile to fd 2...
	I0328 00:24:57.757681 1102468 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:24:57.757908 1102468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 00:24:57.758093 1102468 out.go:298] Setting JSON to false
	I0328 00:24:57.758124 1102468 mustload.go:65] Loading cluster: multinode-200224
	I0328 00:24:57.758182 1102468 notify.go:220] Checking for updates...
	I0328 00:24:57.758584 1102468 config.go:182] Loaded profile config "multinode-200224": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:24:57.758606 1102468 status.go:255] checking status of multinode-200224 ...
	I0328 00:24:57.759049 1102468 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:24:57.759104 1102468 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:24:57.781355 1102468 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35567
	I0328 00:24:57.781909 1102468 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:24:57.782562 1102468 main.go:141] libmachine: Using API Version  1
	I0328 00:24:57.782585 1102468 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:24:57.782956 1102468 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:24:57.783183 1102468 main.go:141] libmachine: (multinode-200224) Calling .GetState
	I0328 00:24:57.784871 1102468 status.go:330] multinode-200224 host status = "Running" (err=<nil>)
	I0328 00:24:57.784891 1102468 host.go:66] Checking if "multinode-200224" exists ...
	I0328 00:24:57.785208 1102468 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:24:57.785254 1102468 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:24:57.801672 1102468 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37715
	I0328 00:24:57.802093 1102468 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:24:57.802615 1102468 main.go:141] libmachine: Using API Version  1
	I0328 00:24:57.802639 1102468 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:24:57.803006 1102468 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:24:57.803243 1102468 main.go:141] libmachine: (multinode-200224) Calling .GetIP
	I0328 00:24:57.806304 1102468 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:24:57.806783 1102468 main.go:141] libmachine: (multinode-200224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:1a", ip: ""} in network mk-multinode-200224: {Iface:virbr1 ExpiryTime:2024-03-28 01:22:32 +0000 UTC Type:0 Mac:52:54:00:36:d0:1a Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-200224 Clientid:01:52:54:00:36:d0:1a}
	I0328 00:24:57.806819 1102468 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined IP address 192.168.39.88 and MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:24:57.806941 1102468 host.go:66] Checking if "multinode-200224" exists ...
	I0328 00:24:57.807236 1102468 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:24:57.807277 1102468 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:24:57.824445 1102468 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32837
	I0328 00:24:57.824951 1102468 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:24:57.825469 1102468 main.go:141] libmachine: Using API Version  1
	I0328 00:24:57.825493 1102468 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:24:57.825811 1102468 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:24:57.825992 1102468 main.go:141] libmachine: (multinode-200224) Calling .DriverName
	I0328 00:24:57.826289 1102468 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:24:57.826321 1102468 main.go:141] libmachine: (multinode-200224) Calling .GetSSHHostname
	I0328 00:24:57.829342 1102468 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:24:57.829784 1102468 main.go:141] libmachine: (multinode-200224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d0:1a", ip: ""} in network mk-multinode-200224: {Iface:virbr1 ExpiryTime:2024-03-28 01:22:32 +0000 UTC Type:0 Mac:52:54:00:36:d0:1a Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-200224 Clientid:01:52:54:00:36:d0:1a}
	I0328 00:24:57.829808 1102468 main.go:141] libmachine: (multinode-200224) DBG | domain multinode-200224 has defined IP address 192.168.39.88 and MAC address 52:54:00:36:d0:1a in network mk-multinode-200224
	I0328 00:24:57.829942 1102468 main.go:141] libmachine: (multinode-200224) Calling .GetSSHPort
	I0328 00:24:57.830119 1102468 main.go:141] libmachine: (multinode-200224) Calling .GetSSHKeyPath
	I0328 00:24:57.830277 1102468 main.go:141] libmachine: (multinode-200224) Calling .GetSSHUsername
	I0328 00:24:57.830401 1102468 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/multinode-200224/id_rsa Username:docker}
	I0328 00:24:57.910611 1102468 ssh_runner.go:195] Run: systemctl --version
	I0328 00:24:57.920239 1102468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:24:57.942015 1102468 kubeconfig.go:125] found "multinode-200224" server: "https://192.168.39.88:8443"
	I0328 00:24:57.942044 1102468 api_server.go:166] Checking apiserver status ...
	I0328 00:24:57.942086 1102468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:24:57.957001 1102468 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1188/cgroup
	W0328 00:24:57.966507 1102468 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1188/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0328 00:24:57.966563 1102468 ssh_runner.go:195] Run: ls
	I0328 00:24:57.971383 1102468 api_server.go:253] Checking apiserver healthz at https://192.168.39.88:8443/healthz ...
	I0328 00:24:57.975784 1102468 api_server.go:279] https://192.168.39.88:8443/healthz returned 200:
	ok
	I0328 00:24:57.975821 1102468 status.go:422] multinode-200224 apiserver status = Running (err=<nil>)
	I0328 00:24:57.975838 1102468 status.go:257] multinode-200224 status: &{Name:multinode-200224 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:24:57.975866 1102468 status.go:255] checking status of multinode-200224-m02 ...
	I0328 00:24:57.976310 1102468 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:24:57.976363 1102468 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:24:57.993254 1102468 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45859
	I0328 00:24:57.993732 1102468 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:24:57.994293 1102468 main.go:141] libmachine: Using API Version  1
	I0328 00:24:57.994324 1102468 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:24:57.994731 1102468 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:24:57.994982 1102468 main.go:141] libmachine: (multinode-200224-m02) Calling .GetState
	I0328 00:24:57.996603 1102468 status.go:330] multinode-200224-m02 host status = "Running" (err=<nil>)
	I0328 00:24:57.996621 1102468 host.go:66] Checking if "multinode-200224-m02" exists ...
	I0328 00:24:57.996933 1102468 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:24:57.996971 1102468 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:24:58.012612 1102468 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37021
	I0328 00:24:58.013077 1102468 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:24:58.013559 1102468 main.go:141] libmachine: Using API Version  1
	I0328 00:24:58.013599 1102468 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:24:58.014063 1102468 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:24:58.014306 1102468 main.go:141] libmachine: (multinode-200224-m02) Calling .GetIP
	I0328 00:24:58.017446 1102468 main.go:141] libmachine: (multinode-200224-m02) DBG | domain multinode-200224-m02 has defined MAC address 52:54:00:80:53:14 in network mk-multinode-200224
	I0328 00:24:58.017929 1102468 main.go:141] libmachine: (multinode-200224-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:53:14", ip: ""} in network mk-multinode-200224: {Iface:virbr1 ExpiryTime:2024-03-28 01:23:30 +0000 UTC Type:0 Mac:52:54:00:80:53:14 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-200224-m02 Clientid:01:52:54:00:80:53:14}
	I0328 00:24:58.017964 1102468 main.go:141] libmachine: (multinode-200224-m02) DBG | domain multinode-200224-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:80:53:14 in network mk-multinode-200224
	I0328 00:24:58.018093 1102468 host.go:66] Checking if "multinode-200224-m02" exists ...
	I0328 00:24:58.018434 1102468 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:24:58.018486 1102468 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:24:58.035479 1102468 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32895
	I0328 00:24:58.036043 1102468 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:24:58.036657 1102468 main.go:141] libmachine: Using API Version  1
	I0328 00:24:58.036680 1102468 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:24:58.037017 1102468 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:24:58.037283 1102468 main.go:141] libmachine: (multinode-200224-m02) Calling .DriverName
	I0328 00:24:58.037551 1102468 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 00:24:58.037578 1102468 main.go:141] libmachine: (multinode-200224-m02) Calling .GetSSHHostname
	I0328 00:24:58.041025 1102468 main.go:141] libmachine: (multinode-200224-m02) DBG | domain multinode-200224-m02 has defined MAC address 52:54:00:80:53:14 in network mk-multinode-200224
	I0328 00:24:58.041496 1102468 main.go:141] libmachine: (multinode-200224-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:53:14", ip: ""} in network mk-multinode-200224: {Iface:virbr1 ExpiryTime:2024-03-28 01:23:30 +0000 UTC Type:0 Mac:52:54:00:80:53:14 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-200224-m02 Clientid:01:52:54:00:80:53:14}
	I0328 00:24:58.041529 1102468 main.go:141] libmachine: (multinode-200224-m02) DBG | domain multinode-200224-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:80:53:14 in network mk-multinode-200224
	I0328 00:24:58.041699 1102468 main.go:141] libmachine: (multinode-200224-m02) Calling .GetSSHPort
	I0328 00:24:58.041885 1102468 main.go:141] libmachine: (multinode-200224-m02) Calling .GetSSHKeyPath
	I0328 00:24:58.042043 1102468 main.go:141] libmachine: (multinode-200224-m02) Calling .GetSSHUsername
	I0328 00:24:58.042204 1102468 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18485-1069254/.minikube/machines/multinode-200224-m02/id_rsa Username:docker}
	I0328 00:24:58.122045 1102468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:24:58.138339 1102468 status.go:257] multinode-200224-m02 status: &{Name:multinode-200224-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0328 00:24:58.138375 1102468 status.go:255] checking status of multinode-200224-m03 ...
	I0328 00:24:58.138679 1102468 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0328 00:24:58.138716 1102468 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0328 00:24:58.156101 1102468 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38317
	I0328 00:24:58.156520 1102468 main.go:141] libmachine: () Calling .GetVersion
	I0328 00:24:58.156921 1102468 main.go:141] libmachine: Using API Version  1
	I0328 00:24:58.156941 1102468 main.go:141] libmachine: () Calling .SetConfigRaw
	I0328 00:24:58.157329 1102468 main.go:141] libmachine: () Calling .GetMachineName
	I0328 00:24:58.157542 1102468 main.go:141] libmachine: (multinode-200224-m03) Calling .GetState
	I0328 00:24:58.159161 1102468 status.go:330] multinode-200224-m03 host status = "Stopped" (err=<nil>)
	I0328 00:24:58.159179 1102468 status.go:343] host is not running, skipping remaining checks
	I0328 00:24:58.159187 1102468 status.go:257] multinode-200224-m03 status: &{Name:multinode-200224-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.54s)
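Both `status` invocations above exit 7 once m03 is stopped and print one block per node: the node name, then key: value lines (type, host, kubelet, plus apiserver and kubeconfig for the control plane), with blank lines between nodes. A sketch that parses this plain-text layout follows; it assumes only the fields visible in this output.

	// statusparse.go: parses the plain-text `minikube status` layout shown
	// above: blank-line-separated blocks, each starting with the node name
	// followed by "key: value" lines. Only fields visible in this run are assumed.
	package main

	import (
		"fmt"
		"strings"
	)

	func parseStatus(out string) map[string]map[string]string {
		nodes := map[string]map[string]string{}
		for _, block := range strings.Split(strings.TrimSpace(out), "\n\n") {
			lines := strings.Split(strings.TrimSpace(block), "\n")
			if len(lines) == 0 {
				continue
			}
			name := strings.TrimSpace(lines[0])
			fields := map[string]string{}
			for _, l := range lines[1:] {
				if k, v, ok := strings.Cut(strings.TrimSpace(l), ": "); ok {
					fields[k] = v
				}
			}
			nodes[name] = fields
		}
		return nodes
	}

	func main() {
		sample := "multinode-200224\ntype: Control Plane\nhost: Running\nkubelet: Running\n\nmultinode-200224-m03\ntype: Worker\nhost: Stopped\nkubelet: Stopped\n"
		for name, fields := range parseStatus(sample) {
			fmt.Println(name, "->", fields["host"])
		}
	}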

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (30.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-200224 node start m03 -v=7 --alsologtostderr: (29.532521281s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (30.21s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-200224 node delete m03: (1.973539705s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.54s)
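The readiness check here (and in RestartMultiNode below) hands kubectl a go-template that walks .items and prints .status for every Ready condition. kubectl's -o go-template output is based on Go's text/template, so the same template can be exercised locally; the sketch below runs the exact template from the log against a hypothetical two-node list, printing one status per node.

	// readytmpl.go: runs the go-template from the test above with Go's
	// text/template against a hypothetical two-node list, printing one
	// Ready status per node.
	package main

	import (
		"encoding/json"
		"os"
		"text/template"
	)

	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	const sample = `{
	  "items": [
	    {"status": {"conditions": [{"type": "Ready", "status": "True"}]}},
	    {"status": {"conditions": [{"type": "Ready", "status": "True"}]}}
	  ]
	}`

	func main() {
		var nodes interface{}
		if err := json.Unmarshal([]byte(sample), &nodes); err != nil {
			panic(err)
		}
		t := template.Must(template.New("ready").Parse(tmpl))
		if err := t.Execute(os.Stdout, nodes); err != nil {
			panic(err)
		}
		// prints " True" on its own line for each node
	}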

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (171.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-200224 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-200224 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m51.374954016s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200224 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
E0328 00:35:57.404750 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (171.96s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (44.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-200224
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-200224-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-200224-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (80.220005ms)

                                                
                                                
-- stdout --
	* [multinode-200224-m02] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-200224-m02' is duplicated with machine name 'multinode-200224-m02' in profile 'multinode-200224'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-200224-m03 --driver=kvm2  --container-runtime=crio
E0328 00:36:14.356464 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
E0328 00:36:21.208279 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-200224-m03 --driver=kvm2  --container-runtime=crio: (43.07790954s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-200224
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-200224: exit status 80 (232.556926ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-200224 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-200224-m03 already exists in multinode-200224-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-200224-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-200224-m03: (1.004343585s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.46s)

                                                
                                    
x
+
TestScheduledStopUnix (117.03s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-406916 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-406916 --memory=2048 --driver=kvm2  --container-runtime=crio: (45.222180195s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-406916 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-406916 -n scheduled-stop-406916
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-406916 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-406916 --cancel-scheduled
E0328 00:41:14.356264 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
E0328 00:41:21.208345 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-406916 -n scheduled-stop-406916
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-406916
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-406916 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-406916
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-406916: exit status 7 (85.261385ms)

                                                
                                                
-- stdout --
	scheduled-stop-406916
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-406916 -n scheduled-stop-406916
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-406916 -n scheduled-stop-406916: exit status 7 (77.355222ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-406916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-406916
--- PASS: TestScheduledStopUnix (117.03s)
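The flow above arms a stop with `--schedule 5m` and `--schedule 15s`, cancels one with `--cancel-scheduled`, and observes the outcome through `status --format={{.TimeToStop}}` and `--format={{.Host}}`. A sketch that polls the Host field until the profile reports Stopped is below; the binary path and profile name are placeholders from this run, and the non-zero exit (status 7 above) from a stopped profile is deliberately ignored.

	// waitstop.go: polls `minikube status --format={{.Host}}` until the profile
	// reports Stopped, mirroring the checks in the test above. Binary path and
	// profile name are placeholders; `status` exits non-zero (7 in the log)
	// once the host is stopped, so the exit code is ignored.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		for i := 0; i < 60; i++ {
			out, _ := exec.Command("out/minikube-linux-amd64", "status",
				"--format={{.Host}}", "-p", "scheduled-stop-406916").CombinedOutput()
			state := strings.TrimSpace(string(out))
			fmt.Println("host:", state)
			if state == "Stopped" {
				return
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out waiting for scheduled stop")
	}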

                                                
                                    
x
+
TestRunningBinaryUpgrade (220.89s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3325101398 start -p running-upgrade-642721 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3325101398 start -p running-upgrade-642721 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m5.455431576s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-642721 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-642721 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m31.057470738s)
helpers_test.go:175: Cleaning up "running-upgrade-642721" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-642721
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-642721: (1.232305189s)
--- PASS: TestRunningBinaryUpgrade (220.89s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-636163 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-636163 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (101.531461ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-636163] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (96.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-636163 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-636163 --driver=kvm2  --container-runtime=crio: (1m36.206441836s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-636163 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (96.47s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (66.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-636163 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-636163 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m4.251842262s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-636163 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-636163 status -o json: exit status 2 (271.499358ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-636163","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-636163
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-636163: (1.575170277s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (66.10s)
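The `status -o json` line above is a single object with Name, Host, Kubelet, APIServer, Kubeconfig and Worker; after the restart with --no-kubernetes the kubelet and apiserver report Stopped while the host keeps running. A sketch decoding that shape follows; the field list is copied from the sample line and may not be exhaustive.

	// statusjson.go: decodes the `minikube status -o json` object shown above.
	// The field list is copied from the sample output and may not be exhaustive.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type profileStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		sample := `{"Name":"NoKubernetes-636163","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
		var st profileStatus
		if err := json.Unmarshal([]byte(sample), &st); err != nil {
			panic(err)
		}
		// With --no-kubernetes the host keeps running while kubelet/apiserver stay stopped.
		fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", st.Name, st.Host, st.Kubelet, st.APIServer)
	}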

                                                
                                    
x
+
TestNoKubernetes/serial/Start (51.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-636163 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-636163 --no-kubernetes --driver=kvm2  --container-runtime=crio: (51.52248756s)
--- PASS: TestNoKubernetes/serial/Start (51.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-443419 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-443419 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (127.441945ms)

                                                
                                                
-- stdout --
	* [false-443419] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 00:45:55.498663 1111757 out.go:291] Setting OutFile to fd 1 ...
	I0328 00:45:55.498771 1111757 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:45:55.498779 1111757 out.go:304] Setting ErrFile to fd 2...
	I0328 00:45:55.498783 1111757 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:45:55.498988 1111757 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18485-1069254/.minikube/bin
	I0328 00:45:55.499583 1111757 out.go:298] Setting JSON to false
	I0328 00:45:55.500618 1111757 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":30453,"bootTime":1711556303,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0328 00:45:55.500688 1111757 start.go:139] virtualization: kvm guest
	I0328 00:45:55.503240 1111757 out.go:177] * [false-443419] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0328 00:45:55.504656 1111757 notify.go:220] Checking for updates...
	I0328 00:45:55.504666 1111757 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 00:45:55.505922 1111757 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 00:45:55.507208 1111757 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18485-1069254/kubeconfig
	I0328 00:45:55.508530 1111757 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18485-1069254/.minikube
	I0328 00:45:55.509802 1111757 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0328 00:45:55.511052 1111757 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 00:45:55.512902 1111757 config.go:182] Loaded profile config "NoKubernetes-636163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0328 00:45:55.513039 1111757 config.go:182] Loaded profile config "cert-expiration-927384": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 00:45:55.513195 1111757 config.go:182] Loaded profile config "running-upgrade-642721": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0328 00:45:55.513334 1111757 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 00:45:55.558299 1111757 out.go:177] * Using the kvm2 driver based on user configuration
	I0328 00:45:55.559733 1111757 start.go:297] selected driver: kvm2
	I0328 00:45:55.559747 1111757 start.go:901] validating driver "kvm2" against <nil>
	I0328 00:45:55.559760 1111757 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 00:45:55.561777 1111757 out.go:177] 
	W0328 00:45:55.563092 1111757 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0328 00:45:55.564347 1111757 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-443419 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-443419

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-443419

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-443419

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-443419

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-443419

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-443419

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-443419

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-443419

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-443419

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-443419

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-443419

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-443419" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-443419" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 28 Mar 2024 00:45:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.72.11:8443
  name: cert-expiration-927384
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 28 Mar 2024 00:45:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.39.231:8443
  name: running-upgrade-642721
contexts:
- context:
    cluster: cert-expiration-927384
    extensions:
    - extension:
        last-update: Thu, 28 Mar 2024 00:45:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: context_info
    namespace: default
    user: cert-expiration-927384
  name: cert-expiration-927384
- context:
    cluster: running-upgrade-642721
    extensions:
    - extension:
        last-update: Thu, 28 Mar 2024 00:45:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: context_info
    namespace: default
    user: running-upgrade-642721
  name: running-upgrade-642721
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-927384
  user:
    client-certificate: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/client.crt
    client-key: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/client.key
- name: running-upgrade-642721
  user:
    client-certificate: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/running-upgrade-642721/client.crt
    client-key: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/running-upgrade-642721/client.key
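(Note: in the kubeconfig dump above, current-context is empty and there is no false-443419 entry, which is why the kubectl probes in this debug log report a missing context. Purely for illustration — not part of the test — the contexts that do exist could be listed and selected with standard kubectl commands:)

    kubectl config get-contexts
    kubectl config use-context cert-expiration-927384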

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-443419

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443419"

                                                
                                                
----------------------- debugLogs end: false-443419 [took: 3.688550516s] --------------------------------
helpers_test.go:175: Cleaning up "false-443419" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-443419
--- PASS: TestNetworkPlugins/group/false (3.97s)
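(Note: the MK_USAGE exit above is the expected result — with --container-runtime=crio, minikube rejects --cni=false because cri-o requires a CNI plugin. A minimal sketch of a start invocation the suite does accept, with the flags mirrored from the bridge Start test later in this report and a placeholder profile name:)

    out/minikube-linux-amd64 start -p <profile> --memory=3072 --cni=bridge --driver=kvm2 --container-runtime=crio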

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-636163 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-636163 "sudo systemctl is-active --quiet service kubelet": exit status 1 (243.242571ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (5.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (4.507134065s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (5.16s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-636163
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-636163: (2.339604743s)
--- PASS: TestNoKubernetes/serial/Stop (2.34s)

                                                
                                    
TestPause/serial/Start (64.62s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-040046 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-040046 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m4.621156975s)
--- PASS: TestPause/serial/Start (64.62s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (53.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-636163 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-636163 --driver=kvm2  --container-runtime=crio: (53.54480038s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (53.54s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-636163 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-636163 "sudo systemctl is-active --quiet service kubelet": exit status 1 (220.635439ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.3s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.30s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (126.77s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.878799719 start -p stopped-upgrade-317492 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.878799719 start -p stopped-upgrade-317492 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m13.728201979s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.878799719 -p stopped-upgrade-317492 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.878799719 -p stopped-upgrade-317492 stop: (2.139872112s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-317492 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-317492 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (50.899417781s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (126.77s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (61.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-443419 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-443419 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m1.668449488s)
--- PASS: TestNetworkPlugins/group/auto/Start (61.67s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (88.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-443419 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-443419 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m28.240475437s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (88.24s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-317492
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (127.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-443419 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-443419 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (2m7.974275427s)
--- PASS: TestNetworkPlugins/group/calico/Start (127.97s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-443419 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (13.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-443419 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6g4ff" [b699af30-3c2b-4e71-ba5e-27422aa9459c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6g4ff" [b699af30-3c2b-4e71-ba5e-27422aa9459c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.004865867s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.28s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-443419 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-443419 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-443419 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (83.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-443419 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-443419 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m23.244094914s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (83.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-szh8f" [7fab24fb-74f1-4198-ac10-334560d787ea] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.008205844s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-443419 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (13.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-443419 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-n862p" [5010d879-dc7b-4e67-a78b-e6c934f64f62] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-n862p" [5010d879-dc7b-4e67-a78b-e6c934f64f62] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.004922613s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-443419 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-443419 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-443419 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (63.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-443419 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0328 00:51:14.356580 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-443419 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m3.348693112s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (63.35s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-dfvqk" [3d791a2c-b908-4997-8284-1c411eb5b64e] Running
E0328 00:51:21.207349 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004881294s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-443419 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-443419 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-x8kcv" [6677832c-0e89-4804-b162-4d9149fa14a1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-x8kcv" [6677832c-0e89-4804-b162-4d9149fa14a1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005666035s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.32s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-443419 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-443419 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-443419 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-443419 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-443419 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bjllx" [5aa722cd-4dfa-4ec4-9933-76bed69824f9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-bjllx" [5aa722cd-4dfa-4ec4-9933-76bed69824f9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005173171s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.39s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (89.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-443419 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-443419 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m29.513334484s)
--- PASS: TestNetworkPlugins/group/flannel/Start (89.51s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-443419 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-443419 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-443419 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-443419 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-443419 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9vh4q" [e776950b-6700-413c-9854-dc26eb944fb8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9vh4q" [e776950b-6700-413c-9854-dc26eb944fb8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004782275s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.33s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (77.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-443419 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-443419 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m17.844412242s)
--- PASS: TestNetworkPlugins/group/bridge/Start (77.84s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-443419 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-443419 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-443419 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (130.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-248059 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-248059 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0: (2m10.137076686s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (130.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-m9qmm" [be0c2783-b899-4213-97a2-a27e1737c409] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005202256s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-443419 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-443419 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7m5rs" [bbb74e40-5d60-4927-bb4f-fdcea5ccd2f3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7m5rs" [bbb74e40-5d60-4927-bb4f-fdcea5ccd2f3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004626162s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-443419 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-443419 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zj8wp" [8f073732-b1b8-422c-803d-0a5bccdbd253] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zj8wp" [8f073732-b1b8-422c-803d-0a5bccdbd253] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.007963908s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-443419 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-443419 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-443419 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-443419 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-443419 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-443419 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (65.65s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-808809 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-808809 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (1m5.646746903s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (65.65s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (82.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-013642 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0
E0328 00:54:52.855425 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/auto-443419/client.crt: no such file or directory
E0328 00:54:52.860720 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/auto-443419/client.crt: no such file or directory
E0328 00:54:52.871214 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/auto-443419/client.crt: no such file or directory
E0328 00:54:52.891489 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/auto-443419/client.crt: no such file or directory
E0328 00:54:52.932031 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/auto-443419/client.crt: no such file or directory
E0328 00:54:53.012379 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/auto-443419/client.crt: no such file or directory
E0328 00:54:53.173456 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/auto-443419/client.crt: no such file or directory
E0328 00:54:53.494401 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/auto-443419/client.crt: no such file or directory
E0328 00:54:54.135013 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/auto-443419/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-013642 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0: (1m22.400887879s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (82.40s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-248059 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c73023f4-06ff-4f3e-9258-3eef384f0247] Pending
helpers_test.go:344: "busybox" [c73023f4-06ff-4f3e-9258-3eef384f0247] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0328 00:54:55.416148 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/auto-443419/client.crt: no such file or directory
E0328 00:54:57.977137 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/auto-443419/client.crt: no such file or directory
helpers_test.go:344: "busybox" [c73023f4-06ff-4f3e-9258-3eef384f0247] Running
E0328 00:55:03.097791 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/auto-443419/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004650581s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-248059 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.32s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-248059 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-248059 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-808809 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1316dfe6-2779-4ede-98ee-15d916cbbf28] Pending
helpers_test.go:344: "busybox" [1316dfe6-2779-4ede-98ee-15d916cbbf28] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1316dfe6-2779-4ede-98ee-15d916cbbf28] Running
E0328 00:55:13.338334 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/auto-443419/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004320851s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-808809 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-808809 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-808809 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-013642 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0328 00:55:28.662861 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kindnet-443419/client.crt: no such file or directory
E0328 00:55:28.668146 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kindnet-443419/client.crt: no such file or directory
E0328 00:55:28.678455 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kindnet-443419/client.crt: no such file or directory
E0328 00:55:28.699399 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kindnet-443419/client.crt: no such file or directory
E0328 00:55:28.739760 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kindnet-443419/client.crt: no such file or directory
E0328 00:55:28.820569 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kindnet-443419/client.crt: no such file or directory
E0328 00:55:28.981655 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kindnet-443419/client.crt: no such file or directory
E0328 00:55:29.302193 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kindnet-443419/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-013642 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.138559196s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-013642 --alsologtostderr -v=3
E0328 00:55:29.942727 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kindnet-443419/client.crt: no such file or directory
E0328 00:55:31.223810 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kindnet-443419/client.crt: no such file or directory
E0328 00:55:33.784447 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kindnet-443419/client.crt: no such file or directory
E0328 00:55:33.818960 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/auto-443419/client.crt: no such file or directory
E0328 00:55:38.904961 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kindnet-443419/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-013642 --alsologtostderr -v=3: (10.38264426s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.38s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-013642 -n newest-cni-013642
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-013642 -n newest-cni-013642: exit status 7 (86.251037ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-013642 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (37.97s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-013642 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0
E0328 00:55:49.146117 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kindnet-443419/client.crt: no such file or directory
E0328 00:56:09.626423 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kindnet-443419/client.crt: no such file or directory
E0328 00:56:14.356012 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
E0328 00:56:14.779301 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/auto-443419/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-013642 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0: (37.680945465s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-013642 -n newest-cni-013642
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.97s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-013642 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.56s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-013642 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-013642 -n newest-cni-013642
E0328 00:56:19.152958 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/calico-443419/client.crt: no such file or directory
E0328 00:56:19.158271 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/calico-443419/client.crt: no such file or directory
E0328 00:56:19.168582 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/calico-443419/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-013642 -n newest-cni-013642: exit status 2 (260.143997ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-013642 -n newest-cni-013642
E0328 00:56:19.189676 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/calico-443419/client.crt: no such file or directory
E0328 00:56:19.229991 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/calico-443419/client.crt: no such file or directory
E0328 00:56:19.310346 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/calico-443419/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-013642 -n newest-cni-013642: exit status 2 (259.750427ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-013642 --alsologtostderr -v=1
E0328 00:56:19.470661 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/calico-443419/client.crt: no such file or directory
E0328 00:56:19.790860 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/calico-443419/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-013642 -n newest-cni-013642
E0328 00:56:20.431530 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/calico-443419/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-013642 -n newest-cni-013642
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.56s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-283961 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
E0328 00:56:24.272573 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/calico-443419/client.crt: no such file or directory
E0328 00:56:29.393561 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/calico-443419/client.crt: no such file or directory
E0328 00:56:39.634550 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/calico-443419/client.crt: no such file or directory
E0328 00:56:47.936354 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/custom-flannel-443419/client.crt: no such file or directory
E0328 00:56:47.941636 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/custom-flannel-443419/client.crt: no such file or directory
E0328 00:56:47.951957 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/custom-flannel-443419/client.crt: no such file or directory
E0328 00:56:47.972275 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/custom-flannel-443419/client.crt: no such file or directory
E0328 00:56:48.012692 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/custom-flannel-443419/client.crt: no such file or directory
E0328 00:56:48.093808 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/custom-flannel-443419/client.crt: no such file or directory
E0328 00:56:48.254571 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/custom-flannel-443419/client.crt: no such file or directory
E0328 00:56:48.574740 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/custom-flannel-443419/client.crt: no such file or directory
E0328 00:56:49.215166 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/custom-flannel-443419/client.crt: no such file or directory
E0328 00:56:50.495991 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/custom-flannel-443419/client.crt: no such file or directory
E0328 00:56:50.587309 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kindnet-443419/client.crt: no such file or directory
E0328 00:56:53.056876 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/custom-flannel-443419/client.crt: no such file or directory
E0328 00:56:58.177820 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/custom-flannel-443419/client.crt: no such file or directory
E0328 00:57:00.115641 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/calico-443419/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-283961 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (1m0.895456484s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.90s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-283961 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a4442fae-164d-44d3-9ff3-5701c17483ba] Pending
helpers_test.go:344: "busybox" [a4442fae-164d-44d3-9ff3-5701c17483ba] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a4442fae-164d-44d3-9ff3-5701c17483ba] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003917754s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-283961 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-283961 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-283961 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (689.85s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-248059 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0
E0328 00:57:36.699905 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/auto-443419/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-248059 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0: (11m29.532752044s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-248059 -n no-preload-248059
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (689.85s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (609.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-808809 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
E0328 00:57:51.783241 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/enable-default-cni-443419/client.crt: no such file or directory
E0328 00:58:09.860023 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/custom-flannel-443419/client.crt: no such file or directory
E0328 00:58:12.508523 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kindnet-443419/client.crt: no such file or directory
E0328 00:58:24.182981 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/client.crt: no such file or directory
E0328 00:58:24.188286 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/client.crt: no such file or directory
E0328 00:58:24.198627 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/client.crt: no such file or directory
E0328 00:58:24.218997 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/client.crt: no such file or directory
E0328 00:58:24.259354 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/client.crt: no such file or directory
E0328 00:58:24.339949 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/client.crt: no such file or directory
E0328 00:58:24.500365 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/client.crt: no such file or directory
E0328 00:58:24.821035 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/client.crt: no such file or directory
E0328 00:58:25.461913 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/client.crt: no such file or directory
E0328 00:58:26.743041 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/client.crt: no such file or directory
E0328 00:58:29.303280 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/client.crt: no such file or directory
E0328 00:58:32.744340 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/enable-default-cni-443419/client.crt: no such file or directory
E0328 00:58:34.423741 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/client.crt: no such file or directory
E0328 00:58:36.487854 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/bridge-443419/client.crt: no such file or directory
E0328 00:58:36.493170 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/bridge-443419/client.crt: no such file or directory
E0328 00:58:36.503424 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/bridge-443419/client.crt: no such file or directory
E0328 00:58:36.523783 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/bridge-443419/client.crt: no such file or directory
E0328 00:58:36.564104 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/bridge-443419/client.crt: no such file or directory
E0328 00:58:36.644551 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/bridge-443419/client.crt: no such file or directory
E0328 00:58:36.805003 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/bridge-443419/client.crt: no such file or directory
E0328 00:58:37.125401 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/bridge-443419/client.crt: no such file or directory
E0328 00:58:37.765801 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/bridge-443419/client.crt: no such file or directory
E0328 00:58:39.046382 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/bridge-443419/client.crt: no such file or directory
E0328 00:58:41.606669 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/bridge-443419/client.crt: no such file or directory
E0328 00:58:44.664809 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/client.crt: no such file or directory
E0328 00:58:46.727602 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/bridge-443419/client.crt: no such file or directory
E0328 00:58:56.968366 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/bridge-443419/client.crt: no such file or directory
E0328 00:59:02.997090 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/calico-443419/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-808809 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (10m9.606180781s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-808809 -n embed-certs-808809
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (609.89s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (1.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-986088 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-986088 --alsologtostderr -v=3: (1.42645818s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.43s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-986088 -n old-k8s-version-986088
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-986088 -n old-k8s-version-986088: exit status 7 (88.217877ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-986088 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0328 00:59:05.145590 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/client.crt: no such file or directory
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (516.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-283961 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
E0328 01:00:20.541461 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/auto-443419/client.crt: no such file or directory
E0328 01:00:28.662700 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kindnet-443419/client.crt: no such file or directory
E0328 01:00:56.349825 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kindnet-443419/client.crt: no such file or directory
E0328 01:01:08.028623 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/client.crt: no such file or directory
E0328 01:01:14.355927 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
E0328 01:01:19.153379 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/calico-443419/client.crt: no such file or directory
E0328 01:01:20.329766 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/bridge-443419/client.crt: no such file or directory
E0328 01:01:21.207064 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
E0328 01:01:46.837435 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/calico-443419/client.crt: no such file or directory
E0328 01:01:47.935687 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/custom-flannel-443419/client.crt: no such file or directory
E0328 01:02:10.821069 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/enable-default-cni-443419/client.crt: no such file or directory
E0328 01:02:15.620769 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/custom-flannel-443419/client.crt: no such file or directory
E0328 01:02:38.506167 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/enable-default-cni-443419/client.crt: no such file or directory
E0328 01:02:44.257103 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
E0328 01:03:24.182465 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/client.crt: no such file or directory
E0328 01:03:36.488153 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/bridge-443419/client.crt: no such file or directory
E0328 01:03:51.869644 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/flannel-443419/client.crt: no such file or directory
E0328 01:04:04.170272 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/bridge-443419/client.crt: no such file or directory
E0328 01:04:52.855742 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/auto-443419/client.crt: no such file or directory
E0328 01:05:28.663483 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/kindnet-443419/client.crt: no such file or directory
E0328 01:06:14.356127 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/addons-910864/client.crt: no such file or directory
E0328 01:06:19.153966 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/calico-443419/client.crt: no such file or directory
E0328 01:06:21.207700 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/functional-800754/client.crt: no such file or directory
E0328 01:06:47.935697 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/custom-flannel-443419/client.crt: no such file or directory
E0328 01:07:10.820849 1076522 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/enable-default-cni-443419/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-283961 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (8m35.804369103s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-283961 -n default-k8s-diff-port-283961
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (516.11s)

                                                
                                    

Test skip (39/319)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.29.3/cached-images 0
15 TestDownloadOnly/v1.29.3/binaries 0
16 TestDownloadOnly/v1.29.3/kubectl 0
23 TestDownloadOnly/v1.30.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.30.0-beta.0/binaries 0
25 TestDownloadOnly/v1.30.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
125 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
128 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
184 TestImageBuild 0
211 TestKicCustomNetwork 0
212 TestKicExistingNetwork 0
213 TestKicCustomSubnet 0
214 TestKicStaticIP 0
246 TestChangeNoneUser 0
249 TestScheduledStopWindows 0
251 TestSkaffold 0
253 TestInsufficientStorage 0
257 TestMissingContainerUpgrade 0
265 TestNetworkPlugins/group/kubenet 3.48
275 TestNetworkPlugins/group/cilium 4.53
282 TestStartStop/group/disable-driver-mounts 0.16
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
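
All eight TunnelCmd sub-tests above skip for the same reason: the tunnel needs to modify the host routing table, and the Jenkins agent cannot run 'route' without a password prompt. A minimal sketch of that kind of guard, using a non-interactive 'sudo -n' probe; this illustrates the pattern only and is not the actual functional_test_tunnel_test.go code:

package tunnel_sketch

import (
	"os/exec"
	"testing"
)

// skipIfRouteNeedsPassword skips the test when the current user cannot run
// privileged commands non-interactively. 'sudo -n true' exits non-zero as
// soon as a password would be required, so it is a cheap proxy for "can we
// touch the routing table without prompting".
func skipIfRouteNeedsPassword(t *testing.T) {
	t.Helper()
	if err := exec.Command("sudo", "-n", "true").Run(); err != nil {
		t.Skipf("password required to execute 'route', skipping testTunnel: %v", err)
	}
}

func TestTunnelSketch(t *testing.T) {
	skipIfRouteNeedsPassword(t)
	// tunnel setup and route assertions would follow here
}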

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-443419 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-443419

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-443419

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-443419

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-443419

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-443419

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-443419

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-443419

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-443419

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-443419

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-443419

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-443419

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-443419" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-443419" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 28 Mar 2024 00:45:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.72.11:8443
  name: cert-expiration-927384
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 28 Mar 2024 00:45:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.39.231:8443
  name: running-upgrade-642721
contexts:
- context:
    cluster: cert-expiration-927384
    extensions:
    - extension:
        last-update: Thu, 28 Mar 2024 00:45:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: context_info
    namespace: default
    user: cert-expiration-927384
  name: cert-expiration-927384
- context:
    cluster: running-upgrade-642721
    extensions:
    - extension:
        last-update: Thu, 28 Mar 2024 00:45:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: context_info
    namespace: default
    user: running-upgrade-642721
  name: running-upgrade-642721
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-927384
  user:
    client-certificate: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/client.crt
    client-key: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/client.key
- name: running-upgrade-642721
  user:
    client-certificate: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/running-upgrade-642721/client.crt
    client-key: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/running-upgrade-642721/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-443419

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443419"

                                                
                                                
----------------------- debugLogs end: kubenet-443419 [took: 3.320456953s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-443419" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-443419
--- SKIP: TestNetworkPlugins/group/kubenet (3.48s)
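
Every probe in the kubenet debugLogs dump above fails with "context was not found" or "Profile ... not found" because the test was skipped before a cluster was ever created, so no kubenet-443419 kubeconfig context or minikube profile exists. A minimal sketch of how a log collector could check for the context up front instead of letting every command fail, assuming kubectl is on PATH; this is not the actual debugLogs helper:

package debuglogs_sketch

import (
	"os/exec"
	"strings"
)

// contextExists reports whether kubectl's kubeconfig contains a context with
// the given name, by listing context names and comparing them one by one.
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, c := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if c == name {
			return true, nil
		}
	}
	return false, nil
}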

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-443419 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-443419

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-443419

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-443419

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-443419

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-443419

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-443419

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-443419

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-443419

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-443419

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-443419

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-443419

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-443419" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-443419

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-443419

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-443419

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-443419

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-443419" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-443419" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 28 Mar 2024 00:45:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.72.11:8443
  name: cert-expiration-927384
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18485-1069254/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 28 Mar 2024 00:45:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.39.231:8443
  name: running-upgrade-642721
contexts:
- context:
    cluster: cert-expiration-927384
    extensions:
    - extension:
        last-update: Thu, 28 Mar 2024 00:45:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: context_info
    namespace: default
    user: cert-expiration-927384
  name: cert-expiration-927384
- context:
    cluster: running-upgrade-642721
    extensions:
    - extension:
        last-update: Thu, 28 Mar 2024 00:45:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: context_info
    namespace: default
    user: running-upgrade-642721
  name: running-upgrade-642721
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-927384
  user:
    client-certificate: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/client.crt
    client-key: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/cert-expiration-927384/client.key
- name: running-upgrade-642721
  user:
    client-certificate: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/running-upgrade-642721/client.crt
    client-key: /home/jenkins/minikube-integration/18485-1069254/.minikube/profiles/running-upgrade-642721/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-443419

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-443419" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443419"

                                                
                                                
----------------------- debugLogs end: cilium-443419 [took: 4.371515122s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-443419" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-443419
--- SKIP: TestNetworkPlugins/group/cilium (4.53s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-782067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-782067
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    